I feel like that was super common. Apart from changing the volumes of entire channels (e.g. changing the level of Line In vs. digital sound), volume was a relatively “global” thing.
And I’m not sure if that was still the case in 1997, but most likely changing the volume of digital sound meant the CPU had to process the samples in realtime. Now on one hand, that’s probably dwarfed by what the CPU had to do for decompressing the video. On the other hand, if you’re already starved for CPU time…
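As a rough illustration of what “processing the samples in realtime” means here, a minimal sketch of software volume scaling over signed 16-bit PCM (the buffer contents and gain value are made up):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Scale signed 16-bit PCM samples by a fixed-point gain.
     * gain_q8 is an 8.8 fixed-point factor: 256 == unity gain. */
    static void scale_pcm16(int16_t *samples, size_t count, int32_t gain_q8)
    {
        for (size_t i = 0; i < count; i++) {
            int32_t v = ((int32_t)samples[i] * gain_q8) >> 8;
            /* Clamp to the 16-bit range to avoid wrap-around distortion. */
            if (v > INT16_MAX) v = INT16_MAX;
            if (v < INT16_MIN) v = INT16_MIN;
            samples[i] = (int16_t)v;
        }
    }

    int main(void)
    {
        int16_t buf[4] = { 1000, -1000, 30000, -30000 }; /* made-up samples */
        scale_pcm16(buf, 4, 128); /* 128/256 = half volume */
        for (int i = 0; i < 4; i++)
            printf("%d\n", buf[i]);
        return 0;
    }

At CD quality (44.1 kHz stereo) that’s roughly 88,000 multiplies per second — nothing today, but not free on a mid-90s CPU that is also busy decoding video.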
I mentioned this in another thread, but it was definitely noteworthy to me that it did this, since I was used to other programs not doing so, Winamp for example. I would also have thought Windows' Media Player didn't do this, but I can't remember for certain.
Winamp had a software equalizer with a preamp, which was noteworthy. Are you sure changing the volume did not mean changing the preamp level in Winamp?
If you turned off the preamp (could be directly done in the EQ window I think), what did the volume control actually do?
Maybe we're not understanding each other correctly here.
That was 30 years ago now, but my recollection is that Winamp did not change Windows' global volume.
I am less certain, but I thought Windows' own Media Player similarly did not change Windows' global volume.
What I definitely recall is being surprised that Real Player would change Windows' global volume, and it would not have been so noteworthy to me unless it was unusual compared to the other applications I typically used.
No, I get you. I'm stating that Winamp might have been "special" because it had a software equalizer, and its volume control might have actually changed the preamp level. That would be fairly unusual for other apps of its time. I also wondered what would happen if you turned the preamp off with its big shiny button: whether that would let the volume control change the global volume instead, or whether it would disable the volume control entirely.
What I'm saying is: I still feel (perhaps wrongly, quite possibly so) that in 1997, changing the global volume was more common, and that even being able to change app-specific volumes required some non-trivial functionality in the app doing so.
Side note: virtual 8086 mode was protected mode, or rather, it implied protected mode. A task could run in virtual 8086 mode where, to the task, it (mostly) looked like it was running in real mode, while in actuality the kernel was running in full protected mode.
Note that the "kernel" was never DOS. It could often actually be a so called "memory manager", like EMM386, and the actual DOS OS (the entire thing, including apps, not just the DOS "kernel") would run as a sole vm86 task, without any other tasks. The memory manager was then serving DOS with a lot of the 386 32 bit goodness through a straw, effectively.
It's very bizarre by today's (or even back then's) OS standards, and it evolved that way because of compatibility.
The virtualization itself is not the bizarre part. The bizarre part is where the actual OS is 16 bit and runs as the singular "task" of a thin 32 bit layer that merely calls itself a "memory manager". The details of that machinery (segmentation, DPMI, ...) are quite a sight to behold. And it's all because of how PCs evolved at that time, and because we needed to keep running DOS and still wanted to make use of all the extra memory that wouldn't fit into its address space.
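For a taste of what that machinery looked like from the application side, here is a minimal sketch of a 32-bit protected-mode DOS program asking the DPMI host to reflect an interrupt down to the 16-bit DOS underneath, assuming DJGPP and its <dpmi.h> wrappers (the specific call is just an example):

    #include <stdio.h>
    #include <string.h>
    #include <dpmi.h>   /* DJGPP's DPMI wrappers */

    int main(void)
    {
        __dpmi_regs r;

        /* INT 21h, AH=30h: "get DOS version". Our code runs in 32-bit
         * protected mode; the DPMI host drops back to real / virtual 8086
         * mode, runs the 16-bit DOS handler, and copies the registers
         * back up to us. */
        memset(&r, 0, sizeof r);   /* zero ss:sp so the host supplies a stack */
        r.h.ah = 0x30;
        __dpmi_int(0x21, &r);

        printf("DOS reports version %d.%d\n", r.h.al, r.h.ah);
        return 0;
    }

Every one of those round trips crosses the "straw" mentioned above: the 32-bit world can only talk to DOS by having the memory manager / DPMI host shuttle register state and buffers between the two sides.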
I'm getting tired of typing this, but swap space is not just to increase available virtual memory. If you upgrade from 8 GB to 24 GB, then with proper swap space usage, you have 16 GB that could be used for additional disk cache.
Sure, you're still better off with 24 GB overall compared to 8 GB + swap, whether you add swap to your 24 GB or not, but swap can still make things better.
(That says nothing about whether the 2x rule is still useful though, I have no idea.)
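A rough way to see the trade-off being described is to look at how much RAM currently sits in the page cache versus in anonymous pages, and how much swap is actually in use. A minimal sketch, assuming a Linux system and the usual /proc/meminfo field names:

    #include <stdio.h>
    #include <string.h>

    /* Print a few /proc/meminfo fields relevant to the cache-vs-anonymous
     * trade-off. Values are in kB, as reported by the kernel. */
    int main(void)
    {
        const char *fields[] = { "MemTotal:", "MemAvailable:", "Cached:",
                                 "AnonPages:", "SwapTotal:", "SwapFree:" };
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("/proc/meminfo"); return 1; }

        while (fgets(line, sizeof line, f)) {
            for (size_t i = 0; i < sizeof fields / sizeof fields[0]; i++) {
                if (strncmp(line, fields[i], strlen(fields[i])) == 0)
                    fputs(line, stdout);
            }
        }
        fclose(f);
        return 0;
    }

If AnonPages is large but mostly idle and SwapTotal is 0, all of that anonymous memory stays resident no matter what, and the page cache has to shrink around it.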
There's a chance that those servers might run more efficiently with some swap space, for the reasons mentioned many times in this thread. Swap space is not just for overcommitting.
The theories are repeated often, but I have never seen any empirical data to back them up, assuming one is setting the options I mentioned. These anecdotes usually come from servers with default settings, no attempt to tune them for the intended workloads, and no capacity planning for application resources. Even OS maintainers are starting to recognize this and have created daemons such as tuned for the people that never touch settings. The next evolution will be dynamic adjustments from continuous bpf traces. I just keep it simple and avoid the circular arguments altogether.
Oh sure, it might or might not make a significant difference at all. Chances are, if you do a lot of I/O on a large (or very large) amount of data, and you also have a lot of rarely used but resident anonymous memory, then swap space should help, as that anonymous memory can get paged out in favor of disk cache, but I have no idea how common that is.
Yeah, I know what you mean, but this is where it gets into circular reasoning. I will always have operations groups move the workload to a node that has more memory if that is what is needed. In my case, having swap on disk would require it to be encrypted, due to contracts requiring any customer data touching a disk to be encrypted, but I avoid that altogether and just add more memory. If 2 TB of RAM isn't enough then they get 3 TB, and so on. We pushed vendors and OEMs to grow their motherboard capacity. At some point application groups just get more servers.
As has been mentioned a few times in other comments here, I don't believe that's correct. Swap space is not just for "using more memory than you have RAM".
I'm not an expert, but aren't you just reducing the choice of what pages can be offloaded from RAM? Without swap space, only file-backed pages can be written out to reclaim RAM for other uses (including caching). With swap space, rarely used anonymous memory can be written out as well.
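For what it's worth, those two kinds of pages correspond directly to the two flavors of mmap(). A minimal sketch (Linux/POSIX; the file path is only an example):

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 4096 * 1024; /* 4 MiB */

        /* Anonymous memory: no file behind it. Without swap, once touched,
         * these pages can't be reclaimed; they stay resident. */
        char *anon = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        /* File-backed memory: the kernel can always drop clean pages and
         * re-read them from the file later, swap or no swap. */
        int fd = open("/etc/services", O_RDONLY);  /* just an example file */
        char *filemap = (fd >= 0)
            ? mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0)
            : MAP_FAILED;

        if (anon != MAP_FAILED)
            memset(anon, 0xAA, len);   /* touch it so it becomes resident */
        if (filemap != MAP_FAILED)
            printf("first byte of file mapping: %d\n", filemap[0]);

        return 0;
    }

Swap gives the kernel the option to treat the idle parts of the first mapping the way it already treats the second one.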
Swap space is not just for overcommitting memory (in fact, I suspect nowadays it rarely ever is), but also for improving performance by maximizing efficient usage of RAM.
With 48GB, you're probably fine, but run a few VMs or large programs, and you're backing your kernel into a corner in terms of making RAM available for efficient caching.
I don't think that's correct. Having swap still allows you to page out rarely-used pages from RAM and lets that RAM be used for things that positively impact performance, like caching actually-used filesystem objects. Pages that are backed by disk (e.g. files) don't need swap for that, but anonymous memory that e.g. has only been touched once and never even read afterwards should have a place to go as well. Also, without swap space the kernel can only reclaim file-backed pages, instead of including anonymous memory in that choice.
For that reason, I always set up swap space.
Nowadays, some systems also have compression in the virtual memory layer, i.e. rarely used pages get compressed in RAM to use up less space there, without necessarily being paged out (= written to swap). Note that I don't know much about modern virtual memory and how exactly compression interacts with paging out.
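On Linux that compression layer is zswap: compressed pages are kept in a pool in RAM in front of the actual swap device and only written out to it when the pool fills up (zram is the variant that needs no backing device at all). A minimal sketch for checking how it's configured, assuming the usual sysfs paths:

    #include <stdio.h>

    /* Print zswap's module parameters, if the kernel exposes them.
     * Paths assume a stock Linux kernel with zswap built in. */
    static void show(const char *name)
    {
        char path[128], buf[128];
        snprintf(path, sizeof path, "/sys/module/zswap/parameters/%s", name);
        FILE *f = fopen(path, "r");
        if (!f) { printf("%-18s (not available)\n", name); return; }
        if (fgets(buf, sizeof buf, f))
            printf("%-18s %s", name, buf);
        fclose(f);
    }

    int main(void)
    {
        show("enabled");
        show("compressor");
        show("zpool");
        show("max_pool_percent");
        return 0;
    }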
Every time I've run out of physical memory on Linux, I've had to just reboot the machine, being unable to issue any kind of commands via input devices. I don't know what it is, but Linux just doesn't seem to be able to deal with that situation cleanly.
The mentioned situation is not running out of memory, but being able to use memory more efficiently.
Running out of memory is a hard problem, because in some ways we still assume that computers are Turing machines with an infinite tape. (And in some ways, theoretically, we have to.) But it's not at all clear which memory to free up (by killing processes).
If you are lucky, there's one giant process with tens of GB of resident memory that you can kill to put your system back into a usable state, but that's not the only case.
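For what it's worth, this is essentially the decision the Linux OOM killer has to make: it ranks processes by a "badness" score that you can inspect and bias per process. A minimal sketch that only reads those numbers for the current process (assuming the standard /proc interface):

    #include <stdio.h>

    /* Print the kernel's OOM "badness" score for this process and the
     * user-set bias. The OOM killer picks the highest-scoring victim. */
    static void dump(const char *path)
    {
        char buf[64];
        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return; }
        if (fgets(buf, sizeof buf, f))
            printf("%-26s %s", path, buf);
        fclose(f);
    }

    int main(void)
    {
        dump("/proc/self/oom_score");      /* computed by the kernel */
        dump("/proc/self/oom_score_adj");  /* -1000..1000 bias, user-settable */
        return 0;
    }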
Windows doesn't do that, though. If a process starts thrashing the performance goes to shit, but you can still operate the machine to kill it manually. Linux though? Utterly impossible. Usually even the desktop environment dies and I'm left with a blinking cursor.
What good is it to get marginally better performance under low memory pressure at the cost of having to reboot the machine under extremely high memory pressure?
In my experience the situations where you run into thrashing are rather rare nowadays. I personally wouldn't give up a good optimization for the rare worst case. (There's probably some knobs to turn as well, but I haven't had the need to figure that out.)
I believe it's not very hard to intentionally get into that situation, but... if you notice it doesn't work, won't you just not do it? (It's not as if this would work without swap, after all; you'd just get an OOM kill without the thrashing pain.)
I don't intentionally configure crash-prone VMs. I have multiple concerns to juggle and can't always predict with certainty the best memory configuration. My point is that Linux should be able to deal with this situation without shitting the bed. It sucks to have some unsaved work in one window while another has decided that now would be a good time to turn the computer unusable. Like I said before, trading instability for marginal performance gains is foolish.
That only helps if you don't have much free RAM. If you've got more free RAM than you need for cache (including disk cache), swap only slows things down. With RAM prices these days, getting enough RAM to avoid swap is not worth it. IME on a desktop with 128 GiB of RAM & Zswap I've never hit the backing store, but I have gone over 64 GiB a few times. I wouldn't want to have to pay to rebuild my desktop these days; 128 GiB of ECC RAM was pricey enough in 2023!
Was it ever confirmed that it was in fact a laser? I wanted to make a trivia question out of this ProLok protection, because “lasers for copy protection” sounds just weird enough to potentially be a nonsense answer without context, but I couldn’t confirm that the holes were indeed made with lasers, and not with other means.
Good question. I don't know the answer, but I'm quite certain that it didn't really matter what mechanism was used to mark a diskette. Any damage would be equally strong as a way to detect copying.
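For context, the usual way such deliberate damage is exploited by a protection check is not to read anything from the damaged spot, but to try writing there: on the original the write can never stick, while on a copy made onto healthy media it succeeds, which gives the copy away. A rough sketch of that logic only (device path, sector offset, and pattern are purely made up; real schemes worked at the raw controller/track level):

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define SECTOR 512

    int main(void)
    {
        /* Illustrative only: the device path and the offset of the
         * "damaged" sector are made up. */
        const char *dev = "/dev/fd0";
        off_t damaged = 17 * SECTOR;
        unsigned char pattern[SECTOR], readback[SECTOR];
        memset(pattern, 0x5A, sizeof pattern);

        int fd = open(dev, O_RDWR);
        if (fd < 0) { perror(dev); return 1; }

        /* Try to write into the spot that is supposed to be damaged... */
        ssize_t w = pwrite(fd, pattern, SECTOR, damaged);
        /* ...and read it back to see whether the data survived. */
        ssize_t r = pread(fd, readback, SECTOR, damaged);

        if (w == SECTOR && r == SECTOR && memcmp(pattern, readback, SECTOR) == 0)
            printf("write succeeded and verified: looks like a copy\n");
        else
            printf("write failed or did not verify: looks like the original\n");

        close(fd);
        return 0;
    }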
I would guess that damaging multiple floppy disks in (more or less) the same way would be easier with a laser than with something mechanical (e.g. a knife or a drill), since it is fairly easy to control the power and duration of a burn, so it might well have been a laser.
On the other hand, disk tracks weren’t exactly tiny at that time in history.
It could be a tiny drop of something corrosive, but with that I’m also still wondering if a laser isn’t simpler, yeah.
I have almost no doubt that it could be a laser; it’s just unfortunate (and maybe a little bit suspicious) that I haven’t found it confirmed anywhere. Almost like they wanted it to be a laser (hence the folklore around it), but had to use a less cool method to do it. But of course it might as well just have been a laser, and they declined to market or even just document it that way, for whatever reason.