What's wrong with using 32 MiB of memory? That's a relatively small amount, especially if you're trying to patch a 225 MB DLL.
And I would argue that the 32 MiB is worse than arbitrary - it is pointless. I have been thinking about this for a while and I cannot think of a situation where it makes things better. It wastes (a lot of) CPU, and I claim that it doesn't actually save memory compared to letting the balance-set manager do the trimming.
Low CPU priority and low IO priority are important for being a polite background process. Not using too much memory is important. But a working-set cap fails to achieve that last goal.
In what scenario would a 32 MiB working-set limit, enforced when pages are swapped in, be more effective at saving memory than a similar limit enforced by the once-per-second balance-set manager?
Obviously trimming the working set at swap time does a better job of reducing the working set, but that's not what matters. What matters is saving memory. If you are trying to save memory then trimming once a second is just as good as, and far more efficient than, trimming on every page fault.
But, trimming at all on a 64 GiB machine with 47 GiB free is just silly. It doesn't make sense to spend lots of an expensive resource (CPU time == electricity == battery life) in order to save a resource which you aren't even fully using.
I think what mattered to the people who designed this was not creating memory pressure for foreground applications. Anyone who ever tried using a WinXP machine right after boot knows it needed a couple of minutes just to sort itself out, with the many auto-starting processes waking up to a big pile of new "background" work and fighting over the disk and memory, and I think that is what they were trying to repair with background mode. Windows has a long history of getting a bad reputation from buggy third-party pre-installed software, so they built a rather strict performance-isolation sandbox to address that.
Something that only runs once per second is not going to be able to keep up with a process that is dirtying memory at full tilt, which is probably why they chose to enforce a hard limit by evicting the least-used pages to the standby list, from where one can usually get them back quickly without encountering a hard fault to disk (a soft fault was probably something like 10,000x faster than a hard fault back when they did this work). They must have viewed thrashing as a pathological case that programmers would diagnose and repair, just like you have. After all, they do provide some pretty good tools for diagnosing memory use.
You may argue that the 32 MiB limit needs adjusting to follow Moore's law, but Moore's law stopped working for laptop DRAM sizes many years ago.
That all makes sense (although I think this feature appeared in Windows 7 rather than XP), and I certainly understand the value of a background mode.
But does the working-set cap _work_? In that scenario where you've got heavy memory pressure I still think it doesn't.
If the background process is typically touching less than 32 MiB in a second then per-second trimming could reduce the working set just as effectively.
If the background process is typically touching more than 32 MiB in a second then the fault-time trimming doesn't work: it removes pages from the working set, but there is no time for them to be paged out, and if they did get paged out that would make things even worse because they would need to be paged back in. So, CPU (and perhaps disk) overhead is increased, but memory pressure remains the same.
The problem (a process touching too much memory) is real. However the solution does not work. A working-set cap doesn't actually reduce memory pressure on foreground applications any more than a per-second trim.
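The scenario above can be sketched with a toy simulation. This is a hypothetical model, not the real Windows memory manager: it treats the working set as an LRU-managed set of pages with a hard cap, and counts the soft faults caused by enforcing the cap at fault time. The function name `soft_faults` and the page counts are illustrative. When the process's touch footprint fits under the cap there is only the initial cold-start cost, but as soon as the footprint exceeds the cap, a cyclic access pattern faults on every single touch while the memory demand is unchanged:

```python
from collections import OrderedDict

def soft_faults(touches, cap):
    """Count page faults for an LRU-managed working set with a hard cap.

    touches: sequence of page numbers accessed, in order.
    cap: maximum number of pages allowed in the working set.
    """
    ws = OrderedDict()  # page -> None, ordered least- to most-recently used
    faults = 0
    for page in touches:
        if page in ws:
            ws.move_to_end(page)       # hit: mark as most recently used
        else:
            faults += 1                # miss: fault the page in
            ws[page] = None
            if len(ws) > cap:
                ws.popitem(last=False)  # cap exceeded: evict the LRU page
    return faults

# A process cycling over 8192 pages (32 MiB of 4 KiB pages), 10 passes:
touches = list(range(8192)) * 10
print(soft_faults(touches, cap=8192))  # cap >= footprint: 8192 faults (first pass only)
print(soft_faults(touches, cap=4096))  # cap < footprint: all 81920 touches fault
```

The second case is the point of the argument: halving the cap multiplies the fault count by ten here without freeing any memory the process isn't about to touch again.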
Yes I meant that XP had the problem, and that Win7 tried to remedy it.
You are overlooking that the read-only part of the working set can be released immediately once another process needs those pages -- just zero them and add them to the free list. Only the dirty pages need to get written out to disk.
Anyway, it would actually be simple to test the effectiveness of the working set cap by writing a program that allocs and dirties memory aggressively, and then running it either in normal low priority or background modes, to see how it affects overall system behavior when running in each mode.
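A minimal sketch of such a test program might look like the following. The dirtying loop is portable; the background-mode switch uses the documented `SetPriorityClass` call with `PROCESS_MODE_BACKGROUND_BEGIN` and only takes effect on Windows (everywhere else it is a no-op). Buffer size and pass count are arbitrary choices for illustration:

```python
import ctypes
import sys
import time

PAGE = 4096
MIB = 1 << 20
PROCESS_MODE_BACKGROUND_BEGIN = 0x00100000  # enables low priority + working-set cap

def enter_background_mode():
    """Switch the current process into background mode (Windows only)."""
    if sys.platform == "win32":
        k32 = ctypes.windll.kernel32
        k32.SetPriorityClass(k32.GetCurrentProcess(),
                             PROCESS_MODE_BACKGROUND_BEGIN)

def dirty_loop(buf_mib=64, passes=2):
    """Allocate buf_mib MiB and dirty one byte per page on each pass,
    returning the wall time of each pass."""
    buf = bytearray(buf_mib * MIB)
    times = []
    for _ in range(passes):
        start = time.perf_counter()
        for off in range(0, len(buf), PAGE):
            buf[off] = (buf[off] + 1) & 0xFF  # write one byte to dirty the page
        times.append(time.perf_counter() - start)
    return times

# Run once normally and once after enter_background_mode(), then compare
# per-pass times and the system's responsiveness while it runs:
print(dirty_loop())
```

Running this with and without the `enter_background_mode()` call, while watching working-set size and fault counts in Task Manager or Performance Monitor, would show both the CPU-time penalty and whether foreground memory pressure actually changes.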
My test program already proves that if you alloc-and-dirty memory aggressively then in background mode you will consume much more CPU time. If your metric is "interfering with foreground processes" then this is fine, but I think the slowdown is severe enough to matter.
As for saving memory, I see your point, but I still don't see how the cap would be better than per-second trimming. If the clean-then-zeroed page is not touched again by the background process then either method would make it available. If the clean-then-zeroed page is touched again by the background process then the whole cycle of removing it from the working set, zeroing it, and then faulting it back in is a waste of time. So, again, I'm struggling to find a scenario where the cap is more effective than once-per-second trimming.