
Not only GC parameter tuning, _all_ parameter tuning. In many cases, technologies have matured enough that you can build a mental model of them that fits reality both now and in a couple of years' time, but that is never guaranteed. For example, in the small:

- the performance of a bit shift used to be dependent on the number of bits shifted (https://wiki.neogeodev.org/index.php?title=68k_instructions_...). Timing of integer and floating point multiplication was dependent on argument values.

- on PowerPC, for at least some CPUs, it could be worthwhile to use floating point variables to iterate over an integer range, because that kept the integer pipeline free for the actual work (rough sketch below). Change the CPU, and you have to change the type of your loop variable.
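Roughly, the trick looked like this (untested sketch; do_integer_work is just a placeholder for the real loop body):

    /* Drive the loop with a double so the loop-control arithmetic runs in
       the FPU, keeping the integer units free for the body.  Only ever a
       win on certain PowerPC cores -- on most CPUs it's a pessimization. */
    extern void do_integer_work(int i);   /* placeholder for the real work */

    void process(int n) {
        for (double i = 0.0; i < (double)n; i += 1.0)
            do_integer_work((int)i);
    }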

In the large:

- the optimal structure for a lot of data-crunching code is hugely dependent on the sizes and relative speeds of the caches, main memory, and disk.

- if you swap in another C library, performance (for example of scanf or transcendental functions) can be hugely different.

- upgrading the OS may significantly change the relative timings of thread and process creation, changing the best way to solve your problem from multi-process to multi-threaded or vice versa (see the sketch after this list).
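For the last point, the kind of measurement that shifts under you is trivial to write; a rough POSIX sketch (no error handling), comparing the creation cost of a no-op thread with that of a fork-and-exit child:

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    static void *noop(void *arg) { return arg; }

    static double elapsed(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void) {
        enum { N = 1000 };
        struct timespec t0, t1;

        /* cost of creating and joining N no-op threads */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            pthread_t t;
            pthread_create(&t, NULL, noop, NULL);
            pthread_join(t, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("threads:   %.3f s\n", elapsed(t0, t1));

        /* cost of forking and reaping N do-nothing children */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            pid_t p = fork();
            if (p == 0) _exit(0);        /* child exits immediately */
            waitpid(p, NULL, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("processes: %.3f s\n", elapsed(t0, t1));
        return 0;
    }

Compile with -pthread; the absolute numbers matter much less than how the ratio moves between OS versions.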

So yes, GC parameter tuning is a black art that ideally should eventually go away, but the difference from other technologies is only one of degree, and, at least, you can tune it without recompiling your code.

I also think GC is getting mature enough for fairly reliable mental models to form. The throughput/latency tradeoff that this article mentions is an important aspect; minimizing total memory usage may be another.
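On the JVM, for instance, that tradeoff surfaces directly as HotSpot flags (the flag names below are real; the values and app.jar are made-up placeholders):

    # G1: -XX:MaxGCPauseMillis asks for shorter pauses at the cost of
    # throughput; -XX:GCTimeRatio asks to spend at most ~1/(1+N) of total
    # time in GC; -Xmx caps the heap, trading memory for GC frequency.
    java -Xms2g -Xmx2g -XX:+UseG1GC \
         -XX:MaxGCPauseMillis=50 \
         -XX:GCTimeRatio=19 \
         -jar app.jar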




Another example: the pile of switches usually available in C and C++ compilers.
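For instance (real gcc flags, made-up file names), two equally plausible piles whose relative performance depends on the exact compiler version and target machine:

    # note: -ffast-math also changes floating-point semantics, not just speed
    gcc -O2 -o app main.c
    gcc -O3 -march=native -flto -funroll-loops -ffast-math -o app main.c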



