
Based on this, the default thread stack size is 2MB: https://unix.stackexchange.com/questions/127602/default-stac...

So for 1 million threads that would mean a lot of memory: 2TB of RAM. But you can change the default. With a 64KB stack you'd use up ~68GB of RAM, which doesn't seem like a lot for 1 million threads handling 1 million simultaneous requests.
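
For the curious, here's a minimal Rust sketch of spawning a thread with a smaller stack via the standard library's thread::Builder; the 64KB figure and the closure body are just illustrative:

    use std::thread;

    fn main() {
        // Back of the envelope: 2^20 threads x 64 KiB = 2^36 bytes ~= 68.7 GB,
        // vs. ~2 TiB at a default of 2 MiB per stack.
        let handle = thread::Builder::new()
            .stack_size(64 * 1024) // 64 KiB instead of the platform default
            .spawn(|| {
                // With a stack this small, deep recursion or large
                // stack-allocated buffers will overflow it.
                println!("running on a 64 KiB stack");
            })
            .expect("failed to spawn thread");
        handle.join().unwrap();
    }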



Also worth noting that the entire stack isn't allocated at once, so 1 million threads would be using 2TB/68GB of virtual address space, not 2TB/68GB of physical memory.
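
You can see the difference yourself with a minimal Linux-only sketch that prints this process's virtual size (VmSize) versus resident size (VmRSS) from /proc/self/status:

    use std::fs;

    fn main() {
        // VmSize counts reserved virtual address space; VmRSS counts the
        // physical pages actually touched. Untouched stack pages show up
        // in the former but not the latter.
        let status = fs::read_to_string("/proc/self/status")
            .expect("requires Linux procfs");
        for line in status.lines() {
            if line.starts_with("VmSize") || line.starts_with("VmRSS") {
                println!("{line}");
            }
        }
    }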


That is indeed a very important fact to keep in mind! Thread stack sizes have been a problem on 32-bit systems, where you quickly run out of virtual memory because the address space is not large enough: with roughly 3GB of user address space, 2MB stacks cap you at around 1,500 threads. With 64-bit that is not a problem anymore.


That is also the maximum stack size, which shouldn't normally be reached. You'll have to be careful about how you use memory when you're handling a million clients, whether async or threaded.


The point is that each stack needs to be big enough for the worst case, which means it doesn't really scale to start many thousands of threads. The futures used in async code, on the other hand, can be kept relatively small, as they only need to contain the state that is live while awaiting.
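
A rough way to see this in Rust (the async fn here is a stand-in, and the exact byte count depends on the compiler): an unpolled future is just a value whose size you can measure, and it's typically a handful of bytes rather than a worst-case-sized stack:

    async fn handle_request() -> u32 {
        // Only state that is live across an .await point has to be
        // stored inside the future itself.
        let id: u32 = 42;
        id
    }

    fn main() {
        let fut = handle_request();
        // Prints a few bytes, vs. kilobytes/megabytes per thread stack.
        println!("future size: {} bytes", std::mem::size_of_val(&fut));
    }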


In theory, a smart enough OS (or runtime) should be able to reclaim any memory beyond the stack pointer (plus the red zone) at any time without preserving its contents, shrinking the stack back down. Because of signal handlers, that memory has to be considered volatile anyway.

It might not be worth doing in practice, but it is something to keep in mind.
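
A hypothetical sketch of what that reclamation could look like, using the libc crate's mmap/madvise bindings. A real runtime would apply this to the unused, page-aligned region below the stack pointer; here an anonymous mapping stands in for the stack:

    use libc::{
        madvise, mmap, munmap, MADV_DONTNEED, MAP_ANONYMOUS, MAP_FAILED,
        MAP_PRIVATE, PROT_READ, PROT_WRITE,
    };
    use std::ptr;

    fn main() {
        let len = 2 * 1024 * 1024; // stand-in for a 2 MiB thread stack
        unsafe {
            // Reserve address space the way a thread stack is reserved.
            let addr = mmap(ptr::null_mut(), len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            assert!(addr != MAP_FAILED);

            // Touch a page so it becomes resident, as a deep call chain would.
            *(addr as *mut u8) = 1;

            // "Shrink" the stack: tell the kernel the contents are disposable.
            // The next touch faults in fresh zero pages, so nothing is
            // preserved, which is exactly the volatility noted above.
            assert_eq!(madvise(addr, len, MADV_DONTNEED), 0);

            munmap(addr, len);
        }
    }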


What "normal-sized Linux server" has 70GB of RAM?


One that wants to handle a million requests per second?

Or would you want to do that with a Raspberry Pi? :-)

> What "normal-sized Linux server" has 70GB of RAM?

Also, why are you "quoting" what I did not say?


I was quoting the parent comment by @akvadrako :)


Air quotes.


Most servers support at least 128GB, and it isn't even very expensive. And if you want to handle a million concurrent users you also need to consider CPU and latency, so for most real-world workloads memory probably won't even be your bottleneck.


How much does 68GB cost in the cloud per day? Also, you don't have 1 million cores, so quite a bit of your daily server cost will be eaten up by the OS running context-switching code.



