If you can scale your system using only 0.5 GB per node, you get more CPU per dollar, since the $5 and $10 levels both have 1 CPU. Higher levels seem to be multiples of the $10 level. Does anyone have experience with this in a production system with a lot of users? Are there horizontally scalable database systems that work well across many nodes with only 512 MB each?
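A quick back-of-the-envelope sketch of the "more CPU per dollar" point. The RAM figures and the $20 tier below are assumptions inferred from the pricing described above, not confirmed specs:

    # Back-of-the-envelope only; the tier specs are assumptions, not confirmed pricing.
    tiers = {
        "$5":  {"price": 5,  "cpus": 1, "ram_gb": 0.5},
        "$10": {"price": 10, "cpus": 1, "ram_gb": 1.0},
        "$20": {"price": 20, "cpus": 2, "ram_gb": 2.0},  # higher tiers look like multiples of the $10 one
    }
    budget = 100  # dollars per month
    for name, t in tiers.items():
        nodes = budget // t["price"]
        print(f"{name}: {nodes} nodes -> {nodes * t['cpus']} CPUs, {nodes * t['ram_gb']:.1f} GB RAM")
    # $5:  20 nodes -> 20 CPUs, 10.0 GB RAM
    # $10: 10 nodes -> 10 CPUs, 10.0 GB RAM
    # $20:  5 nodes -> 10 CPUs, 10.0 GB RAM

With these assumed specs, the same budget buys twice the CPUs on the $5 tier, but the same total RAM spread over twice as many nodes.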


Most database systems are tremendously helped by having far more RAM than that for caching, and usually also by having fewer nodes than you end up with when each node has only 20 GB of disk.

The main place I could see this being appealing is complex analytic jobs (since they're more CPU-intensive). But even then, if the 512 MB nodes force the job to spill to disk because there isn't enough RAM (or to ship a bunch of data across the network because the job was split over two nodes) while a bigger machine wouldn't have needed to, you'd probably have been better off with the bigger machine.
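As a toy illustration of that spill/shuffle point (the node sizes, overhead, and working-set figure below are made-up assumptions, not measurements): splitting a job across small nodes adds cross-node transfer, and any partition that outgrows per-node RAM spills to disk, while a single machine with enough RAM pays neither cost.

    # Toy model only; every constant here is an illustrative assumption.
    def extra_io_gb(working_set_gb, nodes, ram_per_node_gb, overhead_gb=0.2):
        """Rough extra data movement for an analytic job split across `nodes`."""
        usable = ram_per_node_gb - overhead_gb          # RAM left after OS/DB overhead
        partition = working_set_gb / nodes
        spill = max(0.0, partition - usable) * nodes    # per-node spill to disk
        shuffle = working_set_gb * (nodes - 1) / nodes  # data crossing the network
        return spill + shuffle

    print(extra_io_gb(3.0, nodes=8, ram_per_node_gb=0.5))  # ~3.2 GB of extra disk/network I/O
    print(extra_io_gb(3.0, nodes=1, ram_per_node_gb=4.0))  # 0.0 -- fits in RAM on one node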



