
A few people are debating the numbers, but I think there's a much more important message here. Consumers rarely factor in the total cost of ownership (TCO) of a computer, and only look at the retail price.

I used to think this way, particularly because I didn't have much money, so I needed to buy the cheapest computer I could at a given time, resulting in paying more over the lifetime of the computer (which might not be that long).

It was only once I began working with data centers that I realized how large the power and cooling bills could be. In a datacenter, the power and cooling costs of a high-end system over its lifetime can easily exceed the cost of buying the system in the first place. A good example is Western Digital's new "Green" 2TB drives. The drive costs more per GB up front than two 1TB drives, even when you factor in the cost of housing them; but once you account for the power the drives draw (and hard drives really do pull a lot of energy), the 2TB drives can come out cheaper. The same analysis applies to SSDs.
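To make the comparison concrete, here's a rough TCO sketch. All the figures (prices, wattages, electricity rate, cooling overhead) are made-up illustrative assumptions, not WD's actual specs:

```python
# Hedged sketch: every number here is an assumption for illustration,
# not a real price or power rating.
def drive_tco(price_usd, watts, years=3, usd_per_kwh=0.12, cooling_overhead=1.5):
    """Purchase price plus lifetime energy cost.
    cooling_overhead approximates the extra cooling cost per watt drawn."""
    hours = years * 365 * 24
    energy_cost = watts * cooling_overhead * hours / 1000 * usd_per_kwh
    return price_usd + energy_cost

# One low-power 2TB "green" drive vs. two ordinary 1TB drives (assumed figures):
one_green_2tb = drive_tco(price_usd=280, watts=5)
two_std_1tb = 2 * drive_tco(price_usd=110, watts=10)

print(f"2TB green drive TCO: ${one_green_2tb:.0f}")   # higher sticker price...
print(f"two 1TB drives TCO:  ${two_std_1tb:.0f}")     # ...but more lifetime cost
```

With these particular assumptions the single 2TB drive is more expensive to buy ($280 vs $220) but cheaper over three years once power and cooling are counted; the crossover obviously depends on your electricity rate and how long the drives stay in service.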

If you want to be smart with your money, and have more in the long run, don't forget to account for ALL the costs you'll incur when compared to another solution. This applies to just about everything, not just computers.



Saving on power in the datacenter is one of the worst ways to try to save money, in my experience.

It varies greatly, but $10 million worth of servers, network gear, and storage might cost $25k/mo in power. Shaving 5% or 10% off that bill is practically meaningless. You can save a lot more money in other ways.

Obviously at massive scale (think Google) it's a different story, but for the average small-medium company it's not even worth thinking about until you've gone after everything else.


While I agree that at those numbers a 5% to 10% saving isn't worth the effort, that's not a good reason to dismiss reducing data center power altogether. Nissan recently started rolling out virtualization in some of their datacenters, resulting in 34% energy savings. At the kind of numbers you list, that would come to over $100k of savings a year.


I found the press release you were referencing: http://www.informationweek.com/news/software/server_virtuali...

It proves exactly my point: focusing on power usage is backwards. You should worry about other things, like using fewer servers.

The guy who worries about using fewer servers can consolidate 159 servers down to 28. He saves $5 million and $10k/mo. The guy who worries about power usage itself just buys 159 slightly less power-hungry servers. He saves $2k/mo.
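The gap between the two strategies is easy to see in back-of-the-envelope form. The per-server figures below are assumptions loosely scaled from the thread's numbers, not real quotes:

```python
# Hedged sketch: per-server cost and power figures are assumptions
# for illustration, roughly scaled from the numbers in this thread.
MONTHLY_POWER_PER_SERVER = 160   # assumed $/mo for power + cooling per box
SERVER_CAPEX = 40_000            # assumed purchase price of one server

def consolidation_savings(before=159, after=28, months=36):
    """Virtualize: fewer boxes saves their capex AND their entire power draw."""
    retired = before - after
    return retired * SERVER_CAPEX + retired * MONTHLY_POWER_PER_SERVER * months

def efficiency_savings(servers=159, pct=0.10, months=36):
    """Keep the same fleet, but each server draws ~10% less power."""
    return servers * MONTHLY_POWER_PER_SERVER * pct * months

print(consolidation_savings())  # capex plus power savings over three years
print(efficiency_savings())     # power-only savings over the same period
```

Under these assumptions, consolidation saves on the order of $6M over three years while per-box efficiency saves under $100k; the exact numbers don't matter much because the ratio stays lopsided for any plausible inputs.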


Ah, now I understand the distinction you are making. And I agree; get fewer servers, not the same number of slightly more efficient ones, and then you can talk about saving money on power.


Saving on power in the datacenter is one of the worst ways to try to save money, in my experience.

You know what's a worse way to save money? Running out of power/cooling/floor loading and having nowhere to go...


Our colo was built back when each watt of computing took up a lot more space... meaning we had two racks of machines in a giant empty room and had already blown our power budget.

Power is often the constraining factor.



