
[citation needed]? I've not seen any useful IO benchmarks of POWER in a long time, if ever, and they've never been remotely comparable to more commodity systems, since POWER almost always gets used in the huge systems you mention...


Citation Given: http://www.extremetech.com/computing/181102-ibm-power8-openp...

Power8 has 230GB/s of bandwidth to RAM compared to a Xeon's 85GB/s. That's nearly triple (about 270%) a Xeon's I/O speed.


ummm... I see STREAM copy benchmarks for Xeon reporting at least double the number you cite.

Further, a benchmark like this is complicated. How is RAM divided between sockets? What's the bandwidth between a CPU and memory attached to another socket? Etc.
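
For anyone who hasn't looked at it, a STREAM-style "triad" is just a streaming loop over arrays much bigger than cache. The sketch below is my own minimal approximation in C with OpenMP (not the official STREAM source; the array size is an arbitrary choice), just to show what these GB/s figures actually measure. On a multi-socket box, where the threads run and where the pages land is exactly the complication above.

    /* Minimal STREAM-triad-style sketch (not the official benchmark).
     * Build: gcc -O3 -fopenmp triad.c -o triad
     * The GB/s figure depends heavily on thread placement and on where
     * the OS places the pages on a multi-socket (NUMA) machine.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N 20000000UL          /* ~160 MB per array: well past any cache */

    int main(void) {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double *c = malloc(N * sizeof(double));
        if (!a || !b || !c) return 1;

        /* First-touch initialization: with OpenMP this also spreads the
         * pages across the sockets the threads are running on. */
        #pragma omp parallel for
        for (size_t i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

        double scalar = 3.0;
        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (size_t i = 0; i < N; i++)
            a[i] = b[i] + scalar * c[i];   /* the "triad" kernel */
        double t1 = omp_get_wtime();

        /* STREAM counts 3 arrays x 8 bytes per element for triad. */
        double gbytes = 3.0 * 8.0 * (double)N / 1e9;
        printf("triad: %.1f GB/s\n", gbytes / (t1 - t0));

        free(a); free(b); free(c);
        return 0;
    }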


Unable to respond, citation not given.

But I'll respond anyway. It looks like you have GB and Gb per second confused. One is 8x the other.

admin-magazine claims 120Gb/s [1]

Intel claims 246Gb/s [2]

Independent claims on Intel's forums range from 120-175Gb/s [3]

Intel's cut sheet for their own latest-generation Xeon states it only supports 25GB/s (200Gb/s) of memory bandwidth [4]

[1] http://www.admin-magazine.com/HPC/Articles/Finding-Memory-Bo...

[2] http://www.intel.com/content/www/us/en/benchmarks/server/xeo...

[3] https://software.intel.com/en-us/forums/topic/383121

[4] http://ark.intel.com/products/75465/Intel-Xeon-Processor-E3-...


Reporting memory bandwidth in Gb is misleading.

The page you cite here: http://www.intel.com/content/www/us/en/benchmarks/server/xeo...

shows a triad bandwidth (STREAM) of 246,313.60 MB (megabytes) per second, which is about 240 GB (gigaBYTES) per second.

If I'm making a mistake I'm sure we can work it out.
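
To make the unit arithmetic explicit, here's a quick check of my own (nothing from the linked pages beyond the 246,313.60 MB/s and 25 GB/s figures already cited above): gigabits are 1/8 of gigabytes, and dividing by 1000 vs. 1024 is where the 246-vs-240 difference comes from.

    /* Quick unit check for the numbers being thrown around above. */
    #include <stdio.h>

    int main(void) {
        double triad_mb_s = 246313.60;   /* STREAM triad MB/s from [2] */

        printf("%.1f GB/s  (decimal, /1000)\n", triad_mb_s / 1000.0);        /* ~246.3 */
        printf("%.1f GiB/s (binary,  /1024)\n", triad_mb_s / 1024.0);        /* ~240.5 */
        printf("%.0f Gb/s  (gigabits, x8)\n",   triad_mb_s / 1000.0 * 8.0);  /* ~1970  */

        /* And the other direction: 25 GB/s (the E3 figure in [4]) is 200 Gb/s. */
        printf("25 GB/s = %.0f Gb/s\n", 25.0 * 8.0);
        return 0;
    }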


That 240GB/s is across 4 sockets, so it corresponds roughly to the 85GB/s per socket cited above; the Power7 it was compared against was about half the Intel figure, so about 50GB/s per socket, while Power8 is allegedly at 240GB/s per socket.


Thanks for the clarification. HN took ~30-40 minutes before it would show a reply option for this post, so I went ahead and acknowledged the performance difference in a reply to my original reply (ugh). Anyway, that's great to see such high memory bandwidth per socket, rather than summed over the whole machine.


If you click the "link" button you can reply sooner...


[4] is not a valid comparison: the E3 is not the highest-performing Intel chip with respect to memory. The E3s don't have powerful memory controllers like the E7s.


A reply to myself since I can't reply to the informative reply below (thanks HN). The RAM benchmarks for Xeon are per-machine (summed over sockets, presumably with no cross-socket traffic, since cross-socket memory is 1/2 speed) while the Power8 benchmarks are per socket.

That is indeed impressive. No wonder Google is considering these; memory bandwidth is critical.
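
To make the per-socket vs. cross-socket point concrete, here's a rough sketch of my own using libnuma (the node numbers and buffer size are arbitrary assumptions, and a single scalar reader won't saturate the memory controllers, so the local/remote ratio is the interesting part, not the absolute number). On a two-socket Xeon the remote pass is typically noticeably slower, which is why a machine-wide sum hides the per-socket picture.

    /* Rough local-vs-remote memory read timing using libnuma (sketch only).
     * Build: gcc -O3 numa_bw.c -o numa_bw -lnuma
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <numa.h>

    #define BUF_BYTES (1UL << 30)   /* 1 GiB, arbitrary but bigger than cache */

    static double read_gb_per_s(const char *buf, size_t len) {
        struct timespec t0, t1;
        volatile unsigned long sum = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < len; i += 64)   /* one touch per (assumed 64-byte) line */
            sum += buf[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        (void)sum;
        return (double)len / 1e9 / secs;
    }

    int main(void) {
        if (numa_available() < 0) { puts("no NUMA support"); return 1; }
        int last = numa_max_node();          /* the remote node, if there is one */

        numa_run_on_node(0);                 /* pin this thread to node 0 */

        char *local  = numa_alloc_onnode(BUF_BYTES, 0);
        char *remote = numa_alloc_onnode(BUF_BYTES, last);
        if (!local || !remote) return 1;
        for (size_t i = 0; i < BUF_BYTES; i += 4096) { local[i] = 1; remote[i] = 1; }

        printf("local  (node 0): %.1f GB/s\n", read_gb_per_s(local,  BUF_BYTES));
        printf("remote (node %d): %.1f GB/s\n", last, read_gb_per_s(remote, BUF_BYTES));

        numa_free(local, BUF_BYTES);
        numa_free(remote, BUF_BYTES);
        return 0;
    }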


[deleted]


Thanks, but I neither track my score nor care that you think I'm condescending.


The (now-deleted) comment you replied to was right. Your comment would have been much better without "ummm...".

All: please don't use snark in HN comments. It's uncivil and degrades the discourse.


It's your opinion that "umm..." is snark. It's what I would say in person if I doubted your claims.

Notice that I actually had a question and it was answered in the thread, which I acknowledged.


Happy to take your word for it.

I could well be hyperallergic after years of wincing at gratuitous unpleasantness in HN comments (edit: and, to be fair, having contributed my own share of it). In person, of course, tone of voice would reveal everything about "ummm...".

Either way, please understand that the intent behind comments like this is in no way personal. The idea is to send feedback signals into the community about the kind of discourse we want to cultivate.


You're microoptimizing. Spending time to complain about an "um" or an "uh", which implied nothing beyond what um or uh normally means, is pointless. You can't micromanage every conversation, nor should you. Focus on outright negative comments. Mine was surprise and disbelief (which the data later resolved). System working as expected, no SNAFUs!


If it was a snotty 'ummmm', I personally would object to it in real life, too. And I think others would as well. Also, you don't need 'ums' in writing.


Are you talking to yourself?



