
Trillions of cycles, what? How can that be?

Maybe I'm confusing something, but to reach a trillion cycles in, say, a year, would take overwriting all your memory 30 times a millisecond. That doesn't sound right?

Or is that trillions of any writes and erases?



DDR RAM is refreshed every 64ms (varies by DDR generation and specific chips). Branch Education has an excellent video on this titled "How does computer memory work?"[1]. It would still take an exceedingly long time to reach a trillion, but it's still pretty frequent.

[1](https://www.youtube.com/watch?v=7J7X7aZvMXQ)


1 trillion * 64ms is over 2000 years, I think it's unlikely that there's any DDR RAM that old.


From refreshing alone. You generally do other things with the RAM besides refreshing it.


You don't need to refresh non-DRAM memories though.

I agree that some regions risk being R/W more than others, so memory controllers should indeed perform some kind of wear levelling, but otherwise I find it hard to imagine trillions of overwrites across GBs (or TBs) of data. 1e6 cycles is definitely doable, and on the low side, even for flash devices. 1e9 is pretty good for general-purpose memories, and few applications require 1e12. Not even SRAM or DRAM have unlimited endurance, due to physical wear. It's hard to find a source on this, but I would probably hand wave it at around 1e15 cycles for DRAM? This would be 30 years of operation for one access every microsecond.


Of course, but it's still orders of magnitude short of trillions.


Like, say, serving a million MMO users?


In comparison, a million cycles would mean less than a day if refreshed every 64ms. Even a billion would mean only about 2 years' worth of cycles.

I think a trillion, or at the very least tens to hundreds of billions, is the right order of magnitude of cycles for RAM.


Persistent memory doesn't need to be refreshed though so that's irrelevant.


This type of memory wouldn't need refresh so you can cut out all of those writes.


If you have to design for pathological workloads, absolutely you can write to a location in main memory 30 times per millisecond.

Lots of non-pathological workloads might write to a memory location every millisecond, such as a game with a 4-pass renderer running at 240Hz.


Even at 1GHz, a trillion (10^12) writes is only 1000 seconds of work for a modern CPU. OK, latency is a thing, so multiply by 10 and it's still only a few hours. This is for DRAM where cells are individually addressed. For flash with wear levelling the numbers of course get bigger.


In practice a memory location being written to that heavily will never escape the cache unless you are doing something exceptionally weird.


Doesn't C have keywords like volatile to force reads from RAM?


Volatile requires the compiler to emit instructions that access the object. So if the object is in RAM, it will emit memory-access instructions. However, on modern CPUs those accesses will still hit the cache. You need to either map the memory as uncached, or flush the caches, to force an actual memory access.


no, that won't work. You'd have to clflush after every store. And even then, the cacheline might only ever get to the write pending queue (wpq) - and that you can't control.


I would seriously doubt there are many instances of writing to a single volatile memory location at 1GHz (excluding benchmarks).


Modern DRAM doesn't address individual cells. For both DDR4 and DDR5 the minimum burst length is 64 bytes, the width of a cache line of most CPUs.


I started to think about flipping a single bit in some process a million times per frame inside some loop, but that could only be done in cache…

Still, if you only changed the state of the memory once per frame, you would do it in RAM, not in cache. At 1000 FPS (we should consider the worst-case scenario even if rare) that's 3 hours of playing a game to reach 10,800,000 reads/writes.

Now the question is what happens if that bit gets damaged. Perhaps the memory just marks it as damaged and uses another bit for this memory address from then on. Perhaps that makes the UltraRAM slower over time as more bits (sectors) get damaged?


Lifetime of microelectronics is often quoted at around 30 years. A trillion cycles over 30 years works out to about one per millisecond. For a refresh cycle that does not seem extraordinary.


I was thinking of overwriting just a few words of memory over and over again, which DRAM can endure for decades.


The clock frequency is GHz, which is a trillion cycles per second. There is at least one cache layer between the CPU and the RAM, but we are in the same ballpark. And yet it's OK for the typical lifetime of our computers.


GHz is Billion, not Trillion


(side note)

Until recently a billion was a trillion, or vice versa, depending on whether you're from the UK or the US.

A GHz is a GHz no matter where you are. :-)



