Maybe I'm confusing something, but to reach a trillion cycles in, say, a year, would take overwriting all your memory 30 times a millisecond. That doesn't sound right?
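As a quick back-of-envelope check of that figure (assuming, as in the question, a trillion cycles per cell accumulated over one year):

    /* Back-of-envelope: to put 1e12 write cycles on every cell within one
       year, how often would the entire memory have to be overwritten? */
    #include <stdio.h>

    int main(void) {
        double target_cycles = 1e12;                      /* cycles per cell */
        double ms_per_year   = 365.0 * 24 * 3600 * 1000;  /* ~3.15e10 ms */
        printf("full-memory overwrites per ms: %.1f\n",
               target_cycles / ms_per_year);              /* prints ~31.7 */
        return 0;
    }

So the roughly 30 overwrites of all memory per millisecond does check out.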
DDR RAM is refreshed every 64 ms (it varies by DDR generation and specific chips). Branch Education has an excellent video on this called "How does computer memory work?"[1]. It would still take an exceedingly long time to reach a trillion refresh cycles, but refreshes are still pretty frequent.
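A similar sketch for the refresh rate specifically, counting each 64 ms refresh as one cycle (purely for the sake of the estimate):

    /* How long until a cell refreshed every 64 ms has seen 1e12 refreshes? */
    #include <stdio.h>

    int main(void) {
        double refresh_interval_s = 0.064;   /* 64 ms refresh window */
        double target_cycles      = 1e12;
        double years = target_cycles * refresh_interval_s / (365.0 * 24 * 3600);
        printf("%.0f years\n", years);       /* roughly 2000 years */
        return 0;
    }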
You don't need to refresh non-DRAM memories though.
I agree that some regions risk being read and written more than others, so memory controllers should indeed perform some kind of wear levelling, but otherwise I find it hard to imagine trillions of overwrites across GBs (or TBs) of data. 1e6 cycles is definitely doable, and on the low side, even for flash devices. 1e9 is pretty good for general-purpose memories, and few applications require 1e12. Not even SRAM or DRAM has unlimited endurance, due to physical wear. It's hard to find a source on this, but I would probably hand-wave it at around 1e15 cycles for DRAM? That would be 30 years of operation at one access every microsecond.
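To make those endurance tiers concrete, here is a rough sketch of how long a single cell would survive if it really were accessed once every microsecond (the 1e15 DRAM figure is the hand-waved one from above):

    /* Lifetime of one cell hammered at one access per microsecond (1e6/s),
       for the endurance figures mentioned above. */
    #include <stdio.h>

    int main(void) {
        double accesses_per_s = 1e6;
        double endurance[] = { 1e6, 1e9, 1e12, 1e15 };
        for (int i = 0; i < 4; i++) {
            double seconds = endurance[i] / accesses_per_s;
            printf("%.0e cycles: %.3g years\n",
                   endurance[i], seconds / (365.0 * 24 * 3600));
        }
        /* 1e6 -> about one second, 1e9 -> ~17 minutes,
           1e12 -> ~12 days, 1e15 -> ~31.7 years */
        return 0;
    }

In practice no single flash cell is hit anywhere near that often, which is why 1e6 cycles is workable there.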
Even at 1 GHz, a trillion (10^12) writes is only 1000 seconds of work for a modern CPU. OK, latency is a thing, so multiply by 100 and it takes about a day. This is for DRAM, where cells are individually addressed. For flash with wear levelling the numbers of course get bigger.
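A sketch of that calculation, taking roughly 100 ns per DRAM access as the assumed latency figure:

    /* Time to issue 1e12 writes to one address: first at one write per
       nanosecond (1 GHz), then assuming ~100 ns per DRAM access. */
    #include <stdio.h>

    int main(void) {
        double writes = 1e12;
        printf("at 1 write/ns:   %.0f s (~17 min)\n", writes * 1e-9);
        printf("at 100 ns/write: %.0f h (about a day)\n",
               writes * 100e-9 / 3600);
        return 0;
    }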
Volatile requires the compiler to emit instructions that access the object. So if the object is in RAM, it will emit memory access instructions. However, on modern CPUs, those accesses will still hit the cache. You need to either map the memory as uncached, or flush the caches to force a memory access.
No, that won't work. You'd have to clflush after every store. And even then, the cache line might only ever get to the write pending queue (WPQ), and that you can't control.
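A minimal sketch of that pattern on x86, using the SSE2 intrinsics; as noted, even after the flush the line may only make it to the memory controller's write pending queue:

    /* Sketch: push a store toward DRAM on x86. volatile only guarantees
       that a store instruction is emitted; the clflush is what evicts the
       cache line. Even then, the data may sit in the memory controller's
       write pending queue, which software cannot drain directly. */
    #include <emmintrin.h>   /* _mm_clflush, _mm_sfence */

    volatile unsigned long counter;

    void store_and_flush(unsigned long value) {
        counter = value;                       /* volatile store is emitted */
        _mm_clflush((const void *)&counter);   /* evict the line holding it */
        _mm_sfence();                          /* only strictly needed with
                                                  clflushopt, but harmless */
    }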
I started to think about flipping a single bit in some process a million times per frame inside some loop, but that could only be done in cache…
Still, if you only changed the state of the memory once per frame, you would do it in RAM, not in cache. At 1000 FPS (we should consider the worst-case scenario, even if it's rare) that's 3 hours of playing a game to reach 10,800,000 reads/writes.
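Checking that number, and what the same per-cell write rate would mean against the endurance figures discussed above (using the 1000 FPS worst case from the comment):

    /* One write to the same cell per frame at 1000 FPS, for 3 hours. */
    #include <stdio.h>

    int main(void) {
        double writes_per_s = 1000.0;            /* worst-case frame rate */
        double writes = writes_per_s * 3 * 3600; /* three hours of play */
        printf("writes after 3 h: %.0f\n", writes);  /* 10,800,000 */
        printf("years to reach 1e12 at this rate: %.1f\n",
               1e12 / writes_per_s / (365.0 * 24 * 3600));  /* ~31.7 */
        return 0;
    }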
Now the question is what happens if that bit gets damaged. Perhaps the memory just marks it as damaged and uses another bit for this memory address from now on. Perhaps that makes the UltraRAM slower over time as more bits (sectors) get damaged?
The clock frequency is in the GHz range, which is a billion cycles per second. There is at least one cache layer between the CPU and the RAM, but we are in the same ballpark. And yet it's OK for the typical lifetime of our computers.
Maybe I'm confusing something, but to reach a trillion cycles in, say, a year, would take overwriting all your memory 30 times a millisecond. That doesn't sound right?
Or is that trillions of any writes and erases?