Hacker News

Just read that Samsung changed the BOM on the 970 Evo Plus, switching to the controller of the 980 Pro. Test results are mixed: 4K QD1 is up ~50->82 MB/s, but sequential writes after exhausting the SLC cache (115 GB) are nearly halved, 1500->800 MB/s.


That's not what happens when you switch to a newer, faster controller. Those performance changes are what happens when you switch to newer NAND flash that's manufactured with twice the per-die capacity, so you have half as many dies for a given drive capacity. (Assuming the newer NAND isn't split into twice as many planes per die to compensate.)
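The die-count point can be sketched with some back-of-the-envelope arithmetic. The per-die throughput figures below are hypothetical, picked only to roughly match the numbers quoted above:

```python
# Back-of-the-envelope sketch: sustained TLC write speed scales with the
# number of dies programmed in parallel. Per-die figures are made up,
# chosen only to roughly reproduce the numbers quoted in this thread.
def sustained_mb_s(dies: int, per_die_mb_s: float) -> float:
    """Aggregate sustained write throughput across parallel dies."""
    return dies * per_die_mb_s

print(sustained_mb_s(16, 94))   # old config: 16 smaller dies -> ~1500 MB/s
print(sustained_mb_s(8, 100))   # new config: 8 larger dies   -> ~800 MB/s
```

Even if the newer, larger dies are somewhat faster per die, halving the die count still roughly halves aggregate sustained write throughput.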

Edit: Looking into this further, TechPowerUp, Computerbase and others have mistaken K90UGY8J5B for K9DUGY8J5B. It's an easy mistake to make, and I've done exactly this before. But it completely explains the performance difference. The D that changed to a 0 signifies a switch from 16 dies per package down to 8 dies per package. The digits signifying the capacity of the package have stayed the same. The "B" at the end signifying the generation has also stayed the same, but Samsung didn't introduce 512Gbit TLC dies until a generation after they introduced 256Gbit TLC dies, so on the smaller dies the "B" means 92L and on the larger dies the "B" means 128L.
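A tiny sketch of the distinction between the two part numbers. The mapping of the third character to dies-per-package ("0" -> 8, "D" -> 16) is an assumption inferred from this thread, not an official Samsung decoder:

```python
# Illustrative sketch only: distinguish the two easily-confused part numbers
# discussed above. The character-to-die-count mapping is an assumption based
# on this thread, not a complete Samsung part-number decoder.
DIES_PER_PACKAGE = {"0": 8, "D": 16}

def dies_in_package(part_number: str) -> int:
    """Return the assumed die count encoded in the third character."""
    return DIES_PER_PACKAGE[part_number[2]]

print(dies_in_package("K9DUGY8J5B"))  # 16 dies/package (92L, 256Gbit dies)
print(dies_in_package("K90UGY8J5B"))  # 8 dies/package (128L, 512Gbit dies)
```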


Thanks for the correction and the comprehensive information, Billy. AnandTech is my go-to for hardware information, you guys rock. Forgot to mention that they changed the NAND.


> That's not what happens when you switch to a newer, faster controller.

Wasn't the caching strategy also changed with the newer controller? Larger dynamic SLC cache, slower folding speed?


I haven't done the math, but my gut feeling is that the relatively modest increase in SLC cache size could not on its own lead to that big a drop in post-cache sustained write speed except possibly where QLC is concerned, and especially not when mitigated by a generational increase in per-die performance.

Samsung's drives still operate with SLC caches smaller than the theoretical limit. That means even after the cache is full, the drive should be able to accept some amount of writes that bypass the cache at near-native TLC performance; flushing the cache can be deferred until the drive is actually starting to run out of free blocks. Typical QLC drives with maximally-sized SLC caches, by contrast, run out of free blocks at the same point where the cache runs out.
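A back-of-the-envelope model of a single write burst against an SLC cache. All speeds and sizes here are hypothetical illustrations (loosely inspired by the thread's 115 GB cache figure), not measurements:

```python
# Sketch of average speed for one write burst against an SLC cache.
# Simplifying assumption: the first cache_gb land in the SLC cache at
# slc_mb_s, and everything beyond that bypasses the cache at native
# TLC speed. Folding/flushing overlap is ignored.
def avg_write_speed_mb_s(burst_gb: float, cache_gb: float,
                         slc_mb_s: float, tlc_mb_s: float) -> float:
    cached = min(burst_gb, cache_gb)   # portion absorbed by the SLC cache
    direct = burst_gb - cached         # portion written at TLC speed
    total_s = cached * 1024 / slc_mb_s + direct * 1024 / tlc_mb_s
    return burst_gb * 1024 / total_s

# Hypothetical numbers: 200 GB burst, 115 GB cache, 5000 MB/s SLC,
# 800 MB/s post-cache TLC.
print(round(avg_write_speed_mb_s(200, 115, 5000, 800)))
```

The model shows why the post-cache TLC speed dominates the average for large bursts: once the burst meaningfully exceeds the cache, the slow tail sets the overall time.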



