
The only problem with PCIe is that you (generally) cannot power the devices independently; that's why SATA is better.


One other consideration is that I've found (at least on the Pi) CPU load increases when performing heavy read/write activity. It's certainly a little higher than when I do the same thing through a SATA controller or a Broadcom hardware RAID adapter.

And that CPU load edges into worrisome territory when I put multiple NVMe drives in a RAID (which I'll be documenting soon!).
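
For reference, a minimal sketch of the kind of measurement I mean: hammer the disk with sequential writes and sample system-wide CPU usage across the run. This assumes the third-party psutil package is installed; the mount point and sizes are placeholders, not my actual test setup.

    # Sample system CPU usage across a burst of sequential writes.
    # Assumes psutil is installed; TARGET is a hypothetical mount point.
    import os
    import psutil

    TARGET = "/mnt/nvme0/testfile"   # placeholder path
    CHUNK = 4 * 1024 * 1024          # 4 MiB per write
    TOTAL = 2 * 1024**3              # 2 GiB total

    psutil.cpu_percent(interval=None)  # prime the counter
    buf = os.urandom(CHUNK)
    with open(TARGET, "wb") as f:
        written = 0
        while written < TOTAL:
            f.write(buf)
            written += CHUNK
        f.flush()
        os.fsync(f.fileno())

    print(f"CPU busy during write: {psutil.cpu_percent(interval=None):.1f}%")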


I'm glad to have you poking at the ARM space, because we're not getting the hardware we need for home hosting.

AWS/GCP really need that competition if we want some sort of distribution for the internet at large.

I'm still going with a Raspberry Pi 2 with 512GB/1TB SanDisk for my live database cluster (redundant, not sharded!) as things are now.

But really the problem is watts per high-quality GB of disk, meaning GB whose bits can be rewritten more than 1000 times:

SLC, MLC, TLC and now QLC (for the uninitiated: single-, multi- (really double-), triple- and quad-level cells, i.e. one to four bits stored per cell); it's downhill from here on!
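
The endurance slide is easy to put rough numbers on: terabytes-written is approximately capacity times P/E cycles divided by write amplification. A quick back-of-the-envelope sketch; the cycle counts are ballpark assumptions for illustration, not specs for any particular drive:

    # Rough TBW arithmetic for the SLC -> QLC slide. P/E cycle counts
    # are assumed ballpark figures, not vendor specs.
    PE_CYCLES = {"SLC": 100_000, "MLC": 10_000, "TLC": 3_000, "QLC": 1_000}

    capacity_gb = 1_000          # a 1 TB drive
    write_amplification = 2.0    # assumed; depends on workload and controller

    for cell, cycles in PE_CYCLES.items():
        tbw = capacity_gb * cycles / write_amplification / 1_000  # TB written
        print(f"{cell}: ~{tbw:,.0f} TBW")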


Is the CPU load attributable to some NVMe overhead or is it just that NVMe is faster than SATA and thus the CPU is being more efficiently utilized?


That's a good question, and something I haven't spent enough time benchmarking yet to give an intelligent answer to.
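
One way to tease the two apart would be to normalize CPU time by bytes moved instead of comparing raw utilization: if NVMe costs about the same CPU-seconds per gigabyte as SATA, the extra load is just extra throughput; if it costs noticeably more, there's real interface overhead. A sketch with placeholder numbers:

    # Normalize CPU cost by data moved so a faster interface isn't
    # blamed just for being busier. Sample figures are placeholders,
    # not real benchmark results.
    def cpu_seconds_per_gb(cpu_percent: float, duration_s: float,
                           gb_transferred: float) -> float:
        """CPU-seconds consumed per gigabyte transferred."""
        return (cpu_percent / 100.0) * duration_s / gb_transferred

    # Hypothetical runs: the same 10 GiB sequential read on each interface.
    nvme = cpu_seconds_per_gb(cpu_percent=35.0, duration_s=12.0, gb_transferred=10.0)
    sata = cpu_seconds_per_gb(cpu_percent=20.0, duration_s=40.0, gb_transferred=10.0)

    print(f"NVMe: {nvme:.2f} CPU-s/GB, SATA: {sata:.2f} CPU-s/GB")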


I wouldn't say that's a problem; I'd say it's a balancing act. For some purposes, yes, you want a separate power supply, but for others it's handy not to need one.


True, I meant for server duty... I need to be clearer when I comment!



