M.2 on a Raspberry Pi – The Tofu CM4 Carrier Board (jeffgeerling.com)
294 points by geerlingguy on March 12, 2021 | hide | past | favorite | 201 comments


"Outside of commercial use, the $100 dollar price tag is a little high for the 'impulse buy then stick it in a drawer' use case most of us using Pis are familiar with."

I swear, some day, maybe even soon, I will find a use for all the Raspberry Pis, various Pi POE adapters, Pi cameras, and sensors I own.


I suggest you don't develop an interest in esp32, or like me you may add an equivalent pile of those devices to your gaggle of interesting but unused hardware.

At least it's much cheaper than a Pi!


I've really gone down that rabbit hole recently, and I get it, but I don't regret it. There's a whole world of electronics that I've been missing. As wonderful as Linux is, at the end of the day, once you have Linux running on a device, it becomes a bit generic. The Raspberry Pi is useful and fun, but they are sort of interchangeable with any other Linux computer. These gumstick-sized microcontrollers bring back some of the joy I had with computing in years past.


Linux is indeed wonderful, but there's a lot to be said for dealing with things at a simple level when your goal is to e.g. measure voltages, turn LEDs and relays on and off etc.

I still find it incredible that a fairly fast, dual core computer with a color display, wifi, bluetooth, USB, battery charging and all the memory you need costs $8! [0]

[0] https://www.aliexpress.com/item/4001235066814.html


That's really an incredible price for a computer with a color display. I enjoy and use Linux, but I think it's so versatile and general purpose my inclination is to put it online, run Firefox, play videos or whatever, and then it's magically just another PC. It's kind of like the old saw about every program expanding until it can do email. I'm starting to see the small devices, like esp32 inherit some of the aura of the old home computers which ran BASIC from ROM.


Not sure if you were referencing gumstix there, but I remember fantasizing about getting one of the tiny SBCs over a decade ago but they were too expensive.

https://en.wikipedia.org/wiki/Gumstix#/media/File:Gumstix_oc...

A decade later we have ESP32 and RPi Zero. And it looks like gumstix have read the writing on the wall and now make RPi accessories.

https://www.gumstix.com/


I don't think I was consciously referencing Gumstix, but that must be where I first heard the term. I remember being interested in those at the time, but like you I thought they were too much.


From the ESP there's further downsizing to 555 timer chips

https://howtomechatronics.com/how-it-works/electronics/555-t...


microcontrollers are cheaper than 555s these days


The thing that I want in the future is things at the ESP32 price point that can actually do interesting work - e.g. a 1 GSPS ADC and an FPGA on a board for 20 bucks, let's say.

I suspect Moore's law may have died too early for it, but I can dream...


Don't forget the need to compare different ESP32s or ESP8266s.


I have redundant pi holes and NTP servers running. One of them is GPS/PPS trained with chrony for some truly accurate network time.
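(A minimal chrony sketch for that kind of setup, assuming gpsd feeds NMEA time via shared memory and the kernel exposes the pulse-per-second signal as /dev/pps0; the offset value is just a placeholder:)

    # /etc/chrony/chrony.conf (excerpt)
    # coarse NMEA time from gpsd, used only to number the PPS seconds
    refclock SHM 0 refid NMEA offset 0.2 noselect
    # the PPS edge itself, locked to the NMEA source
    refclock PPS /dev/pps0 lock NMEA refid PPS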

I have another pi with a SCART hat and 180 GB of ROMs driving my BVM.

I have a 0w modulating its clock with Daftendirekt at 7am every morning for a pirate FM radio alarm clock.

If I had more impulse purchasing power I’m sure I would like to experiment with octoprint on my CR10S-Pro.

There are lots of things you can do with Pis. My use cases are far from the most fun or clever. You just have to match them to your interests :)


To be fair, I have 7 Pis running on my network. Home Assistant, 2x Pi-Holes (for redundancy), 1 magic mirror, my Weasleyclock, 1x w/ a hi-fi berry acting as a Spotify streaming device, and a garage door sensor. So roughly half the pis are in service. Maybe only 5% of all the sensors and projects are.


I have a rpi 2b with a hifi berry DAC hat too. It works great. It is attached to a timecapsule with 20cm ethernet and usb cables, for data, network and power. Nothing important on the sd card.


How do you set up the pi holes for redundancy? Just manually configured as primary and secondary DNS, or some nifty configuration syncing?


Both. Have your router (optionally all clients) point to each DNS server. You can go further by keeping the pi holes synchronized:

https://github.com/vmstan/gravity-sync

Since some hostile clients (such as TVs) have hard-coded DNS servers, it is necessary to forward all port 53 and 853 traffic to a Pi-hole. This is easy enough with NAT redirection rules in the router, even with two Pis.

https://www.myhelpfulguides.com/2018/07/30/redirect-hard-cod...
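A sketch of what those rules can look like (iptables syntax; this assumes a Linux-based router, and the LAN interface name and Pi-hole address are placeholders):

    # Send any plain DNS (53) or DNS-over-TLS (853) traffic to the Pi-hole,
    # except traffic from the Pi-hole itself so its upstream lookups aren't looped back
    iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.2 -p udp --dport 53 -j DNAT --to-destination 192.168.1.2
    iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.2 -p tcp --dport 53 -j DNAT --to-destination 192.168.1.2
    iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.2 -p tcp --dport 853 -j DNAT --to-destination 192.168.1.2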


Not your parent poster, but I manually configured them as primary and secondary DNS servers (dockerized PiHole), and then just used pi-hole's "export configuration / restore configuration" tool to keep the DNSs mostly in sync.

I heard there's a tool called Gravity Sync that will sync them, but I have not tried it yet.

Techno Tim on the topic https://www.youtube.com/watch?v=IFVYe3riDRA

https://github.com/vmstan/gravity-sync


I just advertise both Pi-Holes via DHCP and when I configure manual DNS settings. The primary DNS gets the vast majority of the DNS traffic, about 150 requests/minute and the secondary gets about 10 requests/minute (mostly from a single device).

Both piholes are on different UPS power, different switches, in different locations. In theory the lights will go out on the main UPS and switch first (it draws more power), but this configuration did save me once.


My guess would be for guest/home use or maybe for different devices/profiles.


The typical reason for running redundant Pi-holes is high availability. If you really want all DNS traffic to go through your filter, then your internet stops working whenever your only DNS server goes down.


If you have an extra Pi Zero W, Klipper has lower system requirements than Octoprint and runs better if you don't need any of the plugins that Octoprint offers, or a USB webcam hooked up.

Though it will involve flashing Klipper over Marlin on the CR10. That just means more stuff to tinker with!


Daftendirekt - the Daft Punk track? By "modulating its clock" do you mean playing the song as an alarm?


Sorry, I meant "Homework" as in the entire album.

By "modulating the clock" I mean modulating the SoC base clock frequency (nominally 100 MHz, but increased to be a valid and unused FM channel, if decreased then the pi locks up) with the audio to make an FM transmitter. Hook up a wire to the clock output GPIO pin and you have an FM antenna. Lay it next to an old FM clock radio and you now have a clock radio that plays whatever you want. Make a cron job and forget it.

https://github.com/pimylifeup/fm_transmitter
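The cron side is a single line (paths, frequency and filename below are placeholders, assuming the usual "-f <frequency> <wav file>" invocation from that repo, run from root's crontab since it pokes /dev/mem):

    # root's crontab: fire up the transmitter at 07:00 every day
    0 7 * * * /home/pi/fm_transmitter/fm_transmitter -f 100.6 /home/pi/music/homework.wav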


That is pretty cool. I'm actually working on a Raspberry Pi based clock and now I want to incorporate this somehow. Thanks for sharing.


I’m certain it plays hell on software timekeeping. If you don’t have an RTC then switching to chrony and increasing the poll rate should help prevent potential clock error.


The tipping point for me was setting up the infrastructure to network boot my Pis. Now it's trivial for me to plug in a Pi anywhere on the LAN, and even switch what OS it is booting into just by changing a symlink on my fileserver. It's now very easy to experiment with different things, and not worry about the cards dying.
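For anyone wanting to try it, the server side can be surprisingly small - roughly a dnsmasq proxy-DHCP config plus a TFTP/NFS root per Pi (a sketch; the subnet, paths, and serial-number symlink layout here are examples, and the "Raspberry Pi Boot" service name is what the Pi bootloader looks for):

    # /etc/dnsmasq.conf (excerpt) -- proxy DHCP, so the existing router keeps handing out addresses
    dhcp-range=192.168.1.0,proxy
    enable-tftp
    tftp-root=/srv/tftpboot
    pxe-service=0,"Raspberry Pi Boot"
    # each Pi fetches its boot files from a directory named after its serial number,
    # so /srv/tftpboot/<serial> can be a symlink to whichever OS tree you want it to boot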


This has been on my TODO list for a while. But first I need to set up another Pi to be the PXE server... :)


It doesn't need to be another Pi, of course.


Recommended howto?


Googling Raspberry Pi Network Boot will return the official documentation - it's a built-in feature of the Pi boot ROM.


I think the answer for those projects is probably a diskless, netbooted Pi, with a server that handles the PXE boot + disk.

Folks here have done it, then all your pis are online easily, especially with poe.

I think it would be great for camera stuff - it's like VMs turned inside out where your server runs lots of remote physical machines.


Hah! As I look down near my feet there is a clear plastic bin full of pi's, usb cables, micro hdmi, a picam, a go pro, all kinds of crap I swear I will use and never do.

I do have a pihole running on a pi3 that has been serving dns for my home network for about two years now without so much as a hiccup.


I usually use them as local dev env. I deploy everything to my local cluster before I try to run it on a real infra. It is pretty good. I can also virtualize some of the workload using Firecracker.

https://dev.l1x.be/posts/2020/11/22/getting-started-with-fir...


This seems like a great use case. I have a refurbished 1U server I use, but the fans on that thing are sooo loud.


There's a price point here that makes sense, and it isn't $100 + the Pi + the M.2 SSD. Anything more than this and you get into the low-end laptop price range with full x64 compatibility and a Windows license.


Except by that point you're dealing with a full laptop with proprietary drivers and, well, Windows itself.

There's a reason the Pi took off as the SBC of choice: how open it is and how much it can still do without adding things to it.

If you're only looking at price tag when it comes to adding an SSD and a hat to compute module, the compute module concept was never for you in the first place.


I have three Raspberry Pi computers. The third, a Pi Zero came in the mail today. So I'm sold on the strong points of the system.

My point wasn't about the laptop having Windows, just that there was a "Windows tax" on the inexpensive laptop and it still offered a better price / performance value. I bought a laptop a couple of years ago for $250. It has a display, two kinds of USB ports, HDMI, Ethernet, a keyboard, trackpad and a battery. I'm sure I could run Linux on it if I needed to. It's not fast, but it's faster than any tricked out Raspberry Pi.

If we envision the Raspberry Pi as a learning tool, it's almost too good. It's really too powerful and too complex. As a general purpose computer it still falls short of what I consider a good use of my free time. Don't get me wrong, it's fantastic that there is such a wide market for add-ons for the Pi, for an aftermarket of small players with neat ideas. I have to pry myself away from the sites where they sell this stuff. But this project is just a little too expensive to make it worthwhile for me. I'd like to see a Pi+M2 kit for $100, which likely there will be in time.


For lots of low-income people, Linux on ARM is a downgrade and the kind of thing that only "desperately poor" people would even consider. Having a "proper" laptop with a "proper" OS will decrease the perceived difference between yourself and other, non-poor people.


I’m experimenting with self-hosting platforms, like sandstorm.io. (It is not currently available on an rpi, though there is a lot of interest within the community)

Right now though, I'm going to set up Pi-hole so my daughter can stay on task during remote learning, and if we ever get a smart TV, Pi-hole the adtech DNS.


This comment, and the general response to it, made me feel slightly less ashamed. Thanks for that! =)


I recently had a need for one (installation of Octoprint), but then realized the one I had in a drawer was too old to work. So I ended up having to purchase a new one anyways.


Just put LibreElec on the Raspberry Pi 4 and you instantly have a cheap media player capable of running HEVC video.


I use my pis to run an air quality sensor. It runs continuously.


Develop a remote temperature monitor (freeze alarm).


Apparently I was incorrect in stating that NVMe boot is not yet supported—this week the Pi Foundation added a new docs page[1] explaining the (in beta) process for enabling NVMe boot on the Compute Module 4, which currently requires updating the firmware _and_ bootloader.

Not something to rely on in production yet, and right now it doesn't work with NVMe drives behind a PCIe switch, so this limits some of the utility, but it is good to know it's being worked on!

[1] https://www.raspberrypi.org/documentation/hardware/raspberry...
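For the curious, the beta process boils down to getting a recent beta-channel bootloader onto the CM4 and putting NVMe in the boot order; very roughly (the exact steps live in that docs page and may still change while this is in beta):

    # with FIRMWARE_RELEASE_STATUS="beta" set in /etc/default/rpi-eeprom-update
    sudo rpi-eeprom-update -a        # install the latest bootloader from that channel
    sudo rpi-eeprom-config --edit    # then set something like:
    #   BOOT_ORDER=0xf416            # read right-to-left: try NVMe (6), then SD (1), then USB (4), then loop (f)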


If RPis had M.2 built into the board, they would be the premier ARM platform right now. It's so fast that many people could use one as their main desktop.


I read an article recently arguing that if Microsoft really wanted to make Windows on ARM64 take off, they should sell a "developer machine" like a Raspberry Pi 4 with Windows ARM64 preinstalled for $199. If such a machine had an M.2 drive as an option (microSD or eMMC by default), I suspect it would be a huge hit.


They don't sell them directly, but Microsoft maintains a list of boards you should buy (probably as close to "Windows Certified" as you'll get for small batch boards).

https://docs.microsoft.com/en-us/windows/iot-core/tutorials/...


Does IoT Core run a full desktop?


Well, Windows 10 IoT will run on a pi 3, but I don't recommend it. It works well enough, but it seems to have been abandoned by Microsoft, so it's doubtful there will be any updates.


Seems like a great way to attract people to linux. “Hey I know windows so I can get a raspberry pi and try that.” Then “wow this doesn’t run very well. I guess I should try raspbian...”


Wait, a 4 times markup for having Windows installed? That doesn't compute.


Really hoping that the RPi 5 has this. Combined with presumably improved overall performance, it would be a game changer. I don't think it makes sense to fork a 'pro model'. Probably cheaper overall to just add it to the regular model.


That and preferably default UEFI; you can already use a third-party version on whatever your boot device is, but sticking in a tiny bit of on-board storage and flashing UEFI firmware at the factory would let us finally not need Pi-specific images.


> flashing UEFI firmware at the factory

The factory can already flash the pi firmware. What does UEFI solve here?


It means that I can download a generic aarch64 OS image instead of needing the special pi image that knows how to work with its nonstandard boot sequence.


Presumably it provides a standardized boot process for other non-RPi specific ARM OSs


They are the premier ARM platform anyway, but the real problem is that a 5W CPU is too slow to work with, unless you write everything minimalistically from scratch!


It really depends what you're doing. The 5W CPU in the Pi is more than adequate for a lot of tasks. Heck, I wrote papers using Libreoffice on the original model Pi.

The Pi4 is powerful enough to run web browsers comfortably, which was a problem on the original Pi. Office programs run no sweat. You can even do light development work.


Yes I have switched to using a Pi4 8GB as a desktop machine running WindowMaker and it's speedy as can be. Am using a USB3 SSD but booting from /boot on the SD card, which then loads the main system from the SSD. Can happily do C++ work on it or RDP to a Windows machine.
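(For anyone replicating that split setup, it's mostly a matter of pointing the kernel at the SSD from the SD card's /boot; device names here are examples, and using a PARTUUID instead of /dev/sda2 is less fragile:)

    # /boot/cmdline.txt on the SD card -- one line; only root= (and rootfstype=) change from stock
    console=serial0,115200 console=tty1 root=/dev/sda2 rootfstype=ext4 fsck.repair=yes rootwait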


Millions of smartphones running on ARM would disagree.


I meant work as in compiling work: when you're used to 65W x86, the ARMs stand no chance. My 3D MMO engine compiles in 2 seconds on my Windows PC but takes 10 seconds on the Raspberry/Jetson, enough that I decided to build a 45W Atom computer to try the in-between... with a 25W CPU and 20W GPU!

Even if that would mean I have to "port" the engine to linux/X86. Until now I just had windows/X86 and linux/ARM!


> when you're used to 65W x86, the ARMs stand no chance.

My m1 mac says hi! :)


Yes, it says hi, and then when you want to upgrade the disk/RAM you can't. And when it breaks you can't fix it and Apple sells you another one for a bazillion dollars. And you have to use Metal instead of OpenGL. That's when I say goodbye, until they reverse engineer the GPU and port OpenGL to it! :D

Now I didn't even mention the App Store tie-in, how you need to sign binaries on Apple hardware, and how they take a 30% cut, but you get the picture.


You just moved the goalposts to another planet. Your initial comment was about wattage and ARM performance.


Not to mention that plenty of x86 hardware has also ditched socketed components.


> And when it breaks you can't fix it and Apple sells you another one for a bazillion dollars.

If you're lucky enough to live in a country with consumer protection laws (NZ is one), Apple will fix it for you for 3-4 years. So you pay the fortune you mention and then a bit more to cover this feature.


None of that is even remotely related to your initial point.


OpenGL is a legacy API. Metal, Vulkan and Direct3D 12 are the APIs that can actually saturate modern hardware.


Really those APIs bring nothing useful to the table.

OpenGL (ES) 3 with VAO was the last feature that really moved any interesting goalpost (to reuse that phrasing since we have a tree structured comment field to move those around) for games.

Most new protocols are just job security.

Edit: Also calling something that just released in hardware (OpenGL ES 3) legacy is kinda weird.

Maybe wait a decade or two?


Performance is very useful. OpenGL’s design precludes it.


Now I can reply to your comment (what is going on hn, why is there a timer on the reply button the deeper you get?)

Performance with the low-level APIs is not something that competes with OpenGL's higher-level API; they are complementary: eventually there will be OpenGL drivers implemented on top of Vulkan!

That said, I have yet to meet anyone who can give me a practical example that affects gameplay where Vulkan (or Metal/DX12) makes a difference!

Also, last but not least, fragmentation is bad; I'd rather support OpenGL (ES) 3 on all platforms than port to those 3 new APIs.


Timing is what is going on. If you want to reply directly, click the timestamp.

edit: the timer is there probably for flamewar prevention. Maybe for general discussion quality too.


Sure, OpenGL can be implemented on top of the native efficient APIs.

The biggest problem with OpenGL is the state. It severely affects CPU usage and limits parallelism options. The second problem is poor handling of buffers.

It hardly matters, anyway. Vulkan, D3D 12 and Metal are very similar and almost always used indirectly through a game engine that can take advantage of them.


I'm writing my own 3D MMO engine from scratch, so it does matter to me: http://talk.binarytask.com/task?id=5959519327505901449

Unity and Unreal are so bad. I mean compile for 30 minutes? To use them is to have technical debt way beyond what is healthy.

So why can't you make OpenGL stateless? It seems to me multithreading is really bad on the new APIs too; you'll have to thrash memory with a bunch of buffers that then get submitted by one thread, causing motion-to-photon latency!

RDR2 had like 10 frames of lag! Completely unplayable.

I prefer direct/forward rendering on one thread and making the graphics simpler instead. Graphical fidelity is not important to make a fun game, just look at Nintendo.


If your graphics are atypically simple, it hardly matters.

For the vast majority of games that compete on graphics, you need to use modern APIs to take advantage of modern GPUs.


Sure, but I also out-scale all other engines both server and client:

http://talk.binarytask.com/task?id=579711216639635462

My server is the first MMO system to handle 10,000 concurrent action players on one machine: http://fuse.rupy.se/about.html

You have to pick your battles.


10 seconds? What disk were you using?

How will you ever get those 8 seconds back??


It's almost never an IO bottleneck; the amount of code is too small for it to be IO.

8 seconds x 1 million compiles = 92 days!

When you code action games you want to iterate quickly to make the controls and physics as good as possible.

I made a .dll/.so hot-deploy system to be able to work effectively, so I can hot-deploy the game in <1 second on the Raspberry but then on the PC I do it in 100 microseconds!


I think that kinda proves the parent comments point, since all the apps were re-written for mobile.


No, not everything requires a complete rewrite. The browsers for example benefit from code reuse. The UI is another matter. Wrapping and rewriting is a given on the system side, though.


are you saying phones are running general purpose OS and apps?


yes. Browser, youtube, video calls, docs, games, there isn't much they can't do.


They can’t be used for local development, so hard to consider them a general computing platform. They are more akin to consoles than PC.


Thonny runs well on my Pinebook Pro, so in that case comparably wimpy ARM based Linux machines can be used for things like Python development even with an IDE.

It just depends on how heavy-duty what you're trying to do is. Scripting in general runs fine on my ARM SBCs. The real trouble is if you've got something that you need to recompile every time you make a change and you don't have a cache set up.


If I wanted to program I could get a shell on a server, for example.

Now touch screen keyboards are a different problem.


The tooling isn't particularly nice, but on Android it is clearly possible.


When I see truth being downvoted I try to upvote, hn would be a better place if people learned to upvote, the high karma downers are really toxic.

You need a keyboard and mouse to be productive, case closed!

Edit: see the toxic high karma downers even got to this one!


Meta commenting on comment voting is against the HN guidelines and tends to get downvoted. Please review them again.


Jesus christ this place is toxic by design.

I bet you the high karma people click on your profile to see how much karma you have before they push you down.

Eventually the HN database will leak, it's inevitable, I'll make sure to grab a copy then.



Abstractly speaking, yes. I notice the limitations on mobile of course, but you cannot deny the potential.

I'm barking up the wrong tree, aren't I? Sorry for the misunderstanding.


It’s got plenty of speed to drive my autonomous GPS guided solar powered open source farm robot!

I guess I did write the code from scratch, but it’s just Python, nothing fancy.

https://community.twistedfields.com/t/introducing-acorn-a-pr...


You should be able to drive that with a Zero no?

Where is the battery? Lead-acid or Lithium?

What does this even do, I see no arms, cutters or anything?


It could be done with a zero with C++ I would imagine. IIRC the zero is 700mhz single core. I’m using a Pi4 which is quad core 2.XGhz. Our current Python code uses 20-50% (I forget exactly) of each core. I’ve not made any attempt to optimize, and most of that is probably object serialization for IPC which could be improved (for eg with shared memory).

No battery. It runs directly off of solar power with a supercapacitor bank to provide peak currents. Power draw while driving walking speed on level ground is 20-30 watts with the latest drive system (not shown in the video). Solar panels are 8 100 watt panels.

The project is a work in progress. We just open sourced everything and wanted to start building a community of interested followers who would like to see our progress and share ideas. I do mention this in the blog post but I realized afterwards that understandably a lot of people just watch the video and don’t get the full picture. But we’re adding a vision system now and tools will come in the future. We’re solidifying the base vehicle right now which is what you saw in the video.

Feel free to ask me here or join our community at the first link I shared if you have any more questions!


No the 4 is 1.5GHz, it would melt at 2+GHz!

If you code Python then you really have misunderstood what tools are available.

My recommendations: C or Java.

You need a battery if you want to be able to predict work, we had a whole month of cloudy weather last year!

I missed the video, I'll check it now.


What's this obsession with using the Raspberry Pi for work? It makes no sense, the Pi doesn't even have proper cooling. Get an Intel NUC if you want a miniature PC, at least it has all the ports, disks, etc.


That’s a vastly more powerful machine, 8-10x the size when counting the brick, and way less fun to hack about with. I love my Nucs but the Pi is better.


The NUC is better when you want to use a PC for real work. The Pi is better when you want to drive random sensors and displays, as well as for general tinkering. I'd hardly call one better than the other.


The NUC is literally 5x the price of a Raspberry Pi.


Hm... A Celeron NUC without memory and disk starts at around $130, IIRC.


Which is about 5x the price of raspberry pi. I get your point though, I actually have a pentium nuc.


But you also need storage / Sdcard, mouse, keyboard, monitor, etc.

In embedded/IoT none of that matters, but if we are talking about work, using a Pi makes no sense when compared to even an old laptop from eBay.


For the pi you need these heatsinks: http://move.rupy.se/file/pi_4.jpg

The NUC is an ok alternative with the Streacom NCX cases (unavailable for modern models).

I'll use the Raspberry when the kWh is $1; until then I'm trying to move to Atom.


Aluminum heat-sink cases work well too.


Yes if they look like the Streacom NC1/NC2... but all the cases for the Raspberry have bad design.


OK, I'll bite. How do, "all the cases for the Raspberry have bad design"?

I have one of these Flirc cases: https://www.adafruit.com/product/4553. It looks fairly nice, acts as a heat sink, so my Pi 4 chugs along fine without having to throttle even when using all cores.


I'm talking subjective aesthetic design.

Look here to understand: https://streacom.com/wp-content/uploads/nc1-wy-025-000-b.jpg

THAT is good design, comparatively everything released for the Raspberry looks like turds.

The Flirc in particular has 2 major flaws: it has a plastic lid on the top?! And it has the logo on it; things that have to put their name on their product always have bad design. The design should be the logo!


Agree. The main issue with using the RPi as a desktop is the slow disk I/O. M.2 on the board is what the RPi 5 needs.


Raspi really needs to come out with a 'pro' model. I guess they kind of did with the compute module but raspi's are used a lot professionally in industry, probably more than they are used as a tool to get kids and developing nations into computer science.


Raspi doesn't "need" to do this at all.

The main goal of their foundation is to support education, not to satisfy a "pro" market.

Just because people use them as tools doesn't mean they change the charter of their non-profit foundation

https://en.wikipedia.org/wiki/Raspberry_Pi_Foundation

I'd say the other way around: the "pro" people should go find another board to use, of which there are many.


Just noting that according to the Pi Foundation/Pi Trading, they do sell 44% of their boards to industrial customers[1], so education is definitely a primary goal, but they do have a much broader audience, and I'm sure that influences some of the decisions about what hardware and price points to target.

[1] https://www.raspberrypi.org/for-industry/


That figure is amazing.

I interviewed in person at a leading industrial automation company over five years ago. They were looking for someone to work on ARM stuff and I was (naively) proud of the work I'd done with the original Raspberry Pi in school. The hiring manager had never even heard of the Pi...


Things have changed a lot since then. One of the well-known names in industrial automation, Opto-22, now has adapters to connect their hardware to raspberry pi. And they're not the only one.

http://developer.opto22.com/pi/


They don’t need to, but the desire to see it is understandable. Raspi is the de facto ARM linux box. It’s cheap, it’s everywhere, it’s performant. If you have a niche interest then the platform that it’s been developed and tested on is raspi. That’s far more valuable than a $10 savings on a $30 computer.


I don't understand this divide. Why shouldn't people have access to "pro" level tools for education? I had plenty of instances in my life when I tried to do something with consumer tools, only to be disappointed and give up, but later re-discovered the topic with better tools that led me to some great results. The cost of low-end NVMe drives is getting closer to that of more expensive SD cards now (which are not meant to be used as a hard drive). I see no reason why the RPi shouldn't get an update with an NVMe slot, even dropping the SD card slot altogether.


We used terrible and old tools at uni, so the value of the degree is questionable


The "pro" models exist - they're just not Pi's.

Industrial IoT does use Pi's a fair bit, but this is IMO a misuse (I'm guilty of it too) and in the long term can be a disservice. The Pi is a great platform for a go-to-market product, but they have many shortcomings that for most use-cases in industry are showstoppers (most notably, Pi's will often fail without warning, or their "disk" will become corrupt). More expensive industrial products are out there and typically are more robust, and designed in a way that they can be more seamlessly integrated into a product - once the upfront costs are paid (both in development and dollars).

What really needs to happen is other manufacturers making development on their hardware more straightforward. What the Pi Foundation has done is show that companies and developers want to just be able to run a widely available and minimal Linux distribution and not have to muck about with proprietary and poorly documented SDKs behind closed doors.


> Pi's will often fail without warning, or their "disk" will become corrupt

You get what you pay for. But this is beating a tired horse that I'm skeptical was ever true to begin with. Are the people with SD card corruption just using crap cards to begin with? I've used a pi4 and rock64 and neither had any disk corruption. Plus you can boot on USB today. And fail without warning? Is that really a thing? No sensor saying there is a heat problem?

You can definitely reach for an industrial product with similar specs. But just keep in mind that niche products may have even less market testing than a Pi. There is some truly crap hardware and software out there that people suffer with simply because it's the only player in the niche.


Indeed - you do get what you pay for. I've personally had one 2nd gen rpi fail without warning and have also had several instances of SD card corruption.

The SD card is probably crap like you say, but it can be formatted and passes read/write verification testing when connected to a PC, which is a bit weird.


rock64 has eMMC ;-)


Pis are used in industry when possible because they're so easy: the unofficial documentation and community are so large that practically every use case has already been documented.

It's much easier for the Raspi Foundation to release a more industrialized version than it is for another manufacturer to create this support and community. They could still fund their non-profit mission with the profit.

Industrial microcontroller and embedded companies do need to also make their software and docs better. There seems to be some modernization but it's slow.


The "pro" models exist - they're just not Pi's.

Ssssh. That's blasphemy here.

Any other ARM platform mentioned here gets downvoted and shouted down with "But it's not $40!". Let them play with their toys and leave them alone.


I'm curious about this as someone who keeps eyeing the Pi's for a solution to a few problems customers have brought up. It looks like there is a pretty big cost jump to get something equivalent, but I may just not know where to look.


There are places outside HN where ARM-based product developers are happy to talk about this stuff and share their knowledge.


it would still depend on the firmware. people forget that Raspberry Pi is controlled entirely by the proprietary ThreadX RTOS which acts as a hypervisor and allows Linux to run as a virtual "guest" on the pi.

https://en.wikipedia.org/wiki/ThreadX

and for the uninitiated, ThreadX RTOS is not the same as Intel's ME. ThreadX is a critical dependency to run any OS on PI. without Intel ME, you can still run an OS.

this virtualization model is still largely why Pi performance is complete garbage compared to pine64, banana pi, beagle, and others.


ThreadX does not do what you think it does. On the Pi it acts as the first stage bootloader, it is not used as a hypervisor.

What you've written above is not even remotely true. You've been corrected on this before, you ought to stop spreading kooky misinformation.


I thought ThreadX only ran on the VideoCore, basically to bootstrap Linux on the ARM processor. The only performance that might affect is elapsed time to boot.

That doesn't sound like virtualization to me. Do you have more info?


I think you're mistaken about several pieces.

First off ThreadX on the video core does not run as a hypervisor.

It also does not impede perf on the ARM core.

Additionally, yes, Intel ME is required to run anything on Intel cores. Without it the main cores don't start.


ME isn't required at all...

https://www.csoonline.com/article/3220476/researchers-say-no...

To explain this again, it technically is a guest. The GPU cores run a real time operating system called ThreadX. This operating system is closed source and rules the system without the open source Linux Kernel being aware of it.

When the Raspberry Pi starts booting the CPU is completely disconnected (technically in reset state) and the GPU is the one that starts the system. You can have a look at the `/boot` folder and you will see some of the binary blobs used by the GPU to both start the CPU and run its own ThreadX OS (bootcode.bin and start.elf). You can learn more details about the boot process here.

After the GPU has the CPU load the Linux Kernel, it doesn’t just stay there waiting to act as a graphics-processing-unit. The GPU is still in charge.

IME is a different beast entirely.


"ME isn't required at all..."

The document you linked refers to a more technical one that explains they don't disable the ME until after the main OS is booted, because...

"The disappointing fact is that on modern computers, it is impossible to completely disable ME. This is primarily due to the fact that this technology is responsible for initialization, power management, and launch of the main processor."

http://blog.ptsecurity.com/2017/08/disabling-intel-me.html

So, it is serving the same role as the Rpi VideoCore.


ME is still required; that strap option that the article describes just disables stuff like its network stack and remote admin facilities after it has started the Intel cores. It is still required to be on for power management, for instance.

Also, interestingly you can swap out the terms on your paragraph and it's still true

> When an Intel CPU starts booting the CPU is completely disconnected (technically in reset state) and the ME is the one that starts the system. You can have a look at the boot flash partitions and you will see some of the binary blobs used by the ME to both start the CPU and run its own ThreadX OS.

Although, on newer MEs they switched from ThreadX to Minix. The ME is very very very similar.

"Technically" it's not a guest relationship, the ARM cores start in EL3 (above hypervisor mode in secure mode).

And none of this shows any perf cost to having a management core (except maybe some minimal bandwidth pressure?)


> ThreadX ... rules the system

> The GPU is still in charge

Can you explain in specific, concrete terms what this means?

For example: I've read that the undervoltage monitoring happens on the ThreadX process, and that it throttles the ARM cores when low voltage is detected. The vcgencmd has more info on this, including the following bit field returned by vcgencmd get_throttled:

    Bit Hex value   Meaning
    0          1    Under-voltage detected
    1          2    Arm frequency capped
    2          4    Currently throttled
    3          8    Soft temperature limit active
    16     10000    Under-voltage has occurred
    17     20000    Arm frequency capping has occurred
    18     40000    Throttling has occurred
    19     80000    Soft temperature limit has occurred
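(As an illustration, a hypothetical reading would decode against that table like so:)

    $ vcgencmd get_throttled
    throttled=0x50005
    # 0x50005 = bits 0, 2, 16 and 18 set: under-voltage detected and currently throttled,
    # and both under-voltage and throttling have occurred since boot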
What else does it do?


It's not running as a hypervisor. A hypervisor is specifically something that hosts VMs. The OS running on the ARM core is not running in a VM.


Seems similar to the CSME on Intel platforms

https://igor-blue.github.io/2021/02/04/secure-boot.html#earl...


Someone needs to kickstart a CM4 carrier board with multi-port Ethernet. The CM4 can support 5 Gbps of throughput, and having 4-port Ethernet would make a really cheap and nice home router [1].

[1]https://youtu.be/KL0d68j3aJM


Check out https://pipci.jeffgeerling.com/boards_cm — there are a couple multi-port boards already in the works.

Also, if you go to the main page on that site, I have tested multiple multi-port PCIe cards, like a dual 2.5G card and a 4-port Intel i340, and all of them can be made to work without too much effort.

There are a few different things you can do with it, but in total, you'll be limited to about 3.2 (only through PCIe) to 4.2 Gbps (if also using the Pi's internal 1G ethernet) of internal bandwidth.


> you'll be limited to about 3.2 (only through PCIe) to 4.2 Gbps (if also using the Pi's internal 1G ethernet) of internal bandwidth.

Is that the total full duplex bandwidth available or just the "one way" bandwidth?


Full duplex as far as I can measure (in various scenarios). I'm still wrangling a couple PCIe switches that were giving me trouble previously, so I will definitely keep trying to get more than that!


It would be interesting to see a barebones ethernet switch board with some L2/L3 features (VLAN, LACP, etc.) on the switch chip in the data plane, interfaced to the CM4 as the brain and management plane, as something with more programmability than commercial lower-end switches (same price range as Mikrotik/Ubiquiti gear).

I wonder how programmable/open/capable low-cost ethernet switching chips are these days.


Following the Twitter link on Jeff's website, and based on the latest status updates, it seems that the 4-port Ethernet CM4 carrier, or Raspberry Router, from Blinkinlabs is taking shape now [1].

It uses a KSZ9897 switch and a LAN7431 PCIe-to-Ethernet adapter from Microchip, and the former does support VLAN and LACP [2]. Since the switch chip can support up to 7 Ethernet ports, it should be possible to make a version of the Raspberry Router that can be used in a company setting (Mikrotik's customers) in addition to home networks.

[1]https://mobile.twitter.com/cibomahto/status/1364520950763573...

[2]https://www.microchip.com/wwwproducts/en/KSZ9897


That's some extremely slow performance for all the money spent to add NVMe to the Pi.

I get that there's a fun factor involved with raspberry pi stuff, but I wish the performance to price was better. I'm better off buying off-lease USFF pulls or something instead for anything I do, they cost less and generally use about the same power relative to performance.


The interesting thing for me has been the curve of performance/dollar over the past decade since the first Pi came out.

The gap was huge back then, but the Pi 4 is now getting competitive for some use cases (especially if also considering power efficiency for lighter usage patterns), and I can see a time where the value gets even better depending where things end up with a Pi 5 or 6 in the next few years...


It's definitely interesting to see their progress. They are fairly rapidly catching up to the minimum performance needed to run most typical users apps.


I hope there's a future Pi with more PCIe lanes. 4 would prevent the PCIe bus from being a bottleneck. Or it could have USB 3 and NVMe, maybe even 2.5 Gbps Ethernet.


What do you think about these stackable computers? https://m5stack.com/

They look nice to me but I don't know much about this area.


I have one, and I've only been toying around with it, but I've been fairly impressed. The hardware is very nice - polished, while still keeping everything accessible. The software is pretty decent also. The documentation leaves something to be desired.

That said, I don't see it as completely overlapping with the Pi. The ESP32 in the M5Stack is a much less powerful system in terms of performance, memory, storage, etc. It's not running Linux, so your code is much closer to the metal. That can be both good and bad depending on what your goal is: you have much more control over the hardware, but it limits your choice of programming languages and what existing software you can depend on.


I actually just bought one of these! I'm working on a project where I need a battery, IMU, speaker, and quite a bit of processing power (for real time audio), and their m5 core2 aws IoT product met all those requirements for only 40ish dollars.

I only just got it, but it took me only a couple straightforward hours to set up the dev environment and compile and flash their sample hello world project. (I haven't done anything with esp32 before).

I don't think it's analogous to the rpi, though (which I haven't used) --- you can't put Linux on the m5stack products because they're meant for much simpler tasks.


Not a fan of hijacking scrolling on web pages in general, but they really nailed that stacking animation!


A few weeks ago I was looking for something like this. If there was anything similar, it was always sold out or never launched, and I don't expect better availability with this item. And if they ask for $100, which is way too high and would get me better boards, please provide two M.2 slots.


If you're going to split a 1x PCIe lane into two M2 slots, you might as well save the cash and get a Pi4 and use two USB3 thumbdrives instead. The performance would be about equal by that point.


Ofc I'm aware of this and this is what I did, I just want a nice form factor.


The PI 4 only has a single PCIe lane. Two M.2 Slots would require a PCIe switch which would make things more complex and expensive, as well as limiting the overall performance if you were trying to use both drives simultaneously.


> Or you can plug in a USB 3.0 SSD and get decent speed, but you end up with a cabling mess and lose bandwidth and latency to a USB-to-SATA

Is the bandwidth loss significant between USB 3.0 and SATA (for an average SSD)?


The form factor of the RasPi is a large selling point for me, and to have more than 1tb, it’s nice to see an option that doesn’t involve a cable or dongle plugged out of the side of the case next to the NIC port. I’m excited to try using this with RaspiBlitz, which requires at least 400 gigs for the blockchain alone.


If you like duct taping things to each other, no :)

Personally I'm appalled when people say it's easy to add peripherals to a Pi via USB. Maybe some deep fear of octopuses?

Also note that the resulting system in TFA will lose USB 3 capability, because there's only one PCIe lane and you can use it for USB 3 or NVMe, but not both.


For an average SSD - no. It's pretty much multiple times faster than other options.


That's a nice product shot, Jeff.


He actually just released a video about that photo! https://www.youtube.com/watch?v=pinYVZxBF2Q


One extra step that's missing from the blog post's photos is to take the focus stacked result and replace the background with pure white.

You can notice the inconsistent white balance throughout the otherwise nice product shots.


Heh, I don't always notice since I use my site in dark mode and don't see the slight difference from the pure white background...

I'm just lazy sometimes.


Oh, no worries. Love your work!


This is exciting. Drive performance and the discussed clunkiness of external drive connections have been something of a chore. This makes me excited to pick up some projects again


> Apparently this week NVMe SSD Boot was just added as a beta feature in the Pi firmware, though it requires a bit of a process to use it

Anyone know if you can boot from NVMe on the Pi if you first bootstrap it through a UEFI shim [1]?

[1] https://github.com/pftf/RPi4


The case looks great, like an ASUS Chromebox...which I always thought looked nice.


Heh, I guess I did well for the first ever 3D model I made :D

Here it is on Thingiverse: https://www.thingiverse.com/thing:4786257


But for an OS you don't need it as the RPI CM3/CM4 can already be equipped with eMMC. It's only useful if you really need more than 32GB.

https://www.cytron.io/p-raspberry-pi-cm4-with-wireless-pick-...

https://www.cytron.io/p-raspberry-pi-cm4-without-wireless-pi...


Or if you want something more performant and reliable than eMMC.


Exactly. There are so many use cases for the RPi that very often people wonder why you'd ever want it to support X, when the people that want feature X have a completely different use case in mind.

Embedded people don’t need faster IO, so why use it? If you want to use it as a desktop, you might not need eMMC, but an M.2 slot would be critical.

It’s really an interesting ecosystem of uses.


Do you know if anyone makes a back-plane/carrier board (not sure on what the actual name is) for hosting/holding multiple CM4? I found a few for the CM3 but they seem a bit cost prohibitive.


There is at least one other multiple-CM4 board 'in development', though I can't share any detail, other than keep an eye on this page for when it might appear: https://pipci.jeffgeerling.com/boards_cm


Not affiliated with the project in any way, but I do know this one is launching soon! https://turingpi.com/

Edit: Spoke too soon, only works with the CM3/3+. Leaving it here in case anyone is interested.


They have a v2, thanks to the user who commented after you! https://turingpi.com/v2/



Does the eMMC get corrupted as easily as the SD cards do?


SD cards don't get corrupted easily, if you buy the right ones. (Unfortunately most people don't, and end up blaming the Pi for being unreliable.)


Well, it's not easy. Brand name doesn't get you anything in this department, you have to track down "industrial" SD cards.


On top of that, it's 50/50 in my experience as to whether the fault was the microSD card, or a weak / insufficient power supply (that's the main reason, I think, the Pi Foundation now makes their own).


> if you buy the right ones

No offense but that's true of everything and incredibly unhelpful. As others have pointed out, you can't rely on name brand or even product line.

This isn't an issue isolated to Pis either, GoPros, among other devices, are notorious for eating microSD cards as well.


Not nearly as much. eMMC is designed for these use cases.


Modern Raspberry Pis support PXE booting.


The only problem with PCIe is that you cannot (generally) power the devices independently; that's why SATA is better.


One other consideration is I've found (at least on the Pi) CPU load is increased when performing heavy read/write activity. Certainly a little more than when I do the same thing through a SATA controller or testing through a Broadcom hardware RAID adapter.

And that CPU load gets a little into the worrisome territory when I put multiple NVMe in a RAID (which I'll be documenting soon!).


I'm glad to have you poking the ARM space because we're not getting the hardware we need for home hosting.

AWS/GCP really need that competition if we want some sort of distribution for the internet at large.

I'm still going with a Raspberry 2 with 512GB/1TB SanDisk for my live database cluster (redundant, not sharded!) as things are now.

But really the problem is watts per high-quality GB of disk, as in bits rewriteable >1000 times:

SLC, MLC, TLC and now QLC (for the uninitiated: single-, multi- (really double-), triple- and quad-level cells); it's downhill from here on!


Is the CPU load attributable to some NVMe overhead or is it just that NVMe is faster than SATA and thus the CPU is being more efficiently utilized?


That's a good question, and something I haven't spent enough time benchmarking yet to give an intelligent answer to.


I wouldn't say that's a problem. I'd say it's a balancing act. For some purposes yes you want a separate power supply. But for others it's handy to not need a separate power supply.


True, I meant for server duty... I need to be clearer when I comment!


Looks like PCIe is not much better here than a USB 3.0 attached SSD. I just tried with a 250GB Samsung EVO + JMS578, and that gives:

    dd if=/dev/sda of=/dev/null iflag=direct bs=16M count=100
    1677721600 bytes (1.7 GB, 1.6 GiB) copied, 4.17517 s, 402 MB/s

And that's also going over a USB 3.0 hub.


You seemed to miss random access I/O speed (4K reads/writes), which usually matters quite a bit more than sequential access.

Random reads almost doubled even with a cheap SSD, and random writes nearly tripled.
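(For reference, a 4K random-read number like that can be reproduced with fio against the raw device; the device path, queue depth, and runtime below are just examples:)

    # 4K random reads straight from the block device, 30 seconds, queue depth 32
    sudo fio --name=rand4k --filename=/dev/sda --rw=randread --bs=4k \
        --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based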


There's no test methodology, so it doesn't matter. Is it measuring raw block device I/O or filesystem I/O, which filesystem, is it submitting I/O requests sequentially or in batches so that the OS can queue multiple requests at once, etc. etc. There's nothing to compare against, so I didn't even try running the test.

USB 3.0 has no trouble with high random I/O either, anyway.

Article is comparing different SSDs, not just different interfaces, so random 4k block read speeds differ. But it doesn't show what's possible with a good USB 3.0 SATA controller and a good SSD.

I just wanted to point out that speed difference doesn't have to have much to do with the interfaces themselves.


Another question: is that MB/sec or MiB/sec? Note that the 400 MiB/sec benchmark is 420 MB/sec.

Not a _huge_ difference, but it is a difference ;)

The other major difference is anything non-sequential, especially things like 4K reads. This is where the PCIe->USB->SATA (or PCIe->USB->NVMe) adapter overhead really slows things down.

Try out my benchmark script and see how it fares: https://github.com/geerlingguy/raspberry-pi-dramble/blob/mas...

(Thank the marketers for making everything confusing with MiB and MB...)


Dunno, looks too complicated, and our results will not be comparable, because my block devices use LUKS encryption, and this doesn't measure raw block device access.

I wrote this quick benchmark tool, maybe try it. It does test raw block device 4k random reads via random batches of 128 reads submitted via io_uring:

  https://megous.com/dl/tmp/t.c
  (reduce ROUNDS to 5 or 10 if you test something non-SSD)

  gcc -o t t.c -luring
  ./t /dev/sda
On my workstation:

- My HDDs show 4k read iops in the range of: 126-139

- My samsung 980 pro nvme hovers around 160-180 kIOPS

- Samsung 840 PRO SATA : 95 kIOPS

- Samsung 860 EVO SATA : 90 kIOPS

On my Orange Pi 3:

- USB 3.0 attached intel SSD : 4k IOPS = 15393

- USB 3.0 attached samsung 850 EVO : 4k IOPS = 43154

- Sandisk ultra A1 uSD card 09/2018 model: 4k IOPS = 2228

- Sandisk ultra A1 uSD card 10/2019 model: 4k IOPS = 2512 (they are improving over time :))

So that's about 60MiB/s and 168MiB/s random 4k reads over USB 3.0. (controller supports UAS)

Is it hitting the SSD or interface limit? Dunno. I don't have the exact same samsung ssd model connected via SATA anywhere. But it's much more than the values in the article, so the values in the article are certainly not the USB 3.0 interface limits for 4k random reads.

dd is outputting SI units in this case, so 402 MB/s works out to around 383 MiB/s


I'm curious about CPU usage for the two adapter types. My experience (quite a while back...) was that USB required a bunch more involvement from the CPU than the simpler block-oriented drivers.

(Hubs don't affect things much; on the order of tens of nanoseconds of additional latency when things line up properly).


Overclocking the Pi's CPU _does_ increase the speed of some operations, for sure. For example, in my network tests, I can get 1.9 Gbps on a 2.5G adapter with MTU 1500 at 1.5 GHz (default Pi 4 clock), or 2.2 Gbps at 2.0 GHz.

Using MTU 9000 lets me get over 2.3G on that adapter.


For USB 3.0 at 43k IOPS load - https://megous.com/dl/tmp/caf84c98b0a67db8.png

It's probably not yet limited by the CPU, but close. This is much slower (than RPi4) Orange Pi 3.


> Looks like pci-e is not much better here than USB 3.0 attached SSD.

We shouldn't expect it to be. A good USB 3.0 hub shouldn't add much in the way of latency.

I probably wouldn't bother, but M.2 is still a very cool thing that some people seem to want...


I heard you liked boards.


This is the first thing I thought of when Apple announced the M2 APU... this is gonna be confusing.


Given that "Raspberry Pi" or "RasPi" and its many other permutations are never mentioned alongside "Apple", I think it's going to be safely distinctive with or without "M2" by its side.


Or, simpler still, M.2 is not the same as M2. (Now I am having an ApplePi nightmare...)



