My impression was that autoupdate was not the default because the devices it runs on only have so many resources, and there's a non-trivial chance of bricking the device (given how many devices are supported)? It's not like other vendors are doing any better in this space (and I've seen enough things in the "IoT/embedded" space brick themselves with updates to be a bit wary of autoupdates).
Auto-update is also a bad idea unless you can make it really secure, which is hard to do on devices so constrained they don't even have a clock to keep track of what day it is to judge whether a certificate is still valid.
Minimizing the chance of bricking the device with an automatic update requires at a minimum having two copies of the OS, so that the running copy isn't trying to modify itself and can remain as a fallback in case of a broken update. That's not too challenging these days now that most routers are using NAND flash, but for a long time it was common to use very small NOR flash modules with the absolute minimum capacity.
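The dual-copy ("A/B") fallback logic can be sketched roughly like this; the slot names, the failure threshold, and the `pick_slot` helper are all illustrative, not any real bootloader's interface:

```python
# Toy model of a dual-image ("A/B") update scheme: the bootloader tries the
# newly written slot a few times, and if it never comes up healthy, it
# falls back to the known-good copy. Threshold and slot names are made up.

MAX_TRIES = 3

def pick_slot(active: str, failed_boots: int) -> str:
    """Return the slot to boot: fall back once the active slot has
    failed MAX_TRIES consecutive boots without marking itself healthy."""
    if failed_boots >= MAX_TRIES:
        return "B" if active == "A" else "A"
    return active

# A freshly flashed slot B that never boots cleanly:
assert pick_slot("B", 0) == "B"   # first try: boot the new image
assert pick_slot("B", 3) == "A"   # too many failures: revert to old image
```

Real bootloaders implement the same idea with a boot-attempt counter that the OS resets once it reaches a healthy state; the point is that the running copy is never the one being overwritten.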
Updates don’t currently have a way to ensure that user installed packages have their configurations updated appropriately, so user installed packages may break on update. Additionally, as a sibling comment pointed out, official images don’t include user packages, so you’d either need a scalable way to build custom images or the updater would need to be smart enough to reinstall packages after update.
It would still be nice to have an official automatic update feature that is opt-in for stock systems.
You also need to rebuild the firmware with the installed packages. Otherwise you end up without your packages installed. That requires a server to build the firmware for your device. Doing this automatically for everyone is resource intensive.
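For context, OpenWrt's Image Builder can already bake extra packages into a firmware image locally; the profile name and package list below are placeholders, not a recommendation:

```shell
# Build an image with user packages included, using OpenWrt's Image Builder.
# PROFILE and PACKAGES values here are placeholders; a leading "-" removes
# a default package from the image.
make image PROFILE="example-router-profile" \
           PACKAGES="luci wireguard-tools -ppp"
```

Doing that per-device on a central server for every update is exactly the resource cost being described.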
I’ve tried it in the past. This was a few years ago, so it’s possible it’s changed since then. But the reason I’m not choosing it for myself today is that it relies on either Sign in with Google (fine) or magic links to verify the user. I really don’t want to manage email delivery for this project, which is admittedly a stubborn personal choice. It just adds a lot of complexity that I don’t care to spend time on for hobby projects.
Then why rewrite coreutils in Rust? TOCTOU isn't exactly some new concept. Neither are https://owasp.org/Top10/2025/ (most of which a good web framework will prevent or mitigate), and switching to Rust won't (as far as I know) bring you a safer web framework like Django or Rails.
1. Rust is a much more pleasant language to work with.
2. You can improve the tools, adding new features, fixing UX paper cuts etc.
You're probably thinking "you can improve the GNU versions!" and in theory sure. But in practice these sorts of tools are controlled by naysayers who want everything to stay as it was in the 80s. The sorts of people that only accept patches via git send-email to a mailing list.
Hahaha I just looked up GNU Coreutils and not only do they blame poor UX on the user ("Often these perceived bugs are simply due to wrong program usage.") but they even maintain a list of rejected feature requests:
Another maintainer and I follow issues and pull requests on a GitHub mirror. But email works fine for us and many other projects.
Regarding poor UX, it is difficult to dispute that claim without a specific example. Note that a lot of the features we support are standardized by POSIX. Even if we dislike a behavior, it is better to comply with the standards so the programs don't behave differently than users expect. The sentence you quote isn't meant to put down users. These programs are often much more complex than meets the eye, and there are lots of common gotchas that people have run into (and will continue to do so) [1].
Of course we would love for these programs to be useful for everyone. However, feature requests are often incompatible with existing behavior, incompatible with other feature requests, or have existing functionality elsewhere. For those reasons we cannot accept every feature request.
Have you used busybox? The BSDs? I'm not sure adding more features to coreutils is a major help, and given that rust-coreutils/uutils:
1) has had more CVEs between the two latest Ubuntu releases than coreutils has had over the last 30+ years
2) has managed to break security updates
3) is neither fully compatible with POSIX nor with coreutils
I'm not sure why I'd ever use it. Sadly, projects like uutils have made me suspicious of Rust projects, so unless I know a project is well maintained (for which there are numerous examples: ripgrep being the obvious one, but newsboat, the various tools from Proxmox, Servo/Firefox, and the pgrx ecosystem are ones I use regularly), being written in Rust is now a negative marker against it.
I would suggest the current system doesn't actually let you choose efficiently (you have to align multiple pathways: updates, "manual" installs, adding new packages), so effectively there's only the illusion of choice. Switching to a queue instead not only leaves time for QA/security scans, but also makes it much easier to choose to speed up than to slow down.
I think the context is the author's comment elsewhere that "Flatpak on top of immutable distros is the future of Linux". Given that, I can see how the author produced the text.
As an Aussie, I'd say instant is never good; it's the minimum acceptable coffee. If your coffee is worse than instant (yes, you, LAX: how do you make coffee taste like literal dirt water?), then you should learn to make coffee properly!