Someone once coined a related term, "disassembler rage". It's the idea that every mistake looks amateur when examined closely enough. Comes from people sitting in a disassembler and raging at the high-level programmers who had the gall to, e.g., use conditionals instead of a switch statement inside a function call a hundred frames deep.
We're looking solely at the few things they got wrong, and not the thousands of correct lines around them.
Thing is, these tools are so critical that even one error may cause systems to be compromised; rewriting them should never be taken lightly.
(Actually, ideally there would be formal verification tools that can accurately test for all of the issues found in this review/audit, like the very timing-specific path changes, but that's a codebase on its own)
Is formal verification able to find most of these issues? I'm no expert on formal analysis, but I suspect most systems can't handle many of these errors. It seems more likely that the model will assume the file doesn't change between two syscalls, which is where the majority of the issues come from. Modeling that possibility at least makes the formal model much harder to build.
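To make the race concrete, here's a minimal sketch of the check-then-use (TOCTOU) pattern I mean, in Rust with a made-up function name, not code taken from uutils: two syscalls on the same path, with a window between them where another process can swap the file out. A formal model that treats the path as stable between the two calls would happily call this safe.

```rust
use std::fs::{self, File};
use std::io::{self, Read};

// Hypothetical example, not taken from uutils: a classic check-then-use
// (TOCTOU) pattern. A model that assumes the path is stable between the
// two syscalls would call this safe; in reality another process can swap
// the path (e.g. for a symlink) in the window between them.
fn read_if_regular(path: &str) -> io::Result<Option<String>> {
    let meta = fs::symlink_metadata(path)?; // syscall 1: inspect the path
    if !meta.is_file() {
        return Ok(None); // reject symlinks, directories, devices, ...
    }
    // <-- race window: the file at `path` can be replaced right here
    let mut contents = String::new();
    File::open(path)?.read_to_string(&mut contents)?; // syscall 2: use the path
    Ok(Some(contents))
}

fn main() -> io::Result<()> {
    println!("{:?}", read_if_regular("/etc/hostname")?);
    Ok(())
}
```

The usual mitigation is to open first and then inspect the already-open handle, so both operations refer to the same file; a verifier has to model that distinction to catch this class of bug.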
When I read the article I came away with the impression that shipping bugs this severe in a rewrite of utils used by hundreds of millions of people daily (hourly?) isn’t ok. I don’t think brushing the bad parts off with “most of the code was really good!” is a fair way to look at this.
Cloudflare crashed a chunk of the internet with a Rust app a month or so ago, by deploying a bad config file IIRC.
Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.
I think that legitimate real-world issues in Rust code should be talked about more often. Right now the language enjoys a reputation that is essentially misleading marketing. It isn't possible to create a programming language that doesn't allow bugs to happen (even with formal verification you can still prove correctness based on a wrong set of assumptions). This weird, kind of religious belief that Rust leads to magically, completely bug-free programs needs to be countered and brought in touch with reality IMO.
Nobody believes Rust programs are bug free, though. Rust never promised that. It doesn't even promise memory safety, it only promises memory safety if you restrict yourself to safe APIs, which simply isn't always possible.
Or... the NSA wants you to think that the NSA wants you to think that the NSA believes that Rust is a memory-safe language, so that everyone who distrusts the NSA keeps using C.
Is it possible you’ve misunderstood what Rust promises?
> It isn't possible to create a programing language that doesn't allow bugs to happen
Yes, that’s true. No one doubts this. Except you seem to think that Rust promises no bugs at all? I don’t know where you got this impression from, but it is incorrect.
Rust promises that certain kinds of bugs like use-after-free are much, much less likely. It eliminates some kinds of bugs, not all bugs altogether. It’s possible that you’ve read the claim on kinds of bugs, and misinterpreted it as all bugs.
On the other hand, there are too many less-experienced Rust fans who do claim that "Rust" promises this, that any project that does not use Rust is doomed, and that all of the existing decades-old software projects should be rewritten in Rust to decrease the chance that they have bugs.
What is described in TFA is not surprising at all, because it is exactly what has been predicted about this and other similar projects.
Anyone who desires to rewrite any old project in Rust should certainly do it. It will be at least a good learning experience, and whenever an ancient project is rewritten from scratch, current knowledge should enable the creation of something better than the original.
Nonetheless, the rewriters should never claim that what they have just produced currently has fewer bugs than the original, because neither they nor Rust can guarantee this; only long experience with using the rewritten application can.
Such rewritten software packages should remain for years as optional alternatives to the originals. Any aggressive push to substitute the originals immediately is just stupid (and yes, I have seen people trying to promote this).
Moreover, someone who proposes the substitution of something as basic as coreutils, must first present to the world the results of a huge set of correctness tests and performance benchmarks comparing the old package with the new package, before the substitution idea is even put forward.
The only language I've ever seen users make that claim for is Haskell. Rust users have never made the claim, but I've seen it a lot from advocates who appear to find "hello world" a complex, hard-to-write program.
Where are these rust fans? Are they in the room with us right now?
You’ve constructed a strawman with no basis in reality.
You know what actual Rust fans sound like? They sound like Matthias Endler, who wrote the article we’re discussing. Matthias hosts a popular podcast, Rust in Production, where he talks with people about sharp edges and difficulties they experienced using Rust.
A true Rust advocate like him writes articles titled “Bugs Rust Won’t Catch”.
> Such rewritten software packages should remain for years as optional alternatives to the originals.
> must first present to the world the results of a huge set of correctness tests and performance benchmarks
Yeah, you can see those in https://github.com/uutils/coreutils. This project has also worked with GNU coreutils maintainers to add more tests over time. Check out the graph where the total number of tests increases over time.
> before the substitution idea is even put forward
I partly agree. But notice that these CVEs come from a thorough security audit paid for by Canonical. Canonical is paying for it because they have a plan to substitute in the immediate future.
Without a plan to substitute it’s hard to advocate for funding. Without funding it’s hard to find and fix these issues. With these issues unfixed it’s hard to plan to substitute.
Those Rust fans exist on almost all Internet forums that I have seen, including on HN.
I do not care about what they say, so I have not kept a list of links to what they have posted. But even on HN alone, I have certainly seen well over a hundred such postings, more likely several hundred, even on threads that had no close relationship with Rust, so there was no reason to discuss Rust at all.
Since Sun's shameless promotion of Java with false claims during the last years of the previous century, no other programming language has been affected by such a hype campaign.
I think that this is sad. Rust has introduced a few valid innovations and it is a decent programming language. Despite this, whenever someone starts mentioning Rust, my first reaction is to distrust whatever is said, until proven otherwise, because I have seen far too many ridiculous claims about Rust.
Could you find one such person on this thread? Someone making ridiculous claims about what Rust offers.
I’ll tell you what I think you’ve seen - there are hundreds of threads where you’ve seen people claim they’ve seen this everywhere. That gives you the impression that it is universal.
I understand the (narrow) hard guarantees that Rust gives. But there are people in the wider community who think that the guarantees are much, much broader. This is a pretty widespread misconception that should be rectified.
I have never seen a comment claiming that Rust leads to magically completely bug free programs.
Could you please link one? Because I doubt it exists, or if it does, it is probably on some obscure website or downvoted to oblivion.
On the other hand, I see comments in every Rust thread that are basically restatements of yours attacking a strawman.
The reality: Rust does not prevent all bugs. In fact, it doesn't even prevent any bugs. What it actually does is make a certain particularly common and dangerous class of bugs much more difficult to write.
The "elimination of bugs" is not synonymous with "the elimination of all bugs". The way you're presenting it, any single bug in a rewrite would be grounds to consider the the entire endeavor a failure, which is a ridiculous standard.
There are plenty of strong arguments to be made against rewriting something in Rust, but this is a pretty weak one.
Because the bugs were caused by programmer error, not anything inherent to Rust. It was more notable due to Cloudflare being a critical dependency for half the internet, but that particular issue could've happened in any language.
This kind of melodramatic reaction to rust code is fatiguing, honestly. Rust does not bill itself as some programming panacea or as a bug free language, and neither do any of the people I know using it. That's a strawman that just won't go away.
Rust applies constraints regarding memory use and that nearly eliminates a class of bugs, provided safe usage. And that's compelling to enough people that it warrants migration from other languages that don't focus on memory safety. Bugs introduced during a rewrite aren't notable. It happens, they get fixed, life moves on.
> caused by programmer error, not anything inherent to Rust
Your argument does not work as praise for Rust, because the bugs in any program are caused by programmer errors, except in the very rare cases where there are bugs in the compiler toolchain, which are caused by the errors of other programmers.
The bugs in a C or C++ program are also caused by programmer errors; they are not inherent to C/C++. It is rather trivial to write C/C++ carefully, so that out-of-bounds accesses, numeric overflows, use-after-free, etc. are impossible.
The problem is that many programmers are careless, especially when they might be pressed by tight time schedules, so they make some of these mistakes. For the mass production of software, it is good to use more strict programming languages, including Rust, where the compiler catches as many errors as possible, instead of relying on better programmers.
The cloudflare bug was the equivalent of an uncaught exception caused by a malformed config file. There's no recovery from a malformed config file - the software couldn't possibly have done its job. What's salient is that they were using an alternative to exceptions, because people were told exceptions were error-prone, and using this thing instead would make it easier to write bug-free code. But don't do the equivalent of not catching them!
And then, it turned out to not really be any better than exceptions.
Most Rust evangelism is like this. "In Rust you do X and this makes your code have fewer bugs!" Well, no it doesn't. Manually propagating errors still makes the program crash, requires more typing, and doesn't emit a stack trace.
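To make that concrete, here's a minimal sketch (made-up names, not Cloudflare's actual code) of the pattern being criticized: the error is dutifully propagated as a Result and then unwrapped at the top, so a malformed config still takes down the process, just like an uncaught exception would.

```rust
use std::num::ParseIntError;

// Minimal sketch, not Cloudflare's actual code: the error is returned as a
// Result instead of being thrown as an exception...
fn parse_limit(config: &str) -> Result<u32, ParseIntError> {
    config.trim().parse::<u32>()
}

fn main() {
    // ...and then unwrapped at the top level, so a malformed config file
    // still aborts the whole process. By default there is no backtrace
    // unless RUST_BACKTRACE=1 is set.
    let limit = parse_limit("not-a-number").unwrap();
    println!("limit = {limit}");
}
```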
That was why I brought it up. I wasn't trying to be snarky or haughty. Thank you for filling in the gaps, I should have done that instead of the 1-liner.
I didn't downvote, but I feel the last two points show a lack of nuance. It's saying "Rust doesn't prevent 100% of the bugs, like all other programming languages", while failing to acknowledge that if a programming language prevents entire classes of bugs, it's a very significant improvement.
Nobody disputes that Rust is one of the programming languages that prevent several classes of frequent bugs, which is a valuable feature when compared with C/C++, even if that is a very low bar.
What many do not accept among the claims of the Rust fans is that rewriting a mature and very big codebase from another language into Rust is likely to reduce the number of bugs of that codebase.
For some buggier codebases, a rewrite in Rust or any other safer language may indeed help, but I agree with the opinion expressed by many other people that in most cases a rewrite from scratch is much more likely to have bugs, regardless in what programming language it is written.
If someone has the time to do it, a rewrite is useful in most cases, but it should be expected that it will take a lot of time after the completion of the project until it will have as few bugs as mature projects.
As other people have mentioned, the goal of uutils was not "let's reduce bugs in coreutils by rewriting it in Rust", it was "it's 2013 and here's a pre-1.0 language that looks neat and claims to be a credible replacement for C, let's test that hypothesis by porting coreutils, giving us an excuse to learn and play with a new language in the process". It seems worth emphasizing that its creation was neither ideologically motivated nor part of some nefarious GPL-erasure scheme, it was just some people hacking on a codebase for fun.
Whether or not it was wise for Canonical to attempt to then take that codebase and uplift it into Ubuntu is a different story altogether, but one that has no bearing on the motivations of the people behind the original port itself.
You can see an alternative approach with the authors of sudo-rs. Rather than porting all of userspace to Rust for fun, they identified a single component of a particularly security-critical nature (sudo), and then further justified their rewrite by removing legacy features, thereby producing an overall simpler tool with less surface area to attack in the first place. It was not "we're going to rewrite sudo in Rust so it has fewer bugs", it was "we're going to rewrite sudo with the goal of having fewer bugs, and as one subcomponent of that, we're going to use Rust". And of course sudo-rs has had fresh bugs of its own, as any rewrite will. But the mere existence of bugs does not invalidate their hypothesis, which is that a conscientious rewrite of a tool can result in fewer bugs overall.
But are the current uutils developers the same as the 2013 developers? At least based on GitHub's graphs, that's not the case (it looks fairly bimodal to me), and so it wouldn't be unreasonable to treat the 2013-era project differently to the 2020-era project. So judging the 2020-era project for its current and ongoing failures does not seem unreasonable.
Similarly, sudo-rs dropping "legacy" features leaves a bad taste in my mouth. There are multiple privilege-escalation tools already (doas being the first that comes to mind), and building something better without claiming the name "sudo" (rather providing a compat mode, a la podman for docker) would seem to me a better long-term path than causing more breakage (and, as uutils shows, breakage in "core" utils can very easily lead to security issues).
I personally find uutils' lack of care concerning, because I've been writing (as a very low-priority side project) a network utility in Rust, and while it isn't aiming to be a drop-in rewrite of anything, I would much rather not attract the same drama.
doas and sudo-rs occupy different niches, specifically doas aims for extreme minimalism and deliberately sacrifices even more compatibility than sudo-rs, which represents a middle ground.
No, once you have an MIT-licensed codebase without a copyright assignment scheme, you no longer have the freedom to relicense it at will. You could attempt to have a mixed-license codebase, which is supported by the GPL, and specify that all new contributions must accept the GPL, but this is tantamount to an incompatible fork of the project from the perspective of any downstream users, and anyone who insists on contributing code under the GPL has the freedom to perform this fork themselves.
This is simply false. You can accept GPL contributions and clearly indicate the names of the contributors as required by MIT. There is no "incompatible fork".
No, GPL and MIT have significantly different compliance requirements. You cannot suddenly begin shipping code with stricter compliance requirements to downstream users without potentially exposing them to legal liability.
Most consumer platforms only allow up to 128/256GB of RAM. If you want more, you likely need a data centre platform. This is again a mismatch between where companies think consumers are and the reality.
I think e.g. AMD missed the boat with the 9950x3d2 by limiting the memory controller. If it were possible to hook it up with 1TB of consumer DDR5 RAM, that would be something to write home about.
Some people, including myself, loathe Nvidia with the fiery burning passion of a thousand suns, and will put up with whatever nonsense is necessary to run without them.
Can you provide a different source on that? The govcloud page you've linked says operated by US citizens, not built by US citizens. I'd be pretty surprised if they did the latter. Standard practice as I understand it is to simply run the standard software in a separate environment. A recent Propublica report [0] pointed out that Microsoft was hiring citizens to escort the actual engineers that aren't citizens, for example.
ASML is in Europe because Philips/NXP is in Europe. The fundamental research EUV is based on was done in the US, then made available to industry via a holding company called EUV, LLC. Intel tried to bring in Canon and Nikon, but was rejected by the US government. Two others were approved, ASML and an American company called SVG. ASML spent a couple decades doing the incredibly hard work of productizing EUV and acquired SVG along the way. Canon and Nikon tried playing catch-up with an Asian consortium, but arrived late to the party.
My experience is that if you write an interface that (rarely) returns NaNs, someone will use it assuming it's never NaN no matter how good the docs are. Then their code does bad things and you have to patiently explain why they're wrong and yes, they are holding isnan() wrong (in C/C++).
When such users are expected, there exists only one solution.
Do not mask the invalid operation exception. That was actually the original recommendation of the IEEE standard: the default behavior should be to mask all exceptions except the invalid operation exception.
When the invalid operation exception is not masked, NaNs are never generated and any NaN present in the input data will generate an exception, which will abort the program, unless the exception is handled.
This behavior avoids the bugs caused by careless programmers. Unfortunately, the original suggestion was not adopted by most programming language implementers, so nowadays the typical default setting is to have all exceptions masked. When the programmers also omit to handle the special values, bugs may remain unnoticed.
Special values need not be handled everywhere, because infinities and NaNs will propagate through many operations, so they will remain in the final results. But wherever a value is not persistent, where it is used in some decision and then discarded, special values like NaN must be handled correctly.
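As a small illustration of that last point (my own made-up sensor example, nothing from this thread): every ordered comparison involving a NaN is false, so an unguarded decision silently takes the "everything is fine" branch instead of failing loudly.

```rust
// Hypothetical example: an alarm threshold check. `NaN > limit` is false,
// so without the explicit guard a corrupted reading would be silently
// treated as "under the limit".
fn over_limit(reading: f64, limit: f64) -> Result<bool, &'static str> {
    if reading.is_nan() {
        return Err("invalid (NaN) sensor reading");
    }
    Ok(reading > limit)
}

fn main() {
    assert_eq!(over_limit(3.0, 2.0), Ok(true));
    assert_eq!(over_limit(f64::NAN, 2.0), Err("invalid (NaN) sensor reading"));
    // The unguarded comparison would have quietly said "not over the limit":
    assert_eq!(f64::NAN > 2.0, false);
}
```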
How many checks are we talking? A well-implemented monotonic system should be able to do tens of thousands of these checks (or more) in the time budget I associate with a heavy page, and start before the entirety of the permissions/feature data is available.
You don't necessarily have to choose one or the other. The Blue lagoon in Iceland is a famous example. The water comes from a power plant nearby.
You're not required to site them that close either, because of how regional the conditions usually are. A couple miles plus or minus doesn't change things too much.
It shouldn't. It's been extensively documented among modern human groups.
The major question is how much our understanding from recent forager groups applies to pleistocene foragers ("ethnographic analogy"). I'm in the generally skeptical camp. Many other anthropologists aren't, particularly those in older generations.
The Pleistocene lasted from 2.58 million years ago, maybe the first time our ancestors figured out tools, to 11,000 years ago, by which point we Homo sapiens had already been around for ~200,000 years. Isn't that too wide a range of humans and ancestors to characterize in one group?
Are you skeptical about 11 kya ancestors doing similar things? Why?
> Isn't that too wide a range of humans and ancestors to characterize in one group?
Yes, that's one reason why I have high standards for arguments from ethnographic analogy.
> Are you skeptical about 11 kya ancestors doing similar things? Why?
Because modern forager groups have survived for centuries on the margins of colonial states. The environment they inhabit is very different from that of late Pleistocene humans, and we should default to skepticism in the absence of other evidence.
>It's been extensively documented among modern human groups.
Do you have some sources? A quick search doesn't pull up much evidence for current hunter-gatherer dependence on natural fire regimes. Or do you mean anatomically modern humans?
Yes, Tasmanians are the best example that comes to mind. They had a mythology developed around lightning and subsequent fires and would then try to keep a fire going as long as possible.
There's a large subpopulation of people flying who seem to have no idea how planes and airports work. Maybe they're sleep deprived or it's their first time flying, but these announcements are targeted at them.
I think it's more likely that the people do know; they just don't care, and it helps them to put their backpack overhead, so they do it anyway. There is minimal/no enforcement.
I'm very much a we-live-in-a-society, follow the rules kind of guy, but if I checked a bag and only have my backpack in the cabin, you bet your ass I'm going to try and find a place for it in the overhead instead of cluttering up where I want to put my feet. The flight attendants can go scold the passenger with the oversized roller + backpack + 20 liter "purse" instead.
Yes, the logical rule would be 1 bag in the overhead per person. If they enforced carry-on sizes strictly and charged less for checked luggage the problem would probably go away.
It has nothing to do with price. I don't check luggage on domestic flights because of the enormous time lag for the airport to give me back my luggage. (There's also "United Breaks Guitars", but that's an independent problem)
If I could walk from the plane to the luggage area and my luggage was already there 90% of the time, I probably would check more things.
However, the US airports simply don't employ enough people to move the luggage around fast enough.
This is 100% correctable by employing more people. But some CEO needs another yacht, so they don't. So I simply don't check luggage.
I remember one time I had to fly back from a business trip on the Wednesday before Thanksgiving. Made me realize there is something about business travelers: they skew towards situationally aware, conscientious types. The opposite of people flying the day before Thanksgiving.
I flew into the Orange County Airport before they tore it down and made it like the others. Felt very civilized. As I get older I find the hostile public spaces and infrastructure more and more annoying.
Unfortunately there's also a large subpopulation of people flying who wear noise-cancelling headphones and have their eyes glued to their phones; choosing to be disengaged from their immediate surroundings.
I think it's important to preserve the integrity of the Venus surface for when we have substantially mined its atmosphere. That's when Venus could begin to present an opportunity for limited human habitability. We'd then have to use mirrors to redistribute sunlight evenly and boundedly across its slowly-rotating lit and dark sides. We could try to create an artificial ozone layer to block UV. We'd still have to neutralize the residual corrosive substance (unreacted chlorine, sulfur, fluorine) using water and alkali chemistry and scrubbers. The raw materials for alkali would come from the planet itself. Water would have to be imported from the space economy, probably from asteroids, also as hydrogen from Neptune's atmosphere using fusion powered lift.