Hacker News | uecker's comments

No, angelic non-determinism is not related to the as-if rule. It essentially says that if there is a choice of which provenance to assign on conversion back from an integer, the one which makes the program valid is assigned. This is basically the same as the explicit UDI rule in TS 6010, except that the latter rule is very clear. The problem with angelic non-determinism is two-fold: a) most people will not be able to reason about it at all, and b) not even formal semantics experts know what it means in complicated cases. Demonic non-determinism essentially means that all possible executions must be valid, while angelic non-determinism means that there must exist at least one valid execution. Formally, this translates to universal and existential quantifiers. But for quantifiers, you must know where and in which order to place them in a formula, which wasn't clear at all from the wording I have seen (a while ago). The interaction with concurrency is also a can of worms.
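To make the quantifier placement concrete, here is one rough way to write the distinction (my own notation, not taken from the TS or any proposal):

```latex
% P is the program, c ranges over the resolutions of the non-deterministic
% choices (e.g. which provenance an integer-to-pointer cast receives).
% Demonic: every resolution must yield a defined execution.
\text{demonic:}\quad \forall c \in \mathit{Choices}.\ \mathit{defined}(\mathit{exec}(P, c))
% Angelic: it suffices that some resolution yields a defined execution.
\text{angelic:}\quad \exists c \in \mathit{Choices}.\ \mathit{defined}(\mathit{exec}(P, c))
```

The subtlety mentioned above is where these quantifiers sit relative to quantification over inputs, threads, and interleavings; different placements give genuinely different semantics.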

I don't think there is a fundamental advantage to Rust regarding provenance. Yes, we lack a way to do pointer tagging without exposing the provenance in C, but we could easily add this. But this is all moot as long as compilers still do not conform to the provenance model with respect to integer and pointer casts anyway, and this breaks Rust too! Rust having decided something just means they live in a fairy-tale world, while C/C++ not having decided means they acknowledge the reality that compilers haven't fixed their optimizers. (Even ignoring that "deciding" means entirely different things here anyway, with C/C++ having ISO standards.)


> But this is all moot as long as compilers still do not conform to the provenance model with respect to integer and pointer casts anyway, and this breaks Rust too! Rust having decided something just means they live in a fairy-tale world, while C/C++ not having decided means they acknowledge the reality that compilers haven't fixed their optimizers.

I think this is a bit of a mischaracterization. While there can of course be bugs in LLVM (and rustc and clang), what sort of LLVM IR you generate matters. To be able to generate IR that conforms to the provenance model of the language you first need to have such a model.

As far as I know (and this matches what I found when searching the Rust issue tracker), there is currently one major known LLVM bug in this area (https://github.com/rust-lang/rust/issues/147538), with partial workarounds applied on the Rust side. There are still some issues with open questions, such as how certain unstable features should interact with provenance.

I think calling the current situation a "fairy tale world" is a gross exaggeration. Is it perfectly free of bugs? No, but if that is the criterion, then the entirety of any compiler is a fairy tale (possibly with the exception of some formally verified compilers).


I am not sure this is a mischaracterization. The C provenance model also exists, even in the form of an ISO TS. The Rust model copied the basic concepts and even the terminology from us. The reason the C model is not in ISO C23 but in a separate TS is that compilers are not able to implement it correctly at this time due to bugs. But neither do they implement the Rust model correctly, because of the same bugs.

One should also point out that basic provenance has been part of the ISO C standard for a long time (though not under this name). That a precise technical specification is needed is only because the exact details were not clear and there are inconsistencies and differences between, and even inside, compilers. Rust having a precise model does not make these problems automatically go away, just as the ISO TS does not.


But LLVM's optimizations aren't sound and this affects Rust too.

Huh? Which optimizations?

LLVM is quite sure that, for example, two pointers to different objects are different. That's true even if in fact the objects both lived in the exact same spot on the stack (but at different times). That's... well it's not what Rust wants but it's not necessarily an unacceptable outcome and Rust could just ask for their addresses and compare those...

Except it turns out if we ask for their addresses, which are the same integer, LLVM remembers it believed the pointers were different and insists those are different too.

Until you call its bluff and do arithmetic on them. Then, in some cases, it snaps out of it and remembers that they're identical...

This is a compiler bug, but apparently it's such a tricky one to fix that after a few years I stopped even checking whether they'd fixed it... It affects C, C++, Rust: all of them can as a result be miscompiled by a compiler using LLVM (it's easiest to demonstrate this bug with Rust, but it's the same in every language). But as you've probably noticed, this doesn't have such an enormous impact that anybody stopped using LLVM.


If there only existed a standardized protocol...

This is very cool. I had a lot of fun doing C++ template metaprogramming two decades ago, and the language has gotten a lot more interesting since. Just realize that you can waste a huge amount of time without doing anything remotely useful.

(And please don't use any of it in any professional context.)


It is strange to use lightdm and gdm as examples, which are both written in C (if nothing has changed recently).

Considering that the gas used for electricity production is imported, but that this amount has been stable over decades in Germany, is a small fraction of overall gas use, and is no longer imported from Russia, I would say this is nonsense. Did the US stop importing from Rosatom, btw?

"We were quite amazed how good those early projections were, especially when you think about how crude the models were back then, compared to what is available now,” https://news.tulane.edu/pr/study-finds-sea-level-projections...

Yeah you have to literally stick your head in the sand to deny what's happening. The models are shit. They suck, and yet the degree to which they are more accurate than not should be enough to convince you.

The CO2 hypothesis was made in the 50s, long before there was conclusive evidence, and yet we are right on the predicted trajectory.


That CO2 emissions will cause warming was predicted first by Svante Arrhenius in 1896. While accurate modelling of all effects may be difficult, the basic effect follows from fundamental physics. There is really no excuse for doubting this.

The basic ideas go back even further than that. Eunice Newton Foote's "Circumstances Affecting the Heat of the Sun's Rays" was published in 1856.

https://en.wikipedia.org/wiki/Eunice_Newton_Foote


The problem of packages not being packaged is not solved by more packaging systems.


The problem of libraries not being packaged is solved by distro-agnostic packaging systems. Why do you think everyone uses PyPI, Cargo, Go modules, NPM, etc. instead of this insane "package your app and all of its dependencies for every distro" idea? Pure lunacy.


It is not difficult to package for the most important distros (the others usually import from them). Those distro-agnostic packaging systems are popular because they basically have no quality control at all, so it is basically no effort to package for them; you just register a GitHub repository somewhere. But this is also why they are full of garbage and have severe supply-chain issues.


> But this is also why they are full of garbage and have severe supply chain issues.

I don't think the solution to supply chain problems is "just make supplying things so shitty and annoying that nobody will bother". Not a good one anyway.


It's part of my job to ship software on Linux, and if anything the part making things "shitty and annoying" is when we have to deal with people/'suppliers' (exclusively) using "foreign" packaging methods.

We had 2 dependencies like that; one we talked to the creators about and helped them ship distro packages, the other we just got rid of. Life is nice now.


I do not think packaging for a distribution is annoying. If you do not bother because you do not want to meet some minimal community standards, maybe your software should not be packaged? And if you think the tooling can be improved, then why not invest there? But this does not justify the existence of alternative packaging systems.


Sure, it is solved just by one with good package coverage.


C has its unique advantages that make some of us prefer it to C++ or Rust or other languages. But it also has some issues that can be addressed. IMHO it should evolve, but very slowly. C89 is certainly a much worse language than C99 and I think most of the changes in C23 were good. It is fine to not use them for the next two decades, but I think it is good that most projects moved on from C89 so it is also good that C99 exists even though it took a long time to be adopted. And the same will be true for C23 in the future.


I know this is a four-day-old comment, but based on your post history I think you are probably the best person to ask to be more specific. So, you start out stating "C has its unique advantages", an assertion I agree with, but more for 'vibes' than because I can articulate the actual advantages (other than average compilation times). If you see this, I would love to hear your list of C's unique advantages.


My list of advantages:

  - long-term stability: the code I wrote two decades ago is still valuable to me
  - short (!) compilation times: this is a huge productivity boost
  - the language is very explicit: one can see and understand what is going on
  - mature tooling: good compilers and many other tools
  - simplicity and elegance: the language is small and there is not much I need to remember
  - portability: supported on many different systems
  - performance: C code can be very performant, and usually it is without much effort
  - lean: there is no bloat anywhere
  - interoperability: it can interoperate with everything
  - no lock-in: it does not lock you into specific frameworks or ways to do things
  - maintained using an open and (relatively) fair process: not controlled by specific companies
  - what needs to be done, can be done: there are never showstoppers

I should also talk about the disadvantages, and say which I think are real and which are not (or not serious):

  - no abstractions: I think this is completely wrong, one can build great abstractions in C
  - no package manager: IMHO languages should not have package managers
  - difficult to learn: I think this is wrong (but see below)
  - safety: partially true, but there are ways to deal with this effectively (also, the advantages of the alternatives are exaggerated)
  - out-dated: partially true with respect to the standard library
  - weird syntax: partially true, but hardly serious
  - not enough investment in tooling: rarely discussed, but I think this is a problem (the tools are mature, but a lot could still be improved)
  - difficult for beginners: the out-of-the-box experience is bad. one needs to learn how to do stuff in C, find good libraries, etc.

I don't think so. As a contributor to GCC, I also wish it hadn't.


Why do you think so?


For two reasons: First, where C++ features are used, they make the code harder to understand rather than easier. Second, it requires newer and more complex toolchains to build GCC itself. Some people still maintain the last C version of GCC just to keep the bootstrap path open.


I'm very far from compiler development, but in my experience, while C++ is hard to read, the equivalent C code would be much more unreadable.


This is not my experience at all. In fact, my experience is that where C++ is used in GCC, the code became harder to read. Note that GCC was written in C and introduced C++ features only later, so this is not hypothetical.

In general, I think clean C code is easier to read than C++ due to much less complexity, and because it does not have language features that make code more difficult to understand by hiding crucial information from the reader (overloading, references, templates, auto, ...).

