Hacker News
What Gödel Discovered (stopa.io)
1000 points by stopachka on Nov 16, 2020 | 259 comments



I’m a mathematician who spent many hours thinking about Goedel’s theorem and adjacent topics. The author in a self-deprecating way says that he’s not a mathematician, but just a programmer. Pay no mind to that: this is one of the best popular expositions of Goedel’s theorem I’ve seen. Everything is very accurately explained, there are no silly mistakes and untruths one often sees mentioned in the context of Goedel’s theorem, and even the original proof is sketched in a very approachable manner, striking a very good balance between getting across all the crucial ideas and skipping most of the distracting technical aspects (which are important and interesting, but I think only to mathematicians, haha). Great job all around.


Dear xyzzy,

Unfortunately, you missed that the author's proof does not actually prove inferential undecidability (sometimes called "inferential incompleteness") of Russell's Principia Mathematica for the following reason:

     The [Gödel 1931] proposition *I'mUnprovable* does **not** exist in Principia Mathematica because it violates restrictions on orders of propositions that are necessary to avoid paradoxes (such as Russell's Paradox). Gödel numbers (and in the author's case Lisp expressions) leave out the order of a proposition with the consequence that the Diagonal Lemma **cannot** be used to construct the proposition *I'mUnprovable*.
Furthermore, existence of the proposition I'mUnprovable contradicts the following fundamental theorem of provability that goes all the way back to Euclid:

     A theorem can be used in other proofs.
See the following article for further details:

https://papers.ssrn.com/abstract=3603021


In case anyone is wondering, this is MIT Professor Emeritus Carl Hewitt:

https://en.wikipedia.org/wiki/Carl_Hewitt


A whole bunch of previous posts have been reposted to this discussion (maybe by a bot?).

Instead of plowing through the disjointed repostings, readers may be better off looking at the articles linked in https://professorhewitt.blogspot.com/.

Also, there is a video here:

https://www.youtube.com/watch?v=PJ4X0l2298k


Wikipedia is contentious and often potentially libelous.

See the following for more up-to-date information:

https://professorhewitt.blogspot.com/


I'm going to need someone to come along and write up a layman's summary of this article. :) Does it change the end result of Gödel? Or is it basically the equivalent of saying what some restricted programming languages are doing, that we can get "good enough" results and not have these problems by restricting what we can do in the language?


Hi GreatQuux!

The article linked below explains why [Gödel 1931] did not prove inferential undecidability of Russell's Principia Mathematica, and likewise why the formalization of [Gödel 1931] in the Lisp proof being discussed is also invalid:

https://papers.ssrn.com/abstract=3603021

However, the article linked above does have a correct proof of inferential undecidability (also known as "inferential incompleteness").

Would be happy to respond to any questions that you might have.


What do you mean by "inferential" and why are there restrictions on what statements are valid?


Hi Mike!

The word "inferential" has to do with being able to be logically inferred, that is, deduced.

Russell's Principia Mathematica specified that each proposition must have an order to block paradoxes such as Russell's paradox.

See the article linked above for further explanation.


That doesn't explain why there must be such a restriction.


So in layman's terms you're saying the system the author tries to prove incomplete by definition removed the parts that would prove it's inconsistent?


Secondly, in the PM-Lisp it doesn't necessarily prove that theorem a proves b; it just shows that b can be a successor of the formulas in a.


Hi Ethn!

Existence of the [Gödel 1931] proposition I'mUnprovable is inconsistent with the following theorem to the effect that theorems can be used in proofs:

      ⊢∀[Proposition Ψ] (⊢Ψ)⇒Ψ


Axioms can be used in proofs, and they are not provable.

If a proposition is not provable, yet raises no issues if regarded as true, then it can be added as an axiom and used in theorems.


It turns out that for powerful theories of Computer Science, there must exist infinitely many propositions that are inferentially undecidable, that is, can be neither proved nor disproved. However, the propositions cannot be specified constructively and so are not very interesting. Currently, there seem to be no propositions interesting to practical Computer Science that are provably inferentially undecidable.


For a statement so short it would be easy to avoid using jargon...


The following theorem says that every theorem can be used in another proof:

         ⊢∀[Proposition Ψ] (⊢Ψ)⇒Ψ
The above theorem is written in completely standard mathematical notation.


What character set or input method is used to create these mathematical characters?


Just to be clear

      ⊢∀[Proposition Ψ] (⊢Ψ)⇒Ψ
is not jargon. Instead, it is standard mathematics.


It absolutely is jargon. The fact that you explained it by saying it's "mathematics" instead of "English" suggests as much. Jargon means that it's technical language specific to a field and not used in everyday vernacular. What you wrote can't even be typed on a standard keyboard. It's jargon.

I'm pointing it out because its use is not so innocent. Jargon is often used to obfuscate meaning, especially when it means something that would otherwise be easy to say plainly.


This brought a big smile to read, thank you :)


There is an extraordinarily minor typo: “PM-LIsp”

I point it out only because your work here is so good that it feels wrong not to smooth out any small splinters.

Thank you for making this!


Oi, thanks for the kind words and the catch! Updated (should take a few minutes to show up)


I think I maybe noticed another typo... The last two axioms don't have matching parentheses? It looks like one more is opened than closed if I counted right.
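For what it's worth, a mismatch like that is easy to confirm mechanically. A throwaway sketch (the function name is mine, not from the article):

```python
# Count unmatched parentheses in an s-expression-like string.
# Returns opens minus closes: 0 means balanced, +1 means one
# extra "(" was opened but never closed, and so on.
def paren_balance(s):
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
    return depth

print(paren_balance("(next (next 0))"))  # 0: balanced
print(paren_balance("(next (next 0)"))   # 1: one paren opened but never closed
```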


What an excellent recommendation to the article. Thank you!


I thought I replied to this post, but I guess not and my reply ended up being a top-level reply, which is just as well. I will just mention my main two quibbles to this otherwise excellent post and others like it so that other people who embark on introductory posts to Godel's results don't fall into the same trap.

1. Please don't bring the notion of truth into an introductory explanation (such as in the section "Power of Numbers" or in the short intro to "Hilbert's Program"). It makes it very confusing, especially in light of Godel's other landmark result, i.e. Godel's Completeness Theorem, which simultaneously applies to many of the same logical systems and can be very, very vaguely described as "all true things are provable." Just stick to incompleteness and consistency at a syntactic level instead of appealing to truth; that is, respectively, the ability to always add new axioms and the absence of syntactic contradictions.

2. The lack of a system S's ability to prove its own consistency is not interesting by itself since we would not trust a proof from a potentially inconsistent system. Hence it is not, in a sense, terribly interesting that we require an infinite tower of increasingly stronger systems to prove the consistency of S and themselves. Rather the interesting direction goes the other way. S cannot prove the consistency of stronger systems, which means we cannot have a "minimal" system we use to explore the consistency of all other systems (again subject to some nuance).


> I thought I replied to this post, but I guess not and my reply ended up being a top-level reply, which is just as well.

Sorry for the confusion. We detach comments that are replies to the top comment but don't respond to anything specific that that comment said. This is standard moderation. Sometimes we post that we did this (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...), but not always.

We do this because users often post replies to the top comment on a page that aren't really replies. Sometimes they do that because they want to get their comment higher on the page (so-called topjacking). Of course that's not always the case.

Both of your posts are fine (except for the fact that they duplicate one another) but they are replies to the article, not to anything that the top comment (https://news.ycombinator.com/item?id=25116881) specifically says. Indeed your sentence "I will just mention my main two quibbles to this otherwise excellent post and others like it so that other people who embark on introductory posts to Godel's results don't fall into the same trap." makes it quite clear that your reply is to the article, not xyzzyz's comment. That's great! But the appropriate level for such a comment is the top level of the thread.

I've therefore done the same again and detached this subthread from https://news.ycombinator.com/item?id=25116881.


Ah fair enough. I meant it to be a reply to temper xyzzyz's praise of the article with what I perceived to be minor flaws in it, but such is as it is.


"I will just mention my main two quibbles"

--

Poking through your history, why do you quibble on the word truth so often? Do we not agree there are statements that are true? Do we not agree there are provable true statements? If there are unprovable statements in any consistent set of axioms, might we also conclude there are unprovable but true statements?


It's because it's a huge can of worms that leads to really grandiose claims that aren't supported by Godel's statements, of the sort the article is already beginning to make.

It also shifts the conversation into becoming fundamentally a philosophical question rather than a logical or mathematical one, which is okay, but considerably changes the table stakes of what background knowledge we need.

> If there are unprovable statements in any consistent set of axioms, might we also conclude there also be an unprovable but true statements?

That is one potential philosophical position. Another philosophical position is that there are no "true" axioms schemes, but rather that axiom schemes are always indefinitely extendable. Truth lies instead in how we choose to apply the axiom schema to the external world, which is outside the purview of Godel's Incompleteness Theorem. Both positions are supported by Godel's Incompleteness Theorem.

For example, do you believe that the Continuum Hypothesis (CH) is true? That is, although it is independent of ZFC, do you believe that objectively we are only allowed to use CH as an axiom, or Not(CH) as an axiom, and that one of those must be objectively wrong? Some people do, some people don't.

Likewise, do you believe that the imaginary numbers are "true"? That is, the axioms that make them up are not artificial statements that somebody came up with in pursuit of an abstract theory, but immutable truths of the universe, and in fact you cannot choose an alternative axiomatization because it would be "wrong"? Some people do, some people don't.

What about the natural numbers?

There are many philosophical positions you can take on the notion of mathematical truth, and the crucial thing is that describing Godel's theorems with reference to truth suggests that Godel's theorems help decide among these notions of mathematical truth, when in fact what happens is that Godel's theorems stretch and shrink to accommodate different philosophical frameworks. The former is why I think a lot of people accord Godel's theorems a mystical status that makes it very difficult to have coherent discussions about them.

To pedagogically illustrate why the notion of truth can be confusing, I'll repeat two questions I've listed elsewhere which I think are good things to meditate on for first-time readers of Godel's Incompleteness theorems and explore why the often grandiose philosophical proclamations arising from Godel's theorems require very careful footing.

I answer them in my post history, but I would recommend you don't look at those before you've come up with answers yourself.

------Repost-----

First, Conway's Game of Life:

Conway's Game of Life seems like it should be subject to Godel's incompleteness theorems. It is, after all, powerful enough to be Turing-complete.

Yet its rules seem clearly complete (they unambiguously specify how to implement the Game of Life well enough that different implementations of the Game of Life agree with each other).

So what part of Game of Life is incomplete? What new rule (i.e. axiom) can you add to Conway's Game of Life that is independent of its current rules? Given that, what does it mean when I say that "its rules seem clearly complete?" Is there a way of capturing that notion? And if there isn't, why haven't different implementations of the game diverged? If you don't think that the Game of Life should be subject to Godel's Incompleteness Theorems why? Given that it's Turing complete it seems obviously as powerful as any other system.
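For anyone who wants to poke at that first question concretely, here is a minimal sketch of the standard rules (the `step` helper is my own naming, not from the article). The point is that the rules pin down the next generation completely, so every faithful implementation agrees:

```python
from itertools import product

# One generation of Conway's Game of Life over a set of live cells.
# Standard rules: a cell is alive next step iff it has exactly 3 live
# neighbours, or it is alive now and has exactly 2 live neighbours.
def step(live):
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2 under every correct implementation.
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)  # True
```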

Second, again, in most logical systems, another way of stating that consistency is unprovable is that the consistency of a system S is independent of the axioms of that system. However, that means that adding a new axiom asserting that S is consistent, or one asserting that S is inconsistent, yields a consistent theory either way. In particular, the new system S' that consists of the axioms of S together with the new axiom "S is inconsistent" is consistent if S is consistent.

What gives? Do we have some weird "superposition" of consistency and inconsistency?

Hints (don't read them until you've given these questions some thought!):

1. Consider questions of the form "eventually" or "never." Can those be turned into axioms? If you decide instead to tackle the question of applicability of the incompleteness theorems, what is the domain of discourse when I say "clearly complete?" What exactly is under consideration?

2. Consider carefully what Godel's arithmetization of proofs gives you. What does Godel's scheme actually give you when it says it's "found" a contradiction? Does this comport with what you would normally agree with? An equivalent way of phrasing this hint, is what is the actual statement in Godel's arithmetization scheme created when we informally say "S is inconsistent?"

At the end of the day, the philosophical implications of Godel's incompleteness theorems hinge on whether you believe that it is possible to unambiguously specify what the entirety of the natural numbers are and whether they exist as a separate entity (i.e. does "infinity" exist in a real sense? Is there a truly absolute standard model of the natural numbers?).


dwohnitmok, I think you make a good point about Completeness missing out on the action in OP's post, which is nevertheless a good post and better than popular treatments. In fact, I remember thinking in class, "What, there is a Completeness theorem as well as Incompleteness?" This is due to a similar phenomenon as we see with quantum mechanics in popular writing, where mostly the sensational-sounding results, which aren't really sensational, get written about.

In any case, my contribution here is something else: The most interesting part of the natural numbers object and its role in incomplete logical systems is to me the more humble notion of... unique prime factorisation.

If that had failed, so too would the Godel numbering system. I am not a number theorist, but this to me would be an interesting spin off from OP's post. The divisibility lattice is a good starting point, but apart from such basic constructions, the actual proof of unique prime factorisation to me is more historic than informative, and steps such as Bezout's Identity are just as "mythical" to me as Godel's Incompleteness Theorem, if we are using bad word choices. Conversely, numbers are just permutations on prime generators, but crucially, usually we start with addition before we start with multiplication.
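To make the dependence on unique factorisation concrete, here's a toy sketch (helper names are mine, not OP's code): encode a sequence of exponents as 2^a1 * 3^a2 * 5^a3 * ...; decoding recovers the sequence exactly because the factorisation into primes is unique. If factorisation weren't unique, `decode` would have no well-defined answer and Godel numbering would collapse:

```python
# Encode/decode a sequence of exponents via prime factorisation,
# the same trick Godel numbering relies on.
def primes():
    # Yield 2, 3, 5, ... by trial division (plenty for a demo).
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(seq):
    n, gen = 1, primes()
    for a in seq:
        n *= next(gen) ** a
    return n

def decode(n):
    seq, gen = [], primes()
    while n > 1:
        p, a = next(gen), 0
        while n % p == 0:
            n //= p
            a += 1
        seq.append(a)
    return seq

print(encode([2, 1, 3]))  # 2^2 * 3^1 * 5^3 = 1500
print(decode(1500))       # [2, 1, 3]
```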


> Truth lies instead in how we choose to apply the axiom schema to the external world

Is this true?


Oof... what's the term for being sunk by the very thing that you're warning against?

I was using a very hand-wavy version of the word "truth" in my reply, trying to frame it in terms of how people talk about capital-T Truth in informal philosophy rather than, say, model satisfiability in model theory (which completely side-steps this question by assuming its resolution).

What I meant was to express a hypothetical anti-Platonist position. One version of mathematical Platonism is a belief that only some subset of "plausible" (i.e. those expressed by a consistent set of axioms) mathematical entities are real and it is the job of logician-philosophers to feel out just what those are. Of course they must appeal to extra-mathematical and extra-logical principles to do so, but that is after all why Platonism is a philosophy rather than a branch of mathematics. This is a simplified version of the philosophy that underlies some efforts to find "the one true set theory" that extends ZFC.

A hypothetical brand of anti-Platonism could argue that every consistent set of axioms is similarly real or unreal. There is no reason to choose one over the other. The only thing that has any objective reality to it is the mapping of those axioms to the real world. That you can do either correctly or incorrectly. Hence if we're talking about capital T Truth (i.e. the reality of the world) it is captured in that mapping, rather than the axioms themselves. Therefore, e.g. there's no point to trying to divine "the one true set theory." Just use whatever you find useful.

Regardless my overall point is that these are all matters of philosophy that cloud the particulars of what is going on with Godel's incompleteness theorems, often by exaggerating their consequences.


> often by exaggerating their consequences.

So you assert, many think differently.

Edit: To clarify, are you asserting that they do /not/ extend into philosophy? What exaggerations are you specifically referring to?


Godel's theorems have philosophical ramifications, but then again so do basically all theorems given the right framing.

It is nonetheless true that Godel's theorems are a particular focal point of discussion around mathematical philosophy (but again one which has surprisingly few ramifications for mathematics as a whole) and a very fruitful one at that. I'm not dismissing the idea of having a philosophical discussion around it, only cautioning that it requires a deep understanding of all the seemingly paradoxical statements that it presents which defy attempts to concisely state its philosophical ramifications.

My point at the beginning of all of this is that I disagree with the pedagogical choice of a lot of introductory posts to the incompleteness theorems to mix in that philosophical content at the beginning. You can always do it later after first understanding a lot of seemingly paradoxical statements that arise from Godel's theorems (such as the ones I offered concerning Conway's Game of Life and the consistent system that proclaims the inconsistency of its subpart).

However, by presenting the philosophical parts up front, that tends to be the thing that people latch onto rather than the inner workings of the theorems (which are extremely important to understand to make sure subsequent philosophical conclusions are coherent), and they latch onto the everyday, woolly meanings of the words "prove" and "truth" rather than their more precise counterparts in the theorems.

The exaggeration in the article I'm referring to is

> For example, it [Godel's incompleteness theorem] may mean that we can’t write an algorithm that can think like a dog...but perhaps we don’t need to.

To address this specific point, Godel's incompleteness theorems have little to say on this point. I think the assumed chain of logic is "algorithms rely on a computable listing of axioms" -> "any such listing must have truths that are unprovable in it" (the Godel step) -> "dogs perceive truth" -> "there are certain things dogs perceive that cannot ever be derived by an algorithm."

But leaving aside the plausibility of the non-Godelian steps in that chain, it's relying on a subtle simplification of what "truth" means that severely complicates the chain. Again, I would highly recommend thinking over the Conway Game of Life and inconsistent consistent theory problems. Those two problems demonstrate that "any such listing must have truths that are unprovable in it" is often an over-simplification (okay so I have a system S and the sentence "S is inconsistent" is consistent with S if S is consistent. What is true here?)

This is very similar (and indeed the similarity runs deeper due to connections between computability and logic) to analogous arguments with the halting problem/Rice's Theorem.

"Rice's theorem says that we cannot prove any abstract property about any program" -> "Static analysis attempts to prove abstract properties about programs" -> "Static analysis is impossible."

Or similar arguments that use the halting problem to argue that Strong AI is impossible.
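For readers who haven't seen it, the diagonal argument behind those halting-problem claims fits in a few lines (a sketch; `halts` is a hypothetical oracle, not a real function):

```python
# Hypothetical oracle: would return True iff program(arg) halts.
# The diagonal argument below shows no correct implementation can exist.
def halts(program, arg):
    raise NotImplementedError("no correct implementation exists")

def troll(program):
    # Do the opposite of whatever the oracle predicts for program(program).
    if halts(program, program):
        while True:  # predicted to halt, so loop forever
            pass
    else:
        return       # predicted to loop, so halt

# troll(troll) would halt iff halts(troll, troll) is False,
# i.e. iff troll(troll) does not halt -- a contradiction,
# so a correct `halts` cannot be written.
```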

More broadly though this sort of "anti-mechanistic" viewpoint appears all over the place in discussions about Godel's theorems to the point that the Stanford Encyclopedia has a whole section dedicated to it that plainly states "These Gödelian anti-mechanist arguments are, however, problematic, and there is wide consensus that they fail." https://plato.stanford.edu/entries/goedel-incompleteness/#Gd...

Having seen some of those arguments I am very inclined to agree with the encyclopedia here.


Thank you for your clarity. This discussion chain may have kicked of an existential crisis. I think you might have exposed me to a Lovecraftian horror. The article and your discussion have been, if not mind altering, very much mind expanding.

I can't say I've followed all of the nuance in your (argument?) comments. I do know I'm going to be chewing on this for a while. I've come to grips with the horror of saccades; fascinating to learn about new ways my mind, uh, decides what is true.

In any case, I appreciate your long thoughtful responses. May not mean much to you, but for at least one reader, it means a lot to me.


Thanks for the reply.

> Godel's theorems have philosophical ramifications, but then again so do basically all theorems given the right framing.

So not any more or less than any other theory?

> and there is wide consensus that they fail.

I think this consensus has more to do with presuppositions than logic considering the potential implications of Godel's theorem.


Thank you, reading this comment and some of yours further down, thinking about the topics you set out, was very interesting.

To offer possibly a comment to yours about the notion of "truth", in the article posted, when I read "true" (e.g. in the 1st paragraph of chapter Hilbert's Program) and next to it "false", I immediately made reference in my mind to the true/false duality of programmers. I think that helped in understanding the article, but coupled with a footnote with your details would certainly help the reader think further on the topic.

cheers :-)


Well I'm glad it helped!


Model Theory formalizes the notion of "truth" in the foundational mathematics of computer science. The axioms of powerful foundational theories of computer science have just one model up to a unique isomorphism. See https://papers.ssrn.com/abstract=3418003 and https://papers.ssrn.com/abstract=3457802

    The [Gödel 1930] "completeness theorem" is *false* for the
    foundational theories of Computer Science because these
    theories are inferentially incomplete, that is, not every
    proposition can be proved or disproved.


Dear dwohnitmok,

Actually, powerful strongly-typed theories used in Computer Science can prove their own consistency!

See the following article for how this is done:

https://papers.ssrn.com/abstract=3603021

As you pointed out, such a proof does not mean that it is not operationally possible to derive a contradiction using axioms and rules of inference.

Instead our confidence in the consistency of a powerful strongly-typed theory is because there is just one model of the axioms of the theory up to a unique isomorphism, which formalizes the concept of "truth" that you mentioned in your post.

For example, see the following article:

https://papers.ssrn.com/abstract=3457802


I looked up your papers... Then I noticed you have also worked on paraconsistent logic. So to just boldly go on a completely new tangent and ask a question I have long sought to ask: do you know a good introduction text for paraconsistent logic, for a graduate level computer science student?


Hello Unknown_apostle,

Thanks for looking at the articles!

The most up to date analysis of paraconsistent logic starts on page 11 of the following article:

https://papers.ssrn.com/abstract=3603021


> It makes it very confusing, especially in light of Godel's other landmark result, i.e Godel's Completeness Theorem which simultaneously applies to many of the same logical systems and can be very very vaguely described as "all true things are provable."

If people want to understand how to reconcile this very vague statement with Goedel's incompleteness theorem, which appears to say that some "true" statements are in fact not provable, here's the rough idea: we distinguish between "theories", which live in the syntactic world, and "models", which are meant to represent semantics. Theories are like the specification of an interface of a programming module, and models are actual implementations of said modules. If one can implement an interface to satisfy all requirements of the specification (that is, all the axioms and the theorems that the axioms entail), the existence of such an implementation shows that the specification is self-consistent: in math, we say that theories that have a model are consistent. What Goedel's completeness theorem says is the converse: consistent theories always have a model; in programming talk, a non-contradictory specification can always be implemented.

Here's why it means that "all true things are provable": if something is supposed to be a true fact about the theory, it has to be a property of every model of said theory; that's the lowest bar for any notion of "truth" to make sense. But one can easily show, using Goedel's completeness theorem, that if something is true in every model of the theory, it has to be provable: indeed, suppose that statement f is true in every model of theory T, but is actually not provable from T. We can then extend theory T by the statement "not f", and the resulting theory T' = {T, "not f"} is still consistent, because for it to be inconsistent means exactly that T can prove "not (not f)", which is equivalent to f, which we assumed T cannot prove. Then, Goedel's completeness theorem says that this extended theory, since it's consistent, must have a model, M'. But a model of this extended theory T' is also a model of the basic theory T; if you implement a larger specification, your implementation also implements any subset of it. But f was supposed to be true in every model of T, and so f is true in M', but also "not f" is true in M', which is a contradiction. Hence, f must be provable from T.
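The argument above compresses nicely into symbols, with ⊨ for "true in every model" and ⊢ for "provable":

```latex
\begin{align*}
&\text{Assume } T \models f \text{ (i.e.\ } f \text{ is true in every model of } T\text{) but } T \nvdash f.\\
&\text{Then } T' = T \cup \{\lnot f\} \text{ is consistent (otherwise } T \vdash f\text{).}\\
&\text{By completeness, } T' \text{ has a model } M'\text{, and } M' \models T,\ M' \models \lnot f.\\
&\text{This contradicts } T \models f\text{, so } T \models f \implies T \vdash f.
\end{align*}
```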

Why doesn't it conflict with Goedel's incompleteness theorem? After all, the Goedel sentence constructed in the proof of Goedel's incompleteness theorem is supposed to express something true but not provable. Well, the idea is that it is not true of the theory of natural numbers, but it's true of the particular model of natural numbers that we have in mind when we think of what natural numbers are, the so-called standard model of natural numbers, where all of the numbers are of the form 0 and, using OP's notation, (next (next (next ... (next 0) ... ))). As it turns out, there exist other models of natural numbers, which have extra, "non-standard" natural numbers which are not of that form: numbers you cannot reach by just applying the successor operation a finite number of times starting from 0. These models are really weird, but they still must abide by all provable rules of the behavior of natural numbers. One can still add them, there must be some "prime" non-standard numbers that aren't products of any smaller numbers, the non-standard numbers must also split into a unique product of prime numbers (some of which must necessarily also be non-standard), and so on. This is all crazy, and can make your mind twist, so only think about it at your own peril.
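A toy sketch of what "standard" means here (helper names are mine): the standard naturals are exactly the values you can write as a finite term in OP's notation, and you can translate back and forth mechanically. Non-standard elements are precisely those with no such finite term:

```python
# Build the numeral for n in OP's notation: 3 -> (next (next (next 0))).
def to_numeral(n):
    s = "0"
    for _ in range(n):
        s = f"(next {s})"
    return s

# Recover n by counting applications of next.
def from_numeral(s):
    return s.count("next")

print(to_numeral(3))                # (next (next (next 0)))
print(from_numeral(to_numeral(3)))  # 3
```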

To sum up, the notion of "truth" with respect to theories coincides with provability, but not so when you start talking about things true in particular models. This means that we cannot describe all true things about specific models, and in particular about standard natural numbers, using standard logic, but if something does follow from the theory and is true in every of its models, it is in fact provable.


I am trying to follow, but I feel I am hanging

Goedel incompleteness theorem -> apply this to "theories"

Goedel completeness theorem -> apply this to a specific "model", of the theory

So there can be "something true but not provable" in the theory, and "we cannot describe all true things about specific models", but ....what?


The answer you got from lmm and others is wrong (I am a professional mathematician and did research in logic).

The completeness theorem says simply: if T is a first-order theory (list of axioms in first-order logic), any sentence true in every model of T is provable by logical deduction in T.

The first incompleteness theorem says: if T is a consistent, recursively enumerable theory that contains a sufficient amount of Peano arithmetic, then there exists a sentence which is neither true in all models nor false in all models. In other words, there is a sentence which is true in at least one model of T, and false in another.

The second incompleteness theorem says that if T can interpret Peano arithmetic, then we cannot prove the consistency of T within T.

So a tl;dr:

-Completeness: any SENTENCE true in ALL models is PROVABLE - applies to all first order theories

-1st Incompleteness: there EXIST sentences which are TRUE in some models, FALSE in others. - Applies to theories that contain enough arithmetic

-2nd Incompleteness: if a sufficiently strong system is CONSISTENT, we CANNOT PROVE that CONSISTENCY within the system.

NB: of course, if you have a sufficiently WEAK system, like the axioms of group theory together with "FOR ALL x FOR ALL y (x=y)", then that theory would be COMPLETE and Godel's incompleteness theorem does not apply here.


Ah... this also has nuances. Although perhaps the unspoken assumption of your post is that you are trying to explain the incompleteness theorems through the lens of first-order logic? In which case your definitions are completely fine.

But just to be clear, your summarization of the first incompleteness theorem is through the lens of model theory and therefore predicated on the completeness theorem and doesn't hold in its absence. The canonical example is second-order PA under full second-order semantics, which has only one model (this is also true of second-order ZFC). Hence there do not exist sentences of second-order PA which are true in some models and false in others.

To fully flesh out the syntactic results of Godel's incompleteness theorems and their semantic consequences when Godel's completeness theorem holds, there's the following syntactic -> semantic chains.

1st incompleteness: for any consistent, recursively enumerable theory containing enough arithmetic, there exist sentences which can neither be proved nor disproved within it. Therefore by Godel's completeness theorem there are sentences in the theory which are satisfied by some models and whose negations are satisfied by other models.

2nd incompleteness: the sentence Con(T) corresponding to the informal sentence "the theory T is consistent" cannot be proved within the theory T itself assuming T contains enough arithmetic. Therefore by Godel's completeness theorem there are models of T where Con(T) is satisfied and models of T where Not(Con(T)) is satisfied.

As I said in another post, while Godel's completeness theorem is an essentially model-theoretic result (and it is the foundation that all other model theory builds on), Godel's incompleteness theorems are fundamentally syntactic results that live in the realm of proof theory, rather than semantic ones that live in the realm of model theory, although they have deep implications for the latter.

(Also to be extraordinarily pedantic, as you point out elsewhere, a sentence is true if it is satisfied by all models. Hence technically it is a misnomer to say that a sentence is true in a given model. Rather a sentence is satisfied by a given model. It's a common enough bit of informal terminology, but I just want to point it out for people who aren't familiar with model theory so that they don't get tripped up trying to square different formal definitions of "true.")


The way I see it is that both theorems are statements about the relationship between a theory and models of that theory.

Goedel's completeness theorem tells you that for a theory in first-order logic, there's a model that corresponds exactly to that theory: anything that's true in that model can be proven in the theory, and vice versa.

Godel's incompleteness theorem tells you that there's no such model for any system that includes the usual "full" version of arithmetic. A model is a very concrete thing, so any question you ask about the model has an answer - but you don't necessarily know whether that answer is a fact about the theory (i.e. true in all models of the theory - which any statement that's provable would be) or just a fact about that particular model.

It's like if you write some C code, run it, and it prints 4. Does that mean your program always prints 4 according to the C standard? Or did your program do some undefined behaviour, and it just happened that with your particular compiler it printed 4?


> there's a model that corresponds exactly to that theory: anything that's true in that model can be proven in the theory, and vice versa.

Unfortunately it doesn't work if you swap out "for all models" with "there exists a model."

Take the theory of the natural numbers with the new symbol "Special" and the axiom "there exists a number n such that Special(n) holds." In every model of that theory there will be a concrete n that is Special (say 6). However, the theory will never be able to prove that a specific n is Special, precisely because it is under-specified which n it is.
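Your under-specification point can be sketched concretely. Here's a toy Python illustration (the dict-based model encoding and all names are invented for this sketch): two structures both satisfy the one-axiom theory, yet disagree on which n is Special.

```python
# Toy sketch (invented encoding): a model is a dict with a finite
# domain and an interpretation of the predicate symbol "Special".

def satisfies_exists_special(model):
    # The single axiom: there exists an n such that Special(n) holds.
    return any(model["Special"](n) for n in model["domain"])

# Two models of the same one-axiom theory, with different witnesses.
model_a = {"domain": range(10), "Special": lambda n: n == 6}
model_b = {"domain": range(10), "Special": lambda n: n == 3}

print(satisfies_exists_special(model_a))  # True
print(satisfies_exists_special(model_b))  # True
# They disagree on every specific witness, so by completeness the
# theory cannot prove any particular sentence "Special(k)".
print(model_a["Special"](6), model_b["Special"](6))  # True False
```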

> Godel's incompleteness theorem tells you that there's no such model for any system that includes the usual "full" version of arithmetic.

No, in conjunction with Godel's completeness theorem, the incompleteness theorem would rather say that there is no theory that is strictly only the "full" version of arithmetic. Any theory will always be too broad and have too many models that satisfy it, only one of which is the "usual" model of arithmetic.

EDIT: Ah, I see I misinterpreted your statement the first time around, but my objections still hold with modification. Change "the theory of the natural numbers" to a far weaker theory, e.g. the theory of Presburger Arithmetic, and everything else still holds.


> Take the theory of the natural numbers with the new symbol "Special" and the axiom "there exists a number n such that Special(n) holds." In every model of that theory there will be a concrete n that is Special (say 6). However, the theory will never be able to prove that a specific n is Special, precisely because it is under-specified which n it is.

Hmm. So which n is special in the model that you get from the model existence theorem? Doesn't it all get somehow equivalence-classed away?

> No in conjunction with Godel's completeness theorem, the incompleteness theorem would say rather that there is no theory that is strictly only the "full" version of arithmetic. Any theory will always be too broad and have too many models that satisfy it, only one of which is the "usual" model of arithmetic.

Sure. The point is that there's always a gap between the theory and the model, an "implementation-defined" part of the model.


> Hmm. So which n is special in the model that you get from the model existence theorem? Doesn't it all get somehow equivalence-classed away?

The model existence theorem relies on extending your original language (and therefore your theory) with a bunch of new constants. So in this case you would end up conjuring up a specific new constant symbol just for `Special`.

You then "forget" about all those constants when you retract back to your original theory and so you lose the one-to-one relationship.

So it is true that if your original theory is Henkin (i.e. there is a constant symbol witnessing every single existential statement you have) and maximally consistent, then there exists a model that coincides exactly with your theory, in particular the model that is constructed from all your constant symbols modulo equality in your theory.

But most theories are not Henkin. And moreover most theories are not maximally consistent (since that would imply completeness in the sense of Godel's incompleteness theorems). In particular the theory you get out of the model existence theorem is usually not computably enumerable, so you can't actually write down your extended theory.

So most theories do not have that nice property that they correspond exactly with the semantic properties of a model.


Just want to point out that your answer is essentially incorrect. Godel's completeness theorem has nothing to do with the existence of models. It is about the provability of sentences that are true within all models.

Also, the incompleteness theorem doesn't say anything about the nonexistence of a model. It gives the existence of at least one sentence X (for sufficiently nice theories that include enough arithmetic) such that there ARE two models where X is true in one and false in the other.


> Godel's completeness theorem has nothing to do with the existence of models.

I wouldn't say that it has "nothing to do" with the existence of models. It wasn't Goedel's original formulation of the theorem, but these days one of the most popular, if not the most popular, statements of it is to say that "if theory T is consistent, there exists a model of it".


Another example of why I don't like the word "truth" because I think it does more to muddy the situation than clarify it.

Let's drop "truth" for a sec.

Let's say I come up with a series of axioms about cows. They describe spots on cows, how many legs cows have, etc.

Now I come up with a sentence "all cows have spots." There are two ways of proving this sentence. One is a "semantic proof." That is I go and round up every cow and examine it and make sure they all have spots. Another way of proving this sentence is a "syntactic proof." I list out a series of axioms defining what cows are. Then I do a series of string manipulations on my axioms about cows according to an allowed ruleset to arrive at the statement "all cows have spots."

Semantic proofs are hard. They're annoying for finite groups of things and impossible to do for infinite groups of things. If possible we would like to use syntactic proofs. But syntactic proofs face two hurdles. The first is how to know we've applied our axioms correctly. For example, maybe one of my axioms is "all cows have 4 legs," and then I just conveniently define the symbol "4" to mean three. You can't really get around that problem. You kind of just have to tell people, don't mess around with the meaning of words.

The second hurdle, though, concerns our ruleset. That is, string manipulations can be of an arbitrary form. How do I know, even if I've applied my axioms correctly, that my ruleset isn't crazy? Something like, anytime you see a sentence S, you're allowed to replace it with Not(S) if you want.

Godel's completeness theorem says for a lot of reasonable rulesets (including most of the ones we use for mathematics), semantic proofs and syntactic proofs coincide. You can decide to either go out and look at every cow or you can use your axioms about cows to move symbols on a sheet of paper around. As long as your axioms about cows capture enough information about whether cows have spots, with certain popular rulesets, you'll get the same result.

Put another way, as long as the result you're looking for is relevant to the axioms you've listed, either a semantic proof or a syntactic proof is valid.
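As a toy illustration of the two proof styles (everything here, the herd, the axiom strings, and the single rewrite rule, is invented for the sketch): a semantic proof inspects every individual, while a syntactic proof only pushes strings around.

```python
# Semantic proof: round up every cow and inspect it directly.
# Only feasible because the herd is finite.
herd = [{"name": "Bessie", "spots": True},
        {"name": "Clover", "spots": True}]

def semantic_proof_all_have_spots(cows):
    return all(cow["spots"] for cow in cows)

# Syntactic proof: never look at a cow; manipulate strings according
# to one allowed rule (from "A" and "A -> B", conclude "B").
axioms = {"cow(x)", "cow(x) -> spots(x)"}

def syntactic_proof(goal, axioms):
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            if " -> " in s:
                premise, conclusion = s.split(" -> ", 1)
                if premise in derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
    return goal in derived

print(semantic_proof_all_have_spots(herd))   # True: checked the world
print(syntactic_proof("spots(x)", axioms))   # True: pushed symbols around
```

Both routes agree here, which is the completeness-theorem situation described above.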

Okay, but what about statements that aren't relevant to the axioms I've listed? Things like the weight of a cow? Or what about statements that are under-specified? For example if I just say as an axiom "some cows have spots," does that mean all cows have spots or that some don't?

Through the lens of Godel's completeness theorem this just means that our list of axioms is applicable to a lot more situations than we're giving it credit for. Our axioms don't talk about the weight of a cow? Well then they could be used for situations where cows are weightless or when they have weight. Our axioms don't specify whether all cows have spots? Well they can be used in situations where all cows have spots as well as situations where some cows don't have spots.

No biggie. That just means the only things relevant to our axioms are those properties in common across all the situations we could apply our axioms. And anything that can be semantically proved in all of these situations is also something that can be syntactically proved.

Okay so that concludes Godel's completeness theorem. Let's start going into his incompleteness theorems through the same lens.

Cows are really complicated things. They have tons and tons of different properties, so it's no surprise that our list of axioms will probably always under-specify them.

What if we strip things down to really really simple things. For example the natural numbers? Can we have a list of axioms that apply to the natural numbers and pin down every property about them?

The answer is no. Any list of axioms we come up with will always be "too broad." There are things other than the usual natural numbers (it's already a bit of a minefield to privilege one of them as being the "usual", but we'll ignore it for now) that I can present to you that also fulfill those axioms. "Some cows have spots" is mum on the issue of whether a certain cow doesn't have spots, and so can be used to describe either a herd of cows that all have spots or a herd split half-and-half with spots and without. In the same way, any theory in a logic that simultaneously fulfills both Godel's completeness theorem and incompleteness theorems can be applied to more than one unique situation. Moreover, the things the system cannot syntactically prove are exactly the things that have different semantic proofs in different situations. For example, because I can't have a series of string manipulation steps that result in the sentence "all cows have spots" (Godel's incompleteness theorems), there's gotta be some situations where all my axioms apply and all cows have spots, and other situations where at least one cow doesn't have a spot (Godel's completeness theorem).

Hence through this lens Godel's incompleteness theorems aren't really about capital-T Truth so much as they are about our ability to completely specify infinite objects. If you and a friend decide to start listing out axioms to try to describe something like the natural numbers, no matter how many axioms you list out there is always something left unspecified.


"True" in this context is a technical definition, not some hand-waving thing. A sentence X being true means every model satisfies X. Provable means logical deduction within the theory proves X. So actually, Godel's theorems are exactly about truth and provability.


Yes if you appeal to model theory you're on firm ground with truth. But that's not how most informal introductions to the incompleteness theorems go (and indeed not how this article went about it).

More to the point, Godel's incompleteness theorems are theorems of proof theory, not model theory. Their proofs can be carried out with no recourse to models at all. And I think they are far less prone to being misunderstood when carried out without recourse to models and by extension without recourse to semantic truth.

You can of course choose to interpret them in model theory if you're working in a semantically complete logical system such as first-order logic with the usual semantics or second-order logic under Henkin semantics, and it is often illuminating to do so once you have those fundamentals. But I think in an informal setting those will tend only to confuse rather than clarify. And importantly the incompleteness theorems still hold in contexts where you don't really have a meaningful model theory to speak of.


People act like Godel’s Incompleteness Theorem is all about how formal systems are limited in their level of “insight about truth”. But actually it’s just the same observation as the Halting Problem: you have a system that tries to reason about the behavior of other systems (like Turing Machines or axiomatic set theory), but it can’t possibly always introspect about its own behavior because you can also configure it to behave differently based on its own conclusion about its behavior.

What if I ask you to predict whether you’re going to go left or right, but I also tell you that I’m going to force you to go the opposite direction as what you predict? Then the set of things you can accurately predict is Incomplete. But that’s not a profound limitation of human consciousness, it’s just the familiar paradox of any fortune teller or time travel scenarios.
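The fortune-teller setup above can be sketched directly. In this toy model (all names invented), a "predictor" is any function claiming to answer whether a named program halts, and the contrarian consults the predictor about itself and then does the opposite:

```python
# Toy diagonalization behind the halting problem (and, by analogy,
# Godel's construction). A "predictor" is any function claiming to
# answer whether the named program halts.

def contrarian_behavior(predictor):
    # Returns True iff the contrarian actually halts, given this predictor.
    prediction = predictor("contrarian")
    if prediction:       # predicted to halt -> loop forever (does not halt)
        return False
    else:                # predicted to loop -> halt immediately
        return True

# Whatever a predictor answers about the contrarian, it is wrong.
for answer in (True, False):
    predictor = lambda name, a=answer: a
    assert contrarian_behavior(predictor) != predictor("contrarian")
print("every predictor is wrong about the contrarian")
```

No candidate predictor survives the check, which is the whole content of the paradox: the set of programs a predictor can be right about is necessarily incomplete.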


While this is true, it is surprising that it is impossible to engineer a system to be complete and consistent about arithmetic, as opposed to systems which are not engineered well, like your 2nd paragraph.

If I take "1 is even", "1 is odd" and "no number can be even and odd" as my axioms, then there is obviously a problem, but of my own doing.


Russell and Whitehead's system is well-engineered; that's why it doesn't prove a contradiction (presumably) while your badly engineered system does. The issue is that R&W made their system powerful enough that it's also incomplete, due to the fact that their system contains smartass statements like G that are hell-bent on creating paradoxes if given the chance.

Before Godel's time, people just didn't have enough experience with computers to realize that you have to deal with code injection attacks every time you try to build a powerful platform of any kind. In this case, Godel Numbering is the hack that allows code injection into a formal system that's supposed to just be highly insightful about properties of the infinite world of natural numbers.


Russell's Principia Mathematica (PM) is indeed better engineered than its predecessor by Frege because it has orders on propositions. Because of orders on propositions, PM does not allow the [Gödel 1931] proposition I'mUnprovable. Furthermore, adding the proposition I'mUnprovable would make PM inconsistent.

    The Gödel number of a proposition in PM is itself
    "incomplete" because it *doesn't* include the order of the
    proposition.  Allowing its Gödel number to represent a
    proposition is indeed a kind of "code injection" attack,
    which if allowed would make PM inconsistent.


Because of the halting issue you cannot summarise reality into a set of axioms and rules, which was the original purpose of mathematics. Therefore you cannot say mathematics is right or true without adding at the end: "this actually could all be wrong".


> Because of the halting issue you cannot summarise reality into a set of axioms and rules

Conway's Game of Life is a kind of reality summarized into axioms and rules, and most likely our universe/multiverse is on track to being similarly summarized. The halting issue doesn't get in the way of that achievement...

Once you have the intuition for why the Halting Problem (i.e. fortune teller paradox) is obvious, then Godel's proof is just an XSS attack on axiom systems whose designers think they're only "about numbers", like how CSS is designed to be about styling but it's also Turing-complete and vulnerable to XSS.

Russell and Whitehead were trying to simultaneously (1) invent a system capable of proving highly insightful claims about the infinite space of numbers, like "there doesn't exist a number between 1 and infinity with property P" but (2) not a system that can encode a Turing-equivalent agent that creates paradoxes if it tries to predict its own future.

Godel was just the first to point out that condition (2) was already met just by supporting Peano Arithmetic, the same way a modern computer science undergrad can point out that CSS is Turing-complete or your regex-based HTML validator is vulnerable to XSS attacks.


CSS is not Turing-complete - it is as Turing-complete as a simple TXT file. Deciding whether some sequence of bytes resembles a CSS file is very possible. Deciding whether a stylesheet can be successfully applied to a website is also decidable. Of course, it is undecidable whether a certain pattern emerges if a stylesheet is applied to an infinitely large HTML file. But then, it is also undecidable whether some TXT file appears in an infinitely large HTML file.


It's Turing complete if you let a human feed a bunch of clicks to proceed along the computation steps. [1] It also allows embedded JS (in some browsers), which is Turing complete. My point is that languages really quickly surpass the Turing complete barrier if you let them do fancy things. Godel realized that proving interesting statements about unbounded natural numbers is quite a fancy thing.

[1] https://stackoverflow.com/questions/2497146/is-css-turing-co...


This is not vital to the point you’re making but XSS has nothing to do with CSS.


In this case the XSS vector is the fact that CSS allows running JavaScript in some browsers [1], but still, a lot of systems that are supposed to be weak and domain-specific are discovered to be Turing-complete, e.g. Conway's Game of Life.

[1] https://stackoverflow.com/questions/3607894/cross-site-scrip...


Dear Darlthus,

The predicate Halt<aType>[anExpression] is true if and only if anExpression of type aType halts.

The predicate Halt is inferentially undecidable, that is, it is not the case that for every expression anExpression of type aType that |- Halt<aType>[anExpression] or |- ~Halt<aType>[anExpression].

Inferential undecidability does not mean that mathematics is wrong.

Of course, it is possible to have incorrect mathematical proofs, such as the incorrect proof in [Gödel 1931] for the inferential undecidability of Russell's Principia Mathematica. (There is a correct proof here: https://papers.ssrn.com/abstract=3603021)


Thank you for the paper.

Of course proofs can still be correct; it's just that we cannot cope with infinities very well without talking about computational limits.

My argument was hyperbolic, I regret, but it was less about what's incorrect and more about what's incomplete. The aspiration of early mathematicians was to discover a platonic ideal. Instead of that we only got useful tools that are always* up for re-interpretation depending on context.

An example of this is Euclid's fifth postulate, which turned out to be consistent with many other interpretations of geometry too, not just the single one originally intended. It turned into a tool of formal scaffolding instead of an omnipotent "truth", going from absolute to relative in power.

*In cases where they involve infinities. Including simple expressions like "take N to be an integer"


at the end: maybe not even wrong


Hi Liron,

Your intuition is pretty good.

A correct proof of inferential incompleteness of Russell's Principia Mathematica can be constructed using the computational undecidability of the halting problem.

See https://papers.ssrn.com/abstract=3603021

   However, the [Gödel 1931] proof is incorrect for reasons
   mentioned elsewhere in this discussion.


This is really excellent. Amazing how much clearer the author’s Lisp code makes things (even to a non mathematician and non lisper). It’s rare to find explanations of difficult topics that are this well communicated, but I hope it becomes a trend.


Thank you for the kind words! :)


I agree and I want to add that I particularly appreciated how you consistently, in programming terms, communicated the type of things. Eg:

    (proves a b)
So far saying “the sequence with the Gödel Number a proves the formula with Gödel Number b”

In my experience, most mathematicians would have instead written "So far saying 'a proves b'". Repeating the type of the parameters consistently made it significantly easier for me to follow along. I hope this becomes more common.


Something I've always been curious about: does Gödel's theorem imply an infinity of inconsistent statements, or is the "this statement cannot be proven..." statement the only one? If it's the latter, then of what practical significance is that single statement? If a system is incomplete only in that regard, couldn't one redefine incompleteness to exclude it, and render the system complete for all practical purposes?


Others have explained why you can't wave away inconsistency (principle of explosion) nor incompleteness (adding new axioms just creates a new axiomatic system with its own Godel sentences). However you might also find it interesting what incompletenesses exist in our own mathematical system (ZFC) -- the most well-known example is the Continuum Hypothesis[1]:

> There is no set whose cardinality is strictly between that of the integers and the real numbers.

Or to put it another way, there exists no intermediate type of infinity between countable infinities (the set of integers) and uncountable infinities (the set of real numbers).

The CH is independent of ZFC -- both CH and its negation can be included as new axioms to ZFC and both versions are logically consistent if and only if ZFC is -- meaning that being unable to prove or disprove the CH is an incompleteness in ZFC.

[1]: https://en.wikipedia.org/wiki/Continuum_hypothesis


> being unable to prove or disprove the CH is an incompleteness in ZFC.

It's only incompleteness if the CH is true or false at the semantic level, "outside" of the logic system under discussion.

But the CH may be neither true nor false, semantically, if the meaning of "existence of a set whose cardinality is strictly between that of the integers and the real numbers" strictly depends on the axioms and logic used to define sets and real numbers.


This response is not quite right. The "It's only incompleteness [...]" sentence does not make sense.

CH is independent of ZFC, period, as proved by Cohen. Talking about 'semantic level' does not make sense.

CH is an example of the incompleteness of ZFC. There are models of ZFC in which CH is true and models in which CH is false.


Trying to understand logic. What is a model, and how does it differ from theory?

For a specific example, what is a model of ZFC? Is it just another theory, one which includes ZFC and a few more axioms? Why not call it a derived theory or a subset theory?


This explanation is somewhat informal, but it gets the point across: a theory is a set of function symbols, relation symbols, and axioms governing them. A model is a specific set with an interpretation of those function and relation symbols that satisfies every axiom of the theory.

It's much easier to understand if we take an example. An example of a theory is the single sentence:

"There exists an X and there exists a Y such that X is not equal to Y."

(Of course typically in logic you would use logic symbols, but here I am writing out in an English sentence.)

Now, a model of this theory is the set {1,2}. Another model is the set {1,2,3}. More generally: any set with at least two elements is a model of that theory. The "function symbols" and "relation symbols" can be introduced in the language to talk about operations like addition and multiplication.

For example, the theory of groups uses the language of groups with a binary function symbol representing group multiplication. Any group (such as the integers with addition or invertible matrices with matrix multiplication) is a model of that theory.

So: theories are sets of axioms in some language, and models are sets together with actual functions/relations that satisfy those axioms.

Models of ZFC are a little bit counterintuitive. But they are single sets that interpret all the axioms of ZFC, rather than actual sets that we use in informal mathematics. Models of ZFC can be quite unusual because of the incompleteness theorem, and there are infinitely many models because of this (such as some in which CH is true, etc.).
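For a hands-on version of "a model is a set with interpretations satisfying the axioms," here is a brute-force check that a finite structure, Z/3 with addition mod 3, is a model of the group axioms. This is only a toy sketch; real model theory handles arbitrary, often infinite, structures where such exhaustive checking is impossible.

```python
# Brute-force check that Z/3 with addition mod 3 satisfies the group
# axioms. "op" is the interpretation of the binary function symbol.

domain = [0, 1, 2]
op = lambda a, b: (a + b) % 3

# Axiom: associativity, for all a, b, c.
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in domain for b in domain for c in domain)

# Axiom: there exists an identity element e.
identities = [e for e in domain
              if all(op(e, a) == a and op(a, e) == a for a in domain)]

# Axiom: every element has an inverse with respect to the identity.
e = identities[0]
has_inverses = all(any(op(a, b) == e for b in domain) for a in domain)

print(associative, identities, has_inverses)  # True [0] True
```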


Hm? I thought (in)completeness was just about whether or not, for each well-formed formula, either there is a proof of it, or a proof of its negation.

The CH is a syntactically valid statement in ZFC.

So, shouldn't the fact that ZFC cannot prove or disprove CH, be an example of ZFC being incomplete, regardless of whether CH is in fact true, false, or not-a-proposition-that-has-a-truth-value ?


Ok. If we are being careful, there are different kinds of completeness and we should specify which one we're talking about.

Here's a good list: https://en.wikipedia.org/wiki/Completeness_(logic)#Forms_of_...

The existence of a proof within the logic system for every well-formed formula or its negation is "syntactical completeness".


Oh cool, I was (at least partially) wrong. (Or, I was missing something important at least.) I appreciate the correction/elaboration.


I thought that independent (undecidable) statements are totally different than the Gödel sentences which demonstrate incompleteness. The latter is a statement which is true in the axiomatic system but which cannot be proven using the axiomatic system. The former is just a statement that essentially has no truth value in the axiomatic system.


Your thoughts are approximately correct.

The Godel sentences are of a different character than the continuum hypothesis (CH) because the Godel sentences are simple first-order arithmetic statements, while the CH is a higher-order, a.k.a. analytic statement. A Godel sentence can be assigned a truth meaning via Tarski's definition of truth independent of the axiom system in a way that is much harder to do with the CH.

Basically a Godel sentence says something about whether a given piece of software terminates when run on an ideal computer (specifically a piece of software that hunts for a proof of a contradiction within a specified axiom system). I'll argue that whether a specific piece of software would halt or not when run on an ideal machine has a definite truth value independent of any axiom system. Whereas CH doesn't really afford such a software interpretation.
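That software interpretation can be sketched like this. The proof checker below is a stub standing in for a real one (checking a given finite proof against the axioms and inference rules of a theory T is decidable, even though the search for a proof may run forever):

```python
# Sketch of the program behind a Godel-style sentence: enumerate every
# candidate proof in a theory T and halt if one proves "0 = 1".
from itertools import count

def is_valid_proof(candidate, conclusion):
    # Placeholder proof checker: here we pretend T never proves 0 = 1,
    # i.e. T is consistent, so the hunt below never succeeds.
    return False

def hunt_for_contradiction(max_steps=None):
    # Halts iff T is inconsistent. "Does this program halt?" is Con(T)
    # in disguise: a definite fact about an ideal machine, whether or
    # not T itself can settle it.
    for n in count():
        if max_steps is not None and n >= max_steps:
            return None  # give up, only so this sketch terminates
        if is_valid_proof(n, "0 = 1"):
            return n

print(hunt_for_contradiction(max_steps=1000))  # None
```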

I do respect the fact that there exist models of PA + ~Con(PA) but these models are non-standard and we don't use such models to reason about software, specifically because they are unsound in this sense.


You're quite right that CH is not an example of a Godel sentence which demonstrates incompleteness (examples of that are given in the linked article) -- I didn't mean to imply otherwise. I just wanted to share that this was a more practical example of incompleteness in our modern mathematical framework (ZFC) which folks might find interesting.

But as others have said, both statements are equally unprovable in ZFC. The Godel sentences demonstrating incompleteness are constructed in such a way that you could argue (outside of the axiomatic system) they are provably true or false, while CH is a case where reasonable mathematicians may disagree on whether it is true or false. But ultimately there is no proof in ZFC for either, so they are both examples of incompleteness in ZFC.

And note that the Godel sentence demonstrating incompleteness doesn't need to be true -- the negation of the Godel sentence demonstrating incompleteness is also unprovable.


Something being true in an axiomatic system is the same thing as it being provable; that's what "true" means. While a Godel statement for X can be interpreted as "X does not prove this statement", that interpretation inherently relies on the semantics implied by X. The Godel construction is a systematic way of generating independent statements without needing to know anything specific about the axiomatic system.


Ah yes, you're right, and my lazy wording in the former comment is inaccurate. A Gödel sentence is just a statement written in the syntax of whatever formal system we're dealing with: generally, it's a statement that there exists no natural number which satisfies a particular property. The formal system cannot prove or disprove that statement.

As you said, we tend to call the statement "true" because we know that the formal system itself was designed with the intention to describe natural numbers and arithmetic, and the statement was designed intentionally to refer indirectly to itself and claim its own unprovability. Since the statement is formally unprovable, we interpret it as being true. I had forgotten that Gödel actually showed that there are other interpretations of the formal system in which the Gödel statement is false.


For anyone interested in the consistency of CH and Axiom of Choice:

https://www.scottaaronson.com/blog/?p=4974

And (of course) the HN discussion at the time:

https://news.ycombinator.com/item?id=24954695


In foundational theories that are strongly typed (such as https://papers.ssrn.com/abstract=3457802, which is much more powerful than first-order ZFC), one version of the Continuum Hypothesis is still open although other versions have been proved or disproved.

    It has *not* been proved that the open version cannot be
    proved, and it has *not* been proved that the open version
    cannot be disproved.


I think this partially answers your question: One thing people try to do to "fix" the incompleteness is to just add the Godel sentence as an axiom of the mathematical system. However, each time you do this, you can create yet another new Godel sentence for the new system with the new axiom. Since it's a diagonalization argument, no amount of adding of new axioms can permanently "fix" the system and make it complete -- there will always be a new Godel sentence that is true but not provable.


From Wikipedia [0]:

> The proof constructs a particular Gödel sentence for the system F, but there are infinitely many statements in the language of the system that share the same properties, such as the conjunction of the Gödel sentence and any logically valid sentence.

[0] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...


Yes, but the "such as..." example incorporates the original "this statement is not provable" statement, so it seems a bit of a cheat. If you could exclude that statement, it would also by definition exclude any compound versions of it. It seems to me that the issue boils down to, can "this statement is not provable" be leveraged to get to something like 1 + 1 = 3?


You can exclude any particular Gödel statement (a specific string of symbols), and as you're suggesting you can even exclude an infinite sequence of derived statements too, found by systematically adding the previous ones as axioms and applying a "Gödelisation" procedure to make another.

But "this statement is not provable" isn't a statement in the logic. It's an English language description of a type of statement, one which has certain self-referential characteristics and shows us the incompleteness of the logic.

There are an infinite number of statements in the logic which fit that type by having the relevant characteristics, and they aren't all obvious. The self-referential recursion implied in Gödel's construction is designed to make it obvious, but it can be encoded in less and less obvious ways while still containing an encoded image of itself.

You can even get to statements where the self-reference is encrypted (using an actual encryption algorithm)! Only a mathematician with the "secret key", or a clever super-mathematician with amazing brute-forcing skills, would be able to read the statement and confirm that it's true!

And so "perceivable truth" is a thing too. Some truths are too complex for an ordinary mind to verify, yet still true.

To exclude all self-referential statements in the logic of type "this statement is not provable", you would need a decision procedure to tell you which statements in the logic are those.

Unfortunately you can't make such a decision procedure. It's very similar to Turing's halting problem. Just as you can't make a program which will take any input X and tell you if the program encoded as X will eventually terminate, you can't make a procedure which will take any statement S and tell you if it contains a fancy encoding of itself.

There's just no way to do it. The fact it can be related to Turing's halting problem, which is about real machines and real algorithms running on them, may give you some idea that the core principle can be grounded in something quite down to earth which affects practical applications.
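The idea of a statement "containing a fancy encoding of itself" has a concrete programming analogue: a quine. In this minimal Python one, the string s plays roughly the role of the formula's own Gödel number, which gets substituted into itself:

```python
# A minimal Python quine: its output is exactly its own source code.
# The string s gets its own repr() substituted into itself, much as
# Godel's construction substitutes a formula's number into the formula.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines of the program verbatim; the self-reference is real, yet nothing in the text literally says "I print myself".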


https://en.m.wikipedia.org/wiki/List_of_statements_independe...

What Gödel did prove was that the above list of statements (and infinitely more) must exist.

The special thing about his proof is indeed that it is recursive, i.e. adding another axiom can't fix the problem. But that was only necessary to prove that no formal system can be perfect, ever.

Most undecidable problems can indeed be "fixed" by adding another axiom, but if you go beyond the ones of ZFC it becomes less clear which of the two alternatives is the "right" one...


Excellent question. It implies an infinite number of statements. For if T is such an incomplete theory and X is a statement true in some models of T and false in others, then T+X is also a theory that satisfies the same requirements as T in Godel's theorem. Therefore there is another statement Y that is true in some models of T+X and false in other models of T+X. But any model of T+X is a model of T so Y is true in some models of T and false in others. And it goes on...


It implies an infinity of undecidable statements. I'm not a mathematician, but the way I understand it is by analogy with the halting problem: for any programming language that is powerful enough, you can build a program where you don't know if it's going to halt or not.

About avoiding singularities, the GEB book (Gödel, Escher, Bach) mentions at the beginning: mathematicians like Russell tried to avoid paradoxes by moving them out of the system, but Gödel showed that this doesn't work if you want a complete and consistent system (complete and consistent are technical terms; Wikipedia has a good explanation: https://en.wikipedia.org/wiki/Kurt_Gödel#Incompleteness_theo...).

Also related to your question: https://en.wikipedia.org/wiki/Chaitin%27s_constant

Sorry if I don't get the technical terms right, hopefully someone else in HN can explain this better.


I think the question is a bit more specific in the context of the halting problem

We know by contradiction that we cannot write a program that determines whether an arbitrary program halts, because the program being checked can itself contain the program that is supposed to determine whether it halts.

Is this the only class of program that cannot be determined to halt?
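The contradiction referred to above can be sketched directly (make_diagonal is an illustrative name; halts stands for any claimed halting decider):

```python
def make_diagonal(halts):
    """Given any claimed halting decider, build a program it must get wrong."""
    def g():
        if halts(g):
            while True:       # decider said "g halts", so loop forever
                pass
        return "halted"       # decider said "g loops", so halt at once
    return g

# A (wrong) decider that claims every program loops forever:
g = make_diagonal(lambda prog: False)
print(g())  # prints "halted", contradicting the decider
```

Whatever answer the decider gives about g, g does the opposite, so no total, always-correct decider can exist.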


It's a tough question to answer because it highly depends on interpretation; after all, what does it mean for a program to check whether it itself halts, and what does it mean for a program to be self-referential? There are many sneaky and indirect ways to make a program self-referential.

All undecidable languages that are recognizable in some sense contain a self-referential program sneaking within them. This is because the halting problem is complete for its complexity class, and hence any undecidable but recognizable language is Turing reducible to the halting problem.

However, for undecidable and unrecognizable languages, there are proofs of undecidability that are independent of diagonalization and instead use other proof techniques that in a very strong sense are entirely independent of self-reference. Of course this isn't a formal answer, mostly because the idea of a self-referential program is not a formal concept. But there are techniques such as reverse mathematics [1], which can determine a minimal set of axioms needed to prove a theorem, along with proof techniques depending on the Low basis theorem [2], that can prove some languages undecidable and unrecognizable without in any way relying on diagonalization, the proof technique associated with self-referential programs.

[1] https://en.wikipedia.org/wiki/Reverse_mathematics

[2] https://en.wikipedia.org/wiki/Low_basis_theorem


No, it is not the only class of program. An easy example is to take one of the unsolved math problems (like the Goldbach conjecture) and write a program that searches for a counterexample; if the program halts, it disproves the conjecture.

Can you tell if the program halts? You can't without settling the conjecture.
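A sketch of such a program (function names are illustrative): it halts if and only if some even number is not a sum of two primes, so deciding whether it halts is exactly as hard as settling the conjecture.

```python
from itertools import count

def is_prime(n):
    """Trial-division primality test (fine for a sketch)."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_goldbach_sum(n):
    """True if the even number n >= 4 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_counterexample():
    # Halts (returning a counterexample) iff Goldbach's conjecture is false.
    for n in count(4, 2):
        if not is_goldbach_sum(n):
            return n

# Checking any bounded range always halts, of course:
print(all(is_goldbach_sum(n) for n in range(4, 1000, 2)))  # True so far
```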


Well, maybe you can actually prove the conjecture? So wouldn't you have to take an unsolvable math problem first?

Also, a lot of problems are proven unsolvable because solving them would imply solving the halting problem, but hey.


If P is (this statement cannot be proved), then you can't prove A && P for any provable A, either. There's an infinite number of statements that can be proven (1=1, 2=2, 3=3... for starters), so there's an infinite number of statements that can't be proven.

There are less trivial classes of statements that can't be proven. For instance, there's an infinite number of statements like "bit N of Chaitin's constant is 1" that cannot be proven (only a finite number of those statements are provable).

https://en.wikipedia.org/wiki/Chaitin%27s_constant#Incomplet...


There are infinitely many propositions that cannot be proved or disproved in foundational theories of Computer Science.

     However, foundational theories exclude the [Gödel 1931] 
    proposition *I'mUnprovable* because including it would 
    make the theories inconsistent for reasons mentioned 
    elsewhere in this discussion.


So if we have a theory expressive enough to make statements about ordinary (Peano) arithmetic, we can always form a self-referential statement within the framework of this theory which we can not prove or disprove. So far, so good. Here is my question: What happens if we restrict/weaken the theory to preclude self-referential statements? Obviously, we will lose our ability to express certain arithmetic statements which correspond to self-referential statements in the original theory. But what else? Is that the only class of statements we lose? Also, are there any other kinds of statements that still make the theory incomplete?


It is very hard to detect self-referential statements and restricting yourself to "non-self-referential statements" might be quite severe.

Given a set of "domino" tiles - each having a top and a bottom. Each top and bottom has some word on it - these words can only use the letters "a" and "b". You can duplicate domino tiles and also align tiles so that all tops and all bottoms are aligned.

Now, given a finite set of such tiles, can you say whether there is an alignment so that all tops concatenated, read from right to left, equal all bottoms concatenated?

In fact, given such a set of tiles S, you can easily create a formula P(S) that is true iff such a valid alignment does not exist. Obviously this formula is true for some sets of tiles and false for others.

Now the funny thing: Given a (correct) fixed theory T in which you can state P(S) for every S, and in which proofs can be computationally checked, there must be infinitely many S so that P(S) is true but cannot be proved in T. Thus such a theory T is incomplete.

Where is the self-reference?

This problem is also known as the Post correspondence problem (PCP). The halting problem, which is not decidable, can be reduced to it. If T were complete, you could enumerate all proofs and check whether they correctly prove P(S) or its negation. By completeness you would eventually find a proof of one or the other, and thus you could decide the halting problem.
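A bounded brute-force search for such an alignment is easy to write (the tile set below is a standard textbook instance, reading left to right); it is only the unbounded version that no algorithm, and hence no complete theory, can decide in general:

```python
from itertools import product

def pcp_solution(tiles, max_tiles=5):
    """Search for a sequence of tile indices (repetition allowed) whose
    concatenated tops equal the concatenated bottoms, using at most
    max_tiles tiles. Returns the index sequence, or None if none exists
    within the bound. Removing the bound gives the undecidable Post
    correspondence problem."""
    for length in range(1, max_tiles + 1):
        for seq in product(range(len(tiles)), repeat=length):
            top = "".join(tiles[i][0] for i in seq)
            bottom = "".join(tiles[i][1] for i in seq)
            if top == bottom:
                return list(seq)
    return None

tiles = [("a", "baa"), ("ab", "aa"), ("bba", "bb")]  # (top, bottom) pairs
print(pcp_solution(tiles))  # [2, 1, 2, 0]: bba+ab+bba+a == bb+aa+bb+baa
```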


We lose the ability to reason about sets of unbounded size. As long as we only restrict ourselves to some bounded subset of integers, Gödel can't do the trick with his numerals. Equivalently, on the CS side, we must restrict ourselves to total functions, that is, all valid programs must provably halt after some bounded number of steps.


No extra restriction is required because Russell's Principia Mathematica already excludes construction of the [Gödel 1931] proposition I'mUnprovable because of orders on propositions.


Contrary to popular opinion, [Gödel 1931] did not actually prove inferential undecidability (sometimes called "inferential incompleteness") of Russell's Principia Mathematica for the reasons explained here:

https://papers.ssrn.com/abstract=3603021

The article linked above gives a correct proof of inferential incompleteness.


I really like the conversion to a representation that's more familiar but I suspect it's also confusing me. Here are the questions that I have to start:

Does 'there-is' evaluate to a boolean (is boolean a thing? is evaluation a thing?)

I was surprised that the axioms don't use there-is but I sense I shouldn't be?

What does '(when 0 (or 0 1))' mean? It's described as 'when 0, then there is either 0 or 1', but I'm not sure what that means. Is '(when 0 0)' true and '(when 1 0)' false? Why not use '(when A B)' as equals?

Not a question but in the axioms I assume pairs should be pears and applies should be apples.

How does 'implies' get introduced by '(when (when q r) (when (or p q) (or p r))'? Also, this introduces a category in the example, i.e. fruits, but I'm not sure how that happened.

Why do we need the second '(not (factor? x 3^22 ...))'? Isn't it sufficient to say that (factor? x 3^21 ...) establishes that the 2nd term is 'there-is'?


For existential quantification, we use the `there-is` function.

Meanwhile, for universal quantification, we... just mention the variable. This intuitively makes sense: `there-is` is a restriction on a variable (the restriction is that there is at least one value for that variable). Meanwhile, the absence of restrictions implies universal quantification.

Specifying universal quantification like this is quite common in math proofs, and it's not too bad to get used to.


I took these to mean the existential operator and the implication operator, but I agree they were insufficiently defined for me, esp. as we're building from first principles. The post could have made the early proofs more obvious in how they were constructed (e.g. how the language is really used to prove statements).


Cf. Douglas Hofstadter's Metamagical Themas:

> In March of 1977, I met the great AI pioneer Marvin Minsky for the first time. It was an unforgettable experience. One of the most memorable remarks he made to me was this one: "Gödel should just have thought up Lisp; it would have made the proof of his theorem much easier." I knew exactly what Minsky meant by that, I could see a grain of truth in it, and moreover I knew it had been made with tongue semi in cheek. Still, something about this remark drove me crazy. It made me itch to say a million things at once, and thus left me practically speechless. Finally today, after my seven-year itch, I will say some of the things I would have loved to say then.

> [4 pages of detailed discussion of relationship between Gödel's methods and Lisp elided]

> It was for all these reasons that Minsky's pseudo-joke struck my intellectual funnybone. One answer I wanted to make was: "I disagree: Gödel shouldn't have and couldn't have invented Lisp, because his work was a necessary precursor to the invention of Lisp, and anyway, he was out to destroy PM, not Lisp." Another answer was: "Your innuendo is wrong, because any type of reference has to be grounded in a code, and Gödel's way of doing it involved no more coding machinery grounding its referential capacity than any other efficient way would have." The final answer I badly wanted to blurt out was: "No, no, no—Gödel did invent Lisp!" You see why I was tongue-tied?


That book helped me reach the conclusion that Hofstadter's self-references were annoying to the point that I wished to avoid them (and his writing in general) in the future.


I can't decide whether Gödel's method and Lisp are yet another Strange Loop [1]. Fortunately, I am still at the beginning.

[1] https://www.goodreads.com/book/show/123471.I_Am_a_Strange_Lo...


This is an exceptionally good explanation of Gödel's incompleteness theorem for the general reader.


Hi Bud!

What is actually needed is a correct exposition (such as https://papers.ssrn.com/abstract=3603021) of how inferential incompleteness of Russell's Principia Mathematica follows from the computational undecidability of the Halting Problem.

    Prior to https://papers.ssrn.com/abstract=3603021, 
     attempts to prove inferential incompleteness were 
    *incorrect* for foundational theories because of the 
    incorrect assumption that theorems of a foundational 
    theory can be computationally enumerated.


Thank you, glad you liked it :)


Well, I disliked it. I see a proof that every valid formula can be converted into a Godel number. However, every invalid formula can also be converted into a Godel number. So, if we are able to construct a number, what does that prove?


The conversion of formulae to Gödel numbers is basically an implementation detail, the fact it can be done allows us to define theorems on the natural numbers that describe properties of the logical system. Encoding an invalid formula is of course possible, but I don't see why that would be a problem, it doesn't inherently 'prove' anything.


For me, it looks like using a program in C to generate C code, and then proving that a C program can generate invalid C code, so the C compiler must be invalid. The compilation step is missing: the compiler will refuse to compile the invalid code.

IMHO, the same situation holds for Godel numbers. We can use math to generate numbers which are equivalent to formulas, BUT why must these formulas be accepted? The initial set of axioms was carefully chosen by a human, so it is not an open set where anybody can add anything.


Gödel's proof does not apply to any single axiomatic system, it applies to any computable set of axioms that can talk about numbers, quantifiers, addition and multiplication. This isn't arbitrary, it's a set of properties required for just about any interesting mathematics to take place.

The formulae are accepted because Gödel gives us a mapping, he proves that we can convert any formula into a number without losing information.
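As a toy version of that mapping (the symbol codes below are made up, not Gödel's actual assignment), you can encode a token sequence as a product of prime powers and decode it back losslessly, whether or not the formula is well-formed:

```python
# Toy Godel numbering: the i-th token's code becomes the exponent of
# the i-th prime. Unique factorization makes the encoding lossless.
SYMBOL_CODE = {'0': 1, 'next': 3, '+': 5, '=': 7, '(': 9, ')': 11}

def nth_primes(n):
    """First n primes by trial division (fine for toy sizes)."""
    found, k = [], 2
    while len(found) < n:
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

def encode(tokens):
    num = 1
    for p, tok in zip(nth_primes(len(tokens)), tokens):
        num *= p ** SYMBOL_CODE[tok]
    return num

def decode(num):
    code_to_symbol = {v: k for k, v in SYMBOL_CODE.items()}
    tokens, p = [], 2
    while num > 1:
        exp = 0
        while num % p == 0:   # peel off this prime's exponent
            num //= p
            exp += 1
        if exp:
            tokens.append(code_to_symbol[exp])
        p += 1
    return tokens

formula = ['(', '=', '0', '0', ')']        # the formula 0 = 0
print(decode(encode(formula)) == formula)  # True: nothing is lost
```

Encoding an ill-formed token sequence works just as well; the numbering itself doesn't care about validity, which is exactly the point being made above.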


Ok, we converted formulas into unique numbers. Now we can sort these numbers, then we can remap them back to natural numbers: 1 is the first formula, 2 is the second formula, and so on. We can produce these numbers with just a «next» operator.

However, some formulas are incorrect, so we filter them out and then remap the remaining formulas again: 1 is the first correct formula, 2 is the second correct formula, and so on. Now we can produce correct formulas with just a «next» operator.

However, some formulas can contradict our system of rules, so we filter them out and remap the remaining formulas again: 1 is the first correct formula which doesn't contradict the system of rules, and so on. Let's call them "Lisivka's numbers".

So, Godel numbers can contradict the axiomatic system, while Lisivka's numbers cannot.

Do you see the problem?


The point of Godel's idea is that he proved "Lisivka's numbers" still contain a formula that can't be proven.

Try going through the essay and point out where the "incorrect" numbers were formed. You may be surprised to find that all statements were "correct" in the sense you are thinking of. The mathematical terms are "primitive recursive" and "well-formed".


You missed the point. Any "bad" Godel number can be interpreted as a "good" Lisivka number. We have ambiguity here, because a number can refer to a formula in any of an infinite number of sets.

> We can go further. We can even construct PM-Lisp formulas in PM-Lisp!

No, we cannot.


When you say "You missed the point.", and "No, we cannot" -- your words come off as though you are supremely confident, and a bit condescending. This makes me not want to engage deeper with you.

To see how it feels:

No, drran, you have missed the point. Read it again, maybe you'll get it.


4667698044567788679899973453457778678980909855464564564578787890665467786780909875744564756867978980890785745635646767586445454536665474747746767457575890112345678999077554344567787923424234234234246566797899707980980983453453454353453453453534534534534

This number is equivalent to the proof that I'm right. Godel was genius!


The Gödel number of a proposition in Russell's Principia Mathematica does not correctly represent the proposition because it leaves out the order of the proposition.


> For example, a gentleman called Frege discovered that he could craft a theory of sets, which could represent just about everything. For numbers, for example, he could do something like this: [ 0 is {}, 1 is {{}}, 2 is { {{}} {} }, etc. ]

I don’t know Frege too well, but believe this is due to von Neumann, not Frege:

https://en.wikipedia.org/wiki/Ordinal_number#Von_Neumann_def...


You might be correct; Frege defined it (roughly) in a similar way. You can read his original publication here:

https://ia800207.us.archive.org/22/items/diegrundlagende00fr...

It starts on book page 87, or PDF page 125.

Basically, what he seems to be doing is to define 0 as the number of "everything that is not equal to itself" ("die Anzahl, welche dem Begriffe 'sich selbst ungleich' zukommt"), and 1 to be the number of "everything that is equal to 0" ("die Anzahl, welche dem Begriffe 'gleich 0' zukommt"), etc.


Ah, yep, interesting! Thanks for the direct link.

Ok, I wrote a long reply trying to figure out what exactly 2 is for him in terms of modern set theory notation, but I got really muddled. German’s not my mother-tongue/the pdf isn’t searchable, and when he says “Anzahl”/“Zahl” I’m never sure if he’s talking about size or abstract property, and would have to read a lot more of the book to figure out!

[but naively: in §76 “n folgt in der natürlichen Zahlenreihe unmittelbar auf m.” is defined as “es giebt einen Begriff F und einen unter ihn fallenden Gegenstand x der Art, dass die Anzahl, welche dem Begriffe F zukommt, n ist, und dass die Anzahl, welche dem Begriffe “unter F fallend aber nicht gleich x” zukommt, m ist”. That is, ‘n follows directly after m’ is defined as: there is a concept F and an object x falling under it, such that the number belonging to the concept F is n, and the number belonging to the concept “falling under F but not equal to x” is m. This really looks von Neumann-like!]

He also talks about numbers in §56 (but Zahlen, not Anzahlen)... with the problem of deciding whether Julius Caesar is an (ordinal?) number or not, or whether anything has Julius Caesar as a number.

[there’s also an english translation here http://www.naturalthinker.net/trl/texts/Frege,Gottlob/Frege,... but honestly it’s not much easier there :P ]

ALL of this is muddied by what Wikipedia says on the matter of defining the natural numbers: “Gottlob Frege and Bertrand Russell each proposed defining a natural number n as the collection of all sets with n elements.” That may also be the case, but it lacks a citation, alas. https://en.wikipedia.org/wiki/Set-theoretic_definition_of_na...

I don’t know if I have it in me to straighten this all out (that is, to RTFM).


Correct. Frege (and, following him, Bertrand Russell in Principia Mathematica) used a definition where a natural number n is the set of all sets with n elements. More info, and a contrast with the (chronologically later) von Neumann definition: https://en.m.wikipedia.org/wiki/Set-theoretic_definition_of_...


To me it read as though the author cleverly worded it as 'he could do something like this' to separate Frege's idea (theory of sets) from the specific example.


This is a very nice and surprisingly readable write up. I think it is the best explanation of the incompleteness theorem I've ever read. Thank you for this!

> Now, say he wanted to write when. He could just write 17.

Shouldn't it be 19?


Thanks for the kind words and the catch! Updated :) -- may take a few mins to show up


I've read Godel, Escher, Bach, and I've read this. It's a very nice explanation.

But everywhere I see, Godel's theorem is touted as some kind of deep philosophical insight, whereas from what I understand, informally it could be rephrased as "if you have a usable language for mathematical proofs, some phrases in that language must be neither true nor false (i.e. nonsensical)".

Nonsensical phrases in our human languages in nothing new, so the conclusion becomes that much less exciting. Though I'm sure it's valuable for fundamental math theory.


It’s not so much that they’re neither true nor false (they’re not paradoxes), but that they can be true in some interpretations and false in others. I wrote up some notes about this here: https://neilmadden.blog/2020/11/17/some-incomplete-thoughts-...


Wasn't that proof using a paradox, though? Then that basically really proves the existence of paradoxes. It doesn't disprove the existence of all other kinds of statements, of course.

A statement 'x == 1' might be contextual (with a free variable x), but I think the theory is talking more about statements like

  for all x
  (x + 1)^2 == x^2 + 2*x + 1
which is true and does not depend on x. Contrast that with

  for all x
  x == 1
which is obviously false.

But the proof outlined in the article constructed a statement with a free variable, where a substitution with a certain value leads to a statement which is neither true nor false. That means that the general statement in the proof's example is neither true nor false either.

Sorry about the awkward terminology, BTW, my engineering degree was a decade ago and in a language other than English.


No, the statements in Gödel’s proofs are not paradoxes.

Substitutions of variables are not the same as interpretations. The interpretation tells you what the nonlogical symbols like *, +, ^, 1, 2, 3 etc mean.


To me, reading this article, it was striking to consider the relationship between language and (mathematical/logical) truth this way. In my mind it correlates with the quantum phenomenon that observing something fundamentally alters its state.


Hi Hvis!

Inferentially undecidable sentences of Russell's Principia Mathematica are not nonsensical. However, they are obscure: no particular example is known, because no one knows how to present them constructively.


No, it is not about nonsensical phrases. Godel proved that there are some sentences that are true, but they have no proof in the system.


Loving this so far!

One tiny typo that threw me for a loop for a second was:

> when apples are a fruit, then bananas or applies implies bananas or fruits

which should be:

> when apples are a fruit, then bananas or apples implies bananas or fruits


Will address, thanks!


Another typo, I think "Apples, Bananas, or Pairs" is probably meant to be "Apples, Bananas, or Pears"


I read stuff like this on HN and feel like a child watching dad work on a car. Except papa Godel was younger than me when he wrote his paper.


I am certain there are things you are better at than he was, and I'm certain there are things you can do that he would have wished he could do.


Who cares how old he was? Life is not a competition about age ;)


"This meant that no formal system, could prove by itself, that it could only produce true statements. "

I see this generalization often, that Godel showed that all formal systems share this restriction, but doesn't the formal system have to at least be able to express the properties of the integers?

Is the ability to formally express the integers a particularly demanding constraint on the choice of axioms?


Some things were not clear to me; can someone help me out with them?

1) While representing 1+1 = 2, i.e. (= (+ (next 0) (next 0)) (next (next 0))), which comes at the start of the article: is this considered to be just a statement, or a true statement that follows from the axioms?

2) On formulas, this example is shown (there-is a (= (next 0) a)). It is not clear whether this is a necessarily true statement or some symbols relating a and 0.

3) He says that proofs are a sequence of formulas, but colloquially they are a sequence of "implications". That "implication" is introduced later on, and it is shown that this "implication" symbol also needs to be proved (successor). Didn't he use the fact that one thing implies another multiple times previously?

I need the distinction amongst "statement", "formula" and "axioms".

Apologies if the above looks like a vegetable soup but I'd love some explanation for this.


Although [Gödel 1931] failed to prove inferential undecidability of Russell's Principia Mathematica, there is a fairly simple undecidable proposition, namely the proposition

   *Undecidable*≡Halt<ExpressionFromString<Natural>>[RunOne.[]]
where RunOne.[]≡Eval.[SelectOne.[0]] such that

   SelectOne.[i:Natural]≡ExpressionFromString<Natural>.[i] *finishesFirst* SelectOne.[i+1]
Eval.[anExpression] evaluates anExpression and where the expression

    (expression1 *finishesFirst* expression2)
returns the value of whichever of the expressions expression1 and expression2 finishes evaluating first.

The procedure SelectOne selects one of the expressions created from strings.

For details see: https://papers.ssrn.com/abstract=3603021


There is a small typo in the above:

Undecidable ≡ Halt<ExpressionFromString<Natural>>[⦅RunOne.[]⦆] is inferentially undecidable in the theory Actors, that is, ⊬Undecidable and ⊬¬Undecidable, where RunOne.[]≡Eval.[SelectOne.[0]] and ⦅RunOne.[]⦆ is the expression for the procedure application RunOne.[]


TYPO!!: In the section "Suspicious use of subst" it says that b is being replaced, but in the second formula box a has been replaced, not b. Furthermore the number that replaced a is not the same as the number used in the previous subst statement.


Great catch! Updated : ) -- should take a few mins to show up


Here's how I interpret Gödel's landmark result:

> semantic properties, except for the most trivial systems, can't be inferred from syntactic properties.

E.g. you can't type a program just by checking that it uses the correct literals and syntactic objects: you need to, sooner or later, know what the program /actually does/.

But the problem is that mathematicians do, in fact, reason about semantics. So saying that "it's uncomputable" ignores the elephant in the room, which is that mathematicians compute theorems for a living. What do mathematicians have that enables them to compute the uncomputable: well-aligned chakras, magic, coffee grounds, the zodiac, etc...?


Argh... this is the nuance I was talking about in another post. Godel's Completeness Theorem (which applies to many systems that Godel's Incompleteness Theorem also applies to) shows that in a sense semantic properties coincide with syntactic properties and so in fact can exactly be inferred from syntactic properties.

What Godel's Incompleteness Theorems instead show is that you cannot exhaustively list all the semantic properties of what you care about; combined with Godel's Completeness Theorem, you can only list the ones "relevant" to your syntactic axioms.


This interpretation can't be correct, because Rice's Theorem says that no non-trivial semantic property of computer programs is decidable.


More nuance is needed; Rice's Theorem says that there is no algorithm for deciding a non-trivial semantic property of an arbitrary computer program. Restricted classes of computer programs are totally fine, and in some sense that is exactly what a logical system is: a description of a restricted class of programs.

But this is also getting a bit confusing because there are many different interpretations of how to tie computability to a logical system. For example the proof system of first-order logic can be interpreted as a single Turing machine program, not a family of ones, for which it is very possible to make all sorts of judgments about its semantic consequences.


But that's ignoring the main thrust, which is that mathematicians compute /all sorts/ of uncomputable systems, with no apparent restriction. Optimal general-purpose compression is uncomputable, but people compress data (e.g. their core values) to their theoretical limits all the time.

If there's a landmark theorem that states "all mathematics greater than or equal to number theory is uncomputable," and mathematicians make a living computing these systems, then I must conclude either a) mathematics is a Sokal affair on a cosmic scale or b) something is wrong with the hidden presuppositions behind my theorem.


> which is that mathematicians compute /all sorts/ of uncomputable systems, with no apparent restriction

You are flattering us, but that's not actually true. We already have lots of trouble proving (which is the same as computing, essentially) many computable things as it is. It is only occasionally that we can make progress on some uncomputable problem, and it's always by making use of some properties very specific to said problem. The point of Rice's theorem is that there's no universal method to solve any problem without any concern for the guts of that problem, and that's true: when mathematicians or computer scientists are proving that some computer program halts, they don't apply the same general machine, but instead they try to figure out some "tricks" based on the particular situation at hand, which will not generalize to every conceivable problem.


> all mathematics greater than or equal to number theory is uncomputable

That's not what Godel's Incompleteness Theorem says though. Nor would I agree that "mathematicians compute /all sorts/ of uncomputable systems, with no apparent restriction."

It is hard to describe the behavior of real-world mathematicians since that is the realm of psychology rather than logic or mathematics, but if we stick to the realm of idealized proofs, it's clear that we are only ever computing over computable descriptions of uncomputable systems (which is totally fine and something we do all the time!).

For example, the first-order theory of the real numbers (real closed fields) is a classic example of a complete and decidable theory. Every first-order statement in that theory can be proved or disproved in an automated fashion.

Yet how can this situation persist for the real numbers, entities which we clearly know are non-computable? Well the theory of real closed fields is capable of only describing a subset of the properties of the real numbers. It is worth emphasizing that this is a subset! Those properties in the theory of real closed fields apply perfectly fine to the real numbers. They just don't capture all its properties.

So what about the other properties of the real numbers that, for example, make up the field of analysis? Well we can prove those in a larger theory, e.g. ZFC. And again the proofs in ZFC are computable objects, although ZFC itself is no longer decidable. So again the description of the real numbers is a computable thing, even if now we don't know how to computably enumerate all of its consequences (but again we know how to enumerate subsets of them!).

Now does ZFC completely describe the reals? This depends on whether you're a mathematical Platonist or not. Do you believe in the notion of the one, true set of real numbers that exists on a Platonic plane independently of our axioms about them? That is, there are no "independent" theorems, only theorems for which we haven't found the best way of grasping their Platonic truth? Then no, ZFC does not completely describe the reals either, as a result of Godel's incompleteness theorem.

Alternatively do you believe that the real numbers are an artificial mathematical construction which can have different properties depending on which axioms you are willing to admit? Then there will be properties of the real numbers which are independent of ZFC as a result of Godel's incompleteness theorems.


> Alternatively do you believe that the real numbers are an artificial mathematical construction which can have different properties depending on which axioms you are willing to admit? Then there will be properties of the real numbers which are independent of ZFC as a result of Godel's incompleteness theorems.

This is actually not correct. The propositions that are undecidable are NOT AT ALL connected to the propositions that are independent. Things like the CH or Axiom of Determinacy have to do with providing a definition to things that have no defined truth value. But the undecidable propositions are already true or false they just can't be listed or computed in any meaningful fashion.


> The propositions that are undecidable are NOT AT ALL connected to the propositions that are independent.

I'm not quite sure what you mean by undecidable propositions, but I assume it to mean something like the following?

Propositions which must be true or must be false (i.e. are satisfied/not satisfied respectively by all models which satisfy the overall theory), but for which there is no finite proof that they hold.

In contexts where Godel's Completeness theorem (not incompleteness theorem) holds, undecidable propositions are exactly independent propositions. The two are one and the same. This is in fact an equivalent restatement of the Completeness theorem (which holds for contexts involving most common mathematical theories such as ZFC and Peano Arithmetic). In fact, in such contexts, statements such as Con(PA), which is informally "the axioms of Peano Arithmetic are consistent", are in fact independent propositions and there are models of Peano Arithmetic satisfying Con(PA) as well as models satisfying Not(Con(PA)).

What about contexts where the Completeness theorem fails (e.g. the second-order equivalents of those theories)? For example, what about CH in second-order ZFC (which is categorical under full semantics, i.e. has no independent statements, only undecidable ones)? In those contexts independence gets weird, especially in the way I think you're using it here. Usually we say that if a sentence s is independent of theory T then both s and Not(s) are consistent with T. However, consistency is phrased in terms of proofs. That is, s is consistent with T if there is no finite proof in T of Not(s).

So if there are statements that are undecidable, but not independent (on the other hand all independent statements must be undecidable, although their independence may itself be decidable), we are effectively saying there exist consistent theories which have no mathematical model that satisfies them.

But this is a very weird way of doing mathematics (which is why most logicians are wary of using systems where the Completeness theorem fails to hold as foundations for mathematics). In particular it means that simply showing that a list of axioms is consistent is insufficient to allow its use. You must appeal to some other force to show that your mathematical theory is "valid" and allowed to be used.

But that's not a bar that modern mathematics tries to clear. Modern mathematicians do not try to justify the "validity" of mathematical theories beyond showing (or assuming) consistency.

For example, mathematicians generally don't hem and haw over whether the complex numbers "exist" in some real sense. They just show that they are a consistent extension of the real numbers and move on and use them.

To take a step back, the assertion that there are fundamentally undecidable mathematical propositions in the real sense you're laying out (as opposed to more formal treatments of truth in model theory) is a statement of the Platonist school of mathematical philosophy. Which is totally fine, but it is a statement of philosophy at the end of the day, rather than one of logic.


> To take a step back, the assertion that there are fundamentally undecidable mathematical propositions in the real sense you're laying out (as opposed to more formal treatments of truth in model theory) is a statement of the Platonist school of mathematical philosophy.

To go one step forward, these undecidable/uncomputable statements are computed by humans all the time, which makes the Platonist school of mathematics much more than an intellectual curio.


> these undecidable/uncomputable statements are computed by humans all the time

This is a far stronger statement than I think even Platonists would accept. What examples of non-computable statements that are computed by humans do you have in mind?

Even putting on my Platonist hat (which I'll wear for the rest of this reply), I would still argue, in line with xyzzyz's comment, that at most humans deal with computable descriptions of non-computable things. For example, the description of the real numbers is computable even if the real numbers themselves are not. Likewise even if the real numbers are not computable, we can only ever work with computably-definable (indeed only finitely definable) real numbers rather than arbitrary real numbers.

To use the map-territory analogy, while the territory is the thing we care about, the only things we can work with, manipulate, and compute are computable maps.

For example Chaitin's Constant is a non-computable real number. However, we never work with Chaitin's Constant directly, but rather with our finite description of it, from which we deduce all sorts of other facts.

There are therefore many things about Chaitin's Constant we will never know. But that is the Platonist's lot in life.

As an aside, there's in fact a formalization of this that states that a mathematical universe consisting solely of finitely definable entities and nothing else is perfectly reconcilable with ZFC. See https://mathoverflow.net/questions/44102/is-the-analysis-as-... Now, as I continue to wear my Platonist hat, I don't believe for a second that this is in fact how the mathematical universe actually operates, but it is a good explanation for why it is beyond the mathematical abilities of humans to distinguish between this definable universe and the real universe we live in.


I may be wrong here, but don't mathematicians sometimes "compute" in a meta-language that's sometimes more powerful, or less formal, than the system they are inferring things about?

Hence why they need results reviewed and a community to agree upon what's acceptable as a proof. Otherwise reviewing a proof would be an automatic process, if everything was done in the world of syntactic symbols with clear rules that tie one statement to the next.


> Hence why they need results reviewed and a community to agree upon what's acceptable as a proof. Otherwise reviewing a proof would be an automatic process, if everything was done in the world of syntactic symbols with clear rules that tie one statement to the next.

Nonetheless, we may still get to the second case because there are a couple of large projects that are trying to formalize all of mathematics in a way that a computer can verify.

http://us.metamath.org/

https://arxiv.org/abs/1910.09336

If these catch on and become a deeper part of the mathematics research process, mathematicians would still discuss "what's acceptable" in terms of their views of axioms, and they would presumably still discover proofs for theorems, but review for formal correctness of inferences could be an automatic process.

Note that some pretty impressively complex and abstract theorems have successfully been formalized in these systems (and their proofs checked by machine), even starting from quite small axiom systems.

Edit: In the mathlib paper they mention a much wider set of efforts along these lines:

> In this section, we compare and contrast mathlib with other substantial formal libraries for mathematics, including libraries for Mizar [9,28], HOL Light [35], Isabelle/HOL [49], Coq/SSReflect [10,46], and Metamath [47]. Our goal here is not to provide detailed comparisons of the various design choices, but, rather to sketch the design space in broad strokes, situate mathlib within it, and explain some of the decisions we have made. Our choice of comparisons is not meant to be exhaustive: there are also substantial mathematical libraries in HOL4 [55], ACL2 [42], PVS [50], and NuPRL [17], as well as notable libraries built on standard Coq, such as the Coquelicot analysis library [14].


There are two separate statements here:

1. Mathematicians often use relatively powerful systems, like reasoning in ZFC about Peano arithmetic. The consistency and completeness of axiomatic systems is what this article is about.

2. Mathematicians pretty much exclusively use informal systems (compared to formal logic) for reasoning, because it is just infeasible to carry out typical mathematics in formal logic in most cases.

However, it's important to realize that these two are orthogonal.


Thanks for the great write-up, fun to read! I only noticed one tiny typo: "when apples are a fruit, then bananas or applies implies bananas or fruits" - "applies" should be "apples" I guess.


And "pairs" should be "pears"!


Will address both, thanks team!


Two more typos:

> These symbols map closely to the logical statements we are used too in programming.

too => to

> Hofstadter’s “I’m a Stange Loop”

Stange => Strange


Great catches! Updating now -- may take a few mins to show up


This is amazing.

Two thoughts:

1) It seems that the black/white formalism of ‘ a statement can be only true or false ‘ causes issues. Have people tried introducing additional options for verifiability, e.g. ‘true / false / maybe?’ or something that points to uncertainty à la Heisenberg?

2) It would be really interesting for the author to continue the line of reasoning to help add insight to Gödel’s ontological proof. ( https://en.m.wikipedia.org/wiki/Gödel%27s_ontological_proof )


Apparently someone had the following thought.

"Everyone when faced with the dilemma of having either an unsound or incomplete formal system chooses incompleteness. What happens when we allow for unsoundness and treat this notion as first class?"

There has been a slow burn on paraconsistent logics going back to the 80s or so now.

https://en.wikipedia.org/wiki/Paraconsistent_logic

I seem to recall at least one where one of the states was something analogous to a maybe/don't know.
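
One concrete instance of a third truth value is Kleene's strong three-valued logic, where the extra value reads as "unknown"; a minimal sketch (Priest's paraconsistent LP reuses the same truth tables with the third value read as "both true and false"):

```python
# Kleene's strong three-valued logic (K3): values are True, False,
# and None ("unknown"). Unknown is contagious unless one operand
# already settles the result on its own.

def k3_not(p):
    return None if p is None else (not p)

def k3_and(p, q):
    if p is False or q is False:
        return False          # one definite False decides the conjunction
    if p is None or q is None:
        return None           # otherwise unknown propagates
    return True

def k3_or(p, q):
    return k3_not(k3_and(k3_not(p), k3_not(q)))  # via De Morgan

# A liar-style sentence gets the third value instead of exploding:
# "p and not p" is None (not False!) when p is None.
print(k3_and(None, k3_not(None)))
```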


The author of "What Gödel Discovered" has a good point that Gödel numbers are not a convenient representation of a proposition of Russell's Principia Mathematica (PM). (Nor are Gödel numbers adequate, because they omit the order of a proposition.)

Lisp expressions are an improvement. However, a Lisp expression, as used in the article, is also not adequate, because it also omits the order of the proposition.

See the following for a high-level representation of propositions of PM that includes the crucial orders on propositions: https://papers.ssrn.com/abstract=3459566


Nice article, I do like the way it's explained in a way that a programmer like myself can understand it.

I did note a few typos that I will take the opportunity to report here, in order to make an already great article a little better:

Under "PM-Lisp Axioms": Looking at the code "(when (or p q) (or (q p))". I don't think the (q p) should have parens around it.

Under "(proves a b)": There is no ending quote at the end of the last sentence.

Under "(subst a b c)": I find the text "PM-LIsp". I guess the I should be lower case.


Thank you for taking the time! Updated -- should take a few mins to show up :)


He used a word I don't see very often (if ever)

https://en.wikipedia.org/wiki/Epiphenomenon


If you format it that way (the spaces in front for a code block), we can't click the link.


I've fixed the formatting now.


It's quite a fun effect! Hofstadter's claim that "I" (our consciousness) is an epiphenomenon was very eye opening for me.


Was this successor? function ever implemented? I'm quite skeptical. Wouldn't there be infinitely many possible successors? Are all of them mechanistically derivable?


Very interesting. I'm wondering if this has implications in physics? Since "the book of nature is written in mathematical language", does this mean that attempts to find a unified theory of everything are doomed? I'm thinking not necessarily, since we wouldn't need to prove all possible physical phenomena, only those that actually happen, but how do we know if they are among the ones that can be proven?


The implications are really on the concept of language. It proves that there is no language that can be complete; as such, we cannot express a complete system, even if it existed. So a unified theory of everything might well exist, but you wouldn’t be able to describe it. Which is a nice philosophical riddle.


I think that if you allow some axioms, you can describe it. But I'm not sure if it will be called complete.

I think because "(next axiom)" is not defined, it will make your language not mappable to integer numbers, and you avoid Godel's proof.


A very comprehensive theory of physics, including the physics of computation, can be developed. However, such a foundational theory will be inferentially undecidable.


Something that is never clear to me - what defines what is an axiom and what is derived from the axiom?

In Russell's PM, `(when (or p p) p)` is considered an axiom, but `=` isn't, and is derived from the other axioms.

But...why not the reverse? The axiom "when either apples or apples, then apples" just seems like another way to state "apples = apples". Why is that not the axiom that other things follow from, then?


There are two things I will argue with on this otherwise great explanation.

First, I've said this before and I'll bang this drum across every Godel post I see. Please don't introduce the notion of truth into an introductory post on Godel's Incompleteness Theorems (the Power of Numbers section). Introductory posts on Godel's Incompleteness Theorems like to say things like "there are true things you cannot prove" or in the case of this post "there are some truths that you can never write down as an algorithm." This is very nuanced and depends very heavily on what logical system you're in.

To demonstrate how subtle this point is, there is also Godel's Completeness Theorem, which can informally be summarized as "all true statements are provable," and holds for most logical systems that mathematicians use, including those to which Godel's Incompleteness Theorems apply. The subtle point here, of course, is that "true" means something different in the two contexts.

This leads to the classic overly strong philosophical statements such as "For example, it may mean that we can’t write an algorithm that can think like a dog." That may be true, but it's not a direct consequence of Godel's Incompleteness Theorems.

That's why in an introduction I strongly strongly recommend just sticking to incompleteness, i.e. the fact that you cannot have an exhaustive list of axioms. There will always be new axioms you can add.

The second drum that I will keep banging on is that articles talking about how Godel's Incompleteness Theorems show that a system S cannot prove its own consistency and stop there miss the significance of this statement. Usually, like in this article, they go in the "opposite" direction by saying S can't prove itself, so maybe you could try using a more powerful S' to prove S, but then you couldn't prove S', and so you need an S'' to prove the consistency of S', and so on and so forth in an infinite regress. But we actually care about the opposite direction: using S to prove the consistency of S' and then using S' to prove the consistency of S'' and so on.

That is we don't actually care about using S to prove its own consistency because if we were ever doubtful of S's consistency, we wouldn't trust any proof it produced, let alone a proof of its own consistency.

Rather what is important is that we cannot prove the consistency of the stronger S' in the weaker S, since if we were able to prove the consistency of S', then we could definitely prove the consistency of the weaker S. This is the fatal blow to Hilbert's program. Hilbert was perfectly fine with having to assume the consistency of some logical system. His hope was something akin to a "trusted computing base" that if you assumed was consistent could then prove the consistency of all other more complex systems. Ideally this "trusted computing base" could be quite small, but it wouldn't necessarily have to be, as long as there was some definite size after which we could stop having to take consistency on faith.

Godel's consistency result implies that this trusted computing base cannot exist! There is no minimal base whose consistency, when taken on faith, is enough to prove the consistency of other systems we care about. That is we care about the opposite direction: S can't prove the consistency of S, so it can't prove the consistency of the larger S', so it can't prove the consistency of the still larger S'', and so on.

Now again there's some nuance here. We know that Con(S) is independent of S and in turn Con(S + Con(S)) is independent of S + Con(S) etc. but we also somehow know how to "collapse" this whole hierarchy if we know that S is consistent (the formalization of this intuition requires a deeper dive into model theory), so something is a little bit off here. Moreover stuff like Gentzen's proof of the consistency of Peano Arithmetic demonstrates there is some more wiggle room in exactly what it means for a system to be logically stronger than another system.

But the most straightforward, naive way of trying to use a single trusted computing base to prove the consistency of everything else will fail.


Dear Dwohnitmok,

    1. Model theory, which formalizes "truth", is very
       important for understanding inferential
       incompleteness. Axioms of foundational theories of
       Computer Science have just *one* model up to a unique
       isomorphism, which defines "truth".
          The theorems of foundational theories of Computer
       Science are *not* computationally enumerable.
       Consequently, even the provable propositions *cannot*
       be computationally enumerated, much less the ones
       true in the unique model.

    2. Foundational theories of Computer Science can prove
       their own consistency because they *disallow* the
       [Gödel 1931] proposition *I'mUnprovable*, which, if
       it were allowed, would make the theories inconsistent.


Great points


In college and high school I spent years trying to understand these theorems. I wish I had had this explanation then. This is the paper I wrote on the subject: https://mdsoar.org/handle/11603/2249?show=full


Just to be clear. This implies that it's possible to write some specific statement (using PM axioms) that contradicts itself? Is there a readable example of this statement without using Gödel numbers but instead just with the axiom statements?


I don't think PM itself had any contradictions inside of it; rather, there are statements composed in the PM language that aren't reachable from the axioms selected by PM -- that is one of the sentences Godel demonstrates in his proof, where he gets one that basically says "this statement has no proof in this system" (so if it does have a proof it is a contradiction, but if it doesn't have a proof then PM is an incomplete system). Someone else in the thread mentioned the continuum hypothesis as something that couldn't be proved from the standard ZFC axioms, so that would be an example of a statement without proof (incompleteness), albeit not in PM. I don't know of any contradictory statements.


The continuum hypothesis is more like Euclid's parallel postulate than a Gödel sentence - assuming ZFC consistent there are models with CH true and CH false (the cardinality of the continuum doesn't have to be the first uncountable cardinal).

Everything gets qualified with "assuming ZFC consistent" or "assuming Peano consistent" because any inconsistent theory proves any statement. More of a proof technicality than anything too profound.

There is a proof that Peano arithmetic is consistent, as long as you accept the stronger system used in the proof: https://en.wikipedia.org/wiki/Gentzen%27s_consistency_proof

Not sure if this sheds light on the parent commentator's question ... the terminology can be quite tricky.


Yes, PM would be made inconsistent by including the [Gödel 1931] proposition I'mUnprovable, for reasons explained elsewhere in this discussion.

Fortunately, the rules on orders of propositions make it impossible to construct the proposition I'mUnprovable in PM.


Super awesome read, a great book about computability is "the annotated turing".


Glad you enjoyed it! :)


[Commenting before reading]

I was just reading about Hilbert's problems, and his disappointment after Godel's and Alan Turing's papers. I am not sure if this blog is about that, but it is a very interesting topic.

Let's read it now :)


Oh, author is friends with Sebastian Marshall, that explains a thing or two :).


Indeed, we worked together in China, and have been friends since :)


Minor formatting gripe: the first inline citation (superscript 1) looks like an exponent given that it's right after a numeric constant in an equation.

It made me re-parse the sentence a few times before realizing it.


This article spectacularly misses a big chunk of the point of Lisp, by applying Gödel-numbering to the printed representation of the syntax, rather than to the object that it denotes.


This is exactly what came to my mind as I was reading this article. Would it make a difference if it was applied to syntax objects?
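
For what it's worth, both choices give injective encodings, so Gödel's argument goes through either way; the difference is convenience. A toy sketch of the two styles (the encodings are my own, not the article's exact scheme):

```python
from functools import reduce

def primes(n):
    """First n primes, by trial division (fine for toy inputs)."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def godel_number_of_string(s):
    """Gödel-number the *printed* syntax: the i-th prime raised to the
    i-th character code, multiplied together. Injective by unique
    factorization."""
    return reduce(lambda acc, pc: acc * pc[0] ** pc[1],
                  zip(primes(len(s)), (ord(ch) for ch in s)), 1)

def pair(x, y):
    """Cantor pairing: an injective map from pairs of naturals to naturals."""
    return (x + y) * (x + y + 1) // 2 + y

def godel_number_of_sexp(e, atoms):
    """Gödel-number the syntax *object*: atoms map to odd numbers,
    cons cells to even numbers via the pairing function."""
    if isinstance(e, tuple):
        head, tail = e
        return 2 * pair(godel_number_of_sexp(head, atoms),
                        godel_number_of_sexp(tail, atoms))
    return 2 * atoms.index(e) + 1

atoms = ["nil", "when", "or", "p", "q"]
# (or p q) as nested pairs: (or . (p . (q . nil)))
expr = ("or", ("p", ("q", "nil")))
print(godel_number_of_string("(or p q)"))
print(godel_number_of_sexp(expr, atoms))
```

The structural encoding never sees parentheses or whitespace, so two prints of the same object get the same number; the string encoding ties the number to one particular spelling.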


I thought this was an amazing article; thank you for writing it.


Thank you for the kind words :)


This is remarkably well done. I'm reading Hofstadter's Godel, Escher, Bach and found this at the perfect time. Thanks for the write up!


Man, I just searched about that book, and it is like there is a world I don't know anything about. Would you mind sharing the names of your favorite books?


I assume you might be looking for books of a similar (technical) flavour, of which I don't have too many to recommend, I'm afraid. However, here's some (across different genres) that are in my memory at this moment:

Finance/statistics: The Black Swan by Nassim Nicholas Taleb; The Drunkard's Walk by Leonard Mlodinow

Math/science history: Euclid's Window by Leonard Mlodinow; A Short History of Nearly Everything by Bill Bryson

Physics: Newton's Principia for the Common Reader by S. Chandrasekhar

Lit: The Brothers Karamazov by Fyodor Dostoevsky; House of Leaves by Mark Z. Danielewski; East of Eden by John Steinbeck

Philosophy: Zen Mind, Beginner's Mind by Shunryu Suzuki; any of the Upanishads, but probably the Kena, Isha, or Prashna Upanishad at first (selected for (relative) ease in readership by yours truly); Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig (for a gentle introduction into Eastern thought)

I'm missing countless others but this is what I have right now. Thanks for the prompt and happy reading! :)


GEB is one of my favourite books ever, but I think it doesn't explain Gödel's theorem as simply as it could. Of course, there are just so many wonderful things in GEB that this is not a fatal flaw by any means.

There is a fantastic book called "The Universal Computer: The Road from Leibniz to Turing" by Martin D. Davis that does a very good job of explaining the context of Gödel's theorem, the theorem itself, and the relation with computing in an accessible way. I'd recommend it to anyone interested either in computers or in Gödel's theorem.


Yup. Also recommend to those interested in existential philosophy and artificial intelligence as well. It really is a terrific book!


I like the way Roger Penrose, the Oxford physicist, discusses Gödel on the LexPodcast on Artificial Intelligence: https://pca.st/episode/1e63c227-af66-4d3d-ad23-8187c9e1ca74?... (19:20)

There is also a more approachable overview on the In Our Time BBC podcast : https://pca.st/episode/a9d582c0-ec2a-0132-1126-059c869cc4eb


Gödel’s Proof by Nagel and Newman is quite good. ISBN 0814758371


See also I Am A Strange Loop by Douglas Hofstadter, and The First and Last Freedom by Jiddu Krishnamurti.


Thank you :)


Cheers! :)


Damn fine write up. Favorited this one, good job!


Glad you liked it, thank you :)


Thanks for the post. It is great seeing it written out (kind of like structure and interpretation of classical mechanics).


Thank you for the kind message : )


I never understood the fascination people have with self-referential 'paradoxes' like: "This statement is False".

Pronouns do not have an independent existence. Until the pronoun 'This' resolves to an actual statement which has a valid true/false property, it is a recurrence without termination, i.e. an infinite loop, that has no meaning.

Suppose I say:

1. Sky is blue.
2. Previous statement is True.
3. Previous statement is True.

And ask you 'Is Statement 3 True or False?'. You would say 'First I need to know what 'Previous' refers to', so you would do pronoun substitution for the word 'Previous' in Statement 3:

3. 'Previous statement is True' is True.

Since this statement has an unresolved pronoun, you would do pronoun substitution for the word 'Previous' in Statement 2:

3. ''Sky is blue' statement is True' is True.

Since there are no more unresolved pronouns, you would evaluate the statement 'Sky is blue', then Statement 2, and finally Statement 3, and say 'Yes, Statement 3 is True'. So far, so good?

Now, suppose I say:

1. This statement is False.

And ask you 'Is Statement 1 True or False?'. You would say 'First I need to know what 'This' refers to', so you would do pronoun substitution for the word 'This' in Statement 1:

1. 'This statement is False' statement is False.

Since this statement has an unresolved pronoun ('This'), you would do pronoun substitution again.

- ''This statement is False' statement is False' statement is False.

Since this statement has an unresolved pronoun ('This'), you would do pronoun substitution once more.

- '''This statement is False' statement is False' statement is False' statement is False'

And you will keep doing pronoun substitutions forever, since statements with unresolved pronouns cannot have a True/False property.
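
The substitution procedure described above is easy to mechanize; a toy sketch of it (the rewriting rule is just one way to formalize the walkthrough):

```python
def substitute(stmt, max_steps):
    """Resolve the pronoun 'This statement' by replacing it with a
    quotation of the whole statement, as in the walkthrough above.
    Returns (result, resolved) after at most max_steps substitutions."""
    s = stmt
    for _ in range(max_steps):
        if "This statement" not in s:
            return s, True   # no pronoun left: the result can be evaluated
        s = s.replace("This statement", "'" + stmt + "' statement", 1)
    return s, False          # pronoun still unresolved; the string only grew

expanded, resolved = substitute("This statement is False.", 10)
# resolved is False, and each round makes the string longer: the
# resolution never terminates, so no truth value is ever assigned.
```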

I find this teenage-girl-at-a-rock-concert-feigned-fascination with self-referential 'paradoxes' silly.


I think you're too quick to dismiss what is widely considered to be one of the most important and profound results in logic and computability without taking the time to fully appreciate it.

Your procedure of repeated substitution in order to resolve a paradox does little to shed light on the situation since at the heart of the halting problem is that there's no formal system that can determine whether this repeated substitution will ever come to an end.

Sure in your trivial example we could prove that it never comes to an end and determine that the statement is a paradox and label it as such, but there are some statements where it's not clear whether there's a paradox in the first place and there are some statements that are a paradox in one interpretation but in another interpretation are perfectly sensible.

Rest assured, no mathematician is scratching their head wondering whether "This statement is false." is some kind of mysterious statement whose undecidability has profound consequences. The issue is with statements like whether there exist 3 integers, x, y, z such that:

x^3 + y^3 + z^3 = 114

That statement in and of itself may very well be a "paradox" similar to a statement such as "This statement has no proof in theory T." even though on the surface it looks like a perfectly reasonable equation that should either have a solution or not have a solution. I mean either three such numbers exist or they don't exist, right?

And yet... it's possible that for the equation I gave there are solutions only under some interpretations of what we normally call natural numbers and in other interpretations of what we call natural numbers there isn't a solution, and there's no formal system that can filter out one interpretation over another so that there is one and only one unambiguous interpretation of natural numbers that we can always rely upon as the "real" interpretation.

There are perfectly normal looking sentences and problems that on the surface don't appear at all to be paradoxes or self referential, but then become so when you try to pin the question more precisely.

That's where the fascination comes from, from looking at a seemingly normal looking equation or statement about numbers that should just be true or false and realizing that whether it's true or false depends on some very deep and as-of-yet unknown properties of what we even mean when we talk about natural numbers.
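
The open-ended nature of such a question can be seen directly in code (a brute-force sketch; the bounds and printed examples are mine):

```python
def cube_sum_solution(n, bound):
    """Search for integers x, y, z with |x|, |y|, |z| <= bound and
    x**3 + y**3 + z**3 == n. The unbounded version of this loop is a
    semi-decision procedure: it halts exactly when a solution exists,
    and no general algorithm decides in advance, for arbitrary
    Diophantine equations, whether such a search will ever stop."""
    r = range(-bound, bound + 1)
    for x in r:
        for y in r:
            for z in r:
                if x ** 3 + y ** 3 + z ** 3 == n:
                    return (x, y, z)
    return None

print(cube_sum_solution(29, 4))   # a small witness exists, e.g. 3^3 + 1^3 + 1^3
print(cube_sum_solution(114, 20)) # None -- no witness in this small range
```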


PP seems to be asking for an example that is not obviously self-contradictory yet demonstrates a serious, insurmountable problem, thereby illustrating the incompleteness theorem.

Is the theorem you describe such an example? If so, you have refuted PP with a non-handwaving result.

If not, then you've provided more evidence supporting PP's complaint.


People have a fascination with "This sentence is false" because it highlights the need to be precise in distinguishing between the language of a logical system and a metalanguage used to talk about that language. That discovery, or 'invention' if you will, is something that was not put on a rigorous foundation until the 20th century.


Foundational theories of Computer Science would be rendered inconsistent if they allowed the construction of the proposition I'mFalse or the [Gödel 1931] proposition I'mUnprovable using fixed points.

    Fortunately, orders on propositions prohibit the
    construction of the proposition *I'mFalse* and also
    prohibit the construction of the [Gödel 1931]
    proposition *I'mUnprovable*.


Well, the liar paradox dates to antiquity, the halting problem has been well known for 70+ years, etc., and yet.... it turns out we can still use logic and software to solve lots of practical problems anyway, simply by avoiding self-referential contradictions and infinite loops.

Arithmetic seems to have a number (so to speak) of practical applications as well.

Perhaps it's a bit like theory vs. practice, or maybe mathematics vs. engineering.


Just imagine the number of times the author has used the word "imagine", lol.


Cool! I've always loved the sort of nihilism of the incompleteness theorems


I've been trying to understand it for so long and finally it's clear


Glad you enjoyed it! :)


Such a quality write-up, I understood the topic easily ! Thank you :-)


Cheers and thank you!


What's the difference amongst formula, statement, proof, and successor?


Shouldn't you use 2 for a '(' instead of 1?


Nevermind, my mistake


Very off-topic, but long ago when Albert Einstein went for a car ride with Mr. Gödel to become U.S. citizens, Einstein was trying very hard to think of ways to shut him up about a Constitution loophole he discovered. Things didn't go exactly as planned, and in front of the naturalization examiner, Mr. Gödel started blabbing about how he had found a way the U.S could be transformed into a fascist regime...

Sources: https://robert.accettura.com/wp-content/uploads/2010/10/Morg...

https://en.wikipedia.org/wiki/Kurt_Gödel#cite_note-23


In light of current circumstances this story has actually been on my mind! Has there ever been any more detail revealed about Godel's scheme?


There is some more information at

https://jeffreykegler.github.io/personal/morgenstern.html

There's also an article about this question at

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2010183


Thanks the second one finally answered the question:

In summary, Gödel’s loophole is that the amendment procedures set forth in Article V self-apply to the constitutional statements in Article V themselves, including the entrenchment clauses in Article V. Furthermore, not only may Article V itself be amended, but it may also be amended in a downward direction (i.e., through an “anti-entrenchment” amendment making it easier to amend the Constitution). Lastly, the Gödelian problem of self-amendment or anti-entrenchment is unsolvable. In addition, the author identifies some “non-Gödelian” flaws or “design defects” in the Constitution and explains why most of these miscellaneous design defects are non-Gödelian or non-logical flaws.


I'm sad that there isn't an explanation of Godel's proof!

But, given that we can amend the constitution, it does seems fairly straightforward that a dictator could come to power, if there was approval of a super majority of the people.


Wait what? Do tell, the links aren't great on my phone.


The constitution is BS anyway.

For example, Article I, section 10 says "No state shall...coin money, emit bills of credit, make any thing but gold and silver a tender in payment of debts..."

And look what the US has today: fiat money. Not only is it not gold or silver, but it's not even backed by it.

Based on this definition, nobody in the US earns any money so nobody should have to pay any income tax since basically everyone's income is 0 ounces of gold and 0 ounces of silver. Yet this argument would not stand in court even though it's 100% logical.

We're all just a bunch of monkeys. The implementation of the law always comes down to what the most popular monkeys think it is.

If the most popular monkeys think that the definition of 'gold' actually means 'feces', then before you know it, we will all start throwing feces at each other as a means of payment.

What's the point of defining the law using words if people will later distort the meaning of those words to suit their purpose? you may as well rely on gut feelings and majority rule to decide what is right or wrong.


Key word is “state”. The point is that the states don’t coin money - the federal government does.


This is yet one more attempt to distort the original meaning of the law by drawing attention to specific words that are immaterial to the original intent of the law.

The states are in fact printing money which is not gold or silver no matter how you look at it... It doesn't matter that they are all printing the same currency. This activity is happening within the physical confines of individual states and they are using that money to pay their debts.

The simplest interpretation is the correct one. Especially when you consider the history of the US.

If the law were merely about not allowing states to have sovereignty over their own fiat currencies, then it would not have mentioned the words 'gold' or 'silver'.


Aside from your gold bug commentary, I do find your poop example intriguing as it's probably one of the worst examples possible for currency:

- literally anyone can make it

- it is unsanitary and can spread disease

- it is difficult to measure

On the plus side:

- it does represent (some amount of) labor


I think it would be an amazing currency.

- Good alternative to Universal Basic Income since people can produce it themselves directly.

- The unsanitary aspect of it will help prevent too large accumulation of it. People will have a stronger incentive to put their money to work instead of hoarding it.

- Not difficult to measure since it can be weighed and banks could print paper bills that are backed by it using weight as the units of account.

The main downside I can see is that it will encourage people to eat more and demand for laxatives would grow significantly.


And like any fine metal you can polish it! https://www.youtube.com/watch?v=yiJ9fy1qSFI


Fiat money is not issued by states, it is issued by the Federal government.

I think you are confusing the word "state".


Lispers learned marketing, bravo! The only issue is the missing credit to the Quanta article https://www.quantamagazine.org/how-godels-incompleteness-the....


That's pretty rude of you. I didn't read that essay, but I did reference about 4 essays / books that I did read :).

Went through the quanta article -- it seems reasonably different (doesn't go deep on how proof, subst, work, etc) -- I suspect the similarities stem from both of us reading Nagel and Newman's book.


Ok, then I apologize, it seemed very similar and recent.


Cheers!


Hi Amboo7!

The article "What Gödel Discovered" improves on the Quanta article linked above by using Lisp expressions instead of Gödel numbers.

The proof in "What Gödel Discovered" is still incorrect because the Lisp expressions it uses leave out the crucial orders on propositions in Russell's Principia Mathematica.


> This proof showed that “1 + 1”, does indeed equal “2”. It took 2 volumes to get here.

I know this seems logical to mathematicians, but it feels to me like taking 2 volumes to prove something that any child knows intuitively is... I don't know what word I am looking for... obsessive?


You can think of "1 + 1 = 2" not as an important question in this context, but rather as a proof of concept for the system they devised. It's a bit like the story about a "black triangle" [0]: a rather useless model passed through a rendering pipeline to produce a result any OpenGL novice could scramble together in a couple dozen lines, but showing that the whole pipeline does, indeed, actually work as intended.

[0] https://rampantgames.com/blog/?p=7745


They're describing a new "programming" language, and demonstrating their "hello world".


The point is not to prove that 1+1=2. The point is to construct mathematics (all of mathematics) directly from the axioms using a formal language. The objective of 1+1=2 is not interesting, but the result is a rigorous foundation of mathematics using formal languages.

The upshot is twofold: (1) we have a rigorous foundation of mathematics in formal logic which CAN be used to prove nontrivial statements, as in the application of model theory to mathematics, and (2) we can now build proof assistants that allow us to use computers to better understand proofs.

Both applications of formal reasoning are equally interesting and have wide ramifications in mathematics.


The idea is that the 1+1 proof is made without the mathematical apparatus (the high-level language) that only becomes available after it, i.e. it's like a huge assembler bootloader built from verified logical primitives to load an operating system with a high-level API (mathematics).


They didn't set out to explicitly prove 1+1=2, they just got around to it after 2 volumes, it didn't require 2 volumes of background. A direct proof of 1+1=2 is pretty short in most logical systems.
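For a sense of how short a direct proof can be once arithmetic is taken as given rather than derived: with numerals defined from scratch (here as nested tuples standing in for Peano's S(...) successor), "1 + 1 = 2" is a one-step computation. The names below are ad hoc, nothing like Principia's notation.

```python
ZERO = ()

def succ(n):
    """S(n), the successor of n."""
    return (n,)

def add(a, b):
    """Peano addition: a + 0 = a, and a + S(b) = S(a + b)."""
    if b == ZERO:
        return a
    return succ(add(a, b[0]))

ONE = succ(ZERO)
TWO = succ(ONE)

# 1 + 1 unfolds as: add(ONE, succ(ZERO)) -> succ(add(ONE, ZERO)) -> succ(ONE) = TWO
assert add(ONE, ONE) == TWO
```

Principia's two volumes are long because they build the very notions of number, successor, and equality out of logic first; the final step is trivial.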


The point was to prove that math wasn't an arbitrary human construct, that it was rooted in the fundamental nature of reality.


we mostly live our lives as if P!=NP, but that does not detract from the importance of proving it.



