
> Theorem proving is a single-player game with an insanely big search space, I always thought it would be solved long before AGI.

This seems so weird to me - AGI is ill-defined as a term, imo, but why would you expect "producing something generally intelligent" (i.e. median-human-level intelligence) to be significantly harder than "this thing is better than Terence Tao at maths"?




My intuition tells me we humans are generally very bad at math. Proving a theorem, in an idealized way, mostly involves going from point A to point B in the space of all proofs, using previous results as stepping stones. This isn't a particularly "hard" problem for computers, which are able to navigate the search spaces of various games (chess, go...) much more efficiently than we can.
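To make the "point A to point B" framing concrete, proof search can be modeled as graph search: nodes are statements, edges are inference steps, and axioms are the starting points. This is a deliberately toy sketch (statements as strings, a hypothetical hand-written derivation graph), nothing like the term spaces real provers search:

```python
from collections import deque

def prove(axioms, rules, goal):
    """Breadth-first search from known statements to a goal.

    axioms: set of initially known statements
    rules:  dict mapping a statement to statements derivable from it
    goal:   the statement we want to reach
    Returns the chain of statements used, or None if unreachable.
    """
    frontier = deque([(a, [a]) for a in axioms])
    known = set(axioms)
    while frontier:
        stmt, path = frontier.popleft()
        if stmt == goal:
            return path
        for nxt in rules.get(stmt, []):
            if nxt not in known:
                known.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# Tiny made-up derivation graph: A derives B, B derives C.
rules = {"A": ["B"], "B": ["C"]}
print(prove({"A"}, rules, "C"))  # ['A', 'B', 'C']
```

The catch, of course, is that in real mathematics the branching factor is astronomical, which is exactly why intuition and heuristics matter so much.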

On the other hand, navigating the real world mostly consists of applying a ton of heuristics we are still kind of clueless about.

At the end of the day, we won't know until we get there, but I think my reasons are compelling enough to believe what I believe.


I don't think computers have an advantage just because they can navigate search spaces efficiently. The search space for difficult theorems is gigantic. Proving them often relies on a combination of experience with rigorous mathematics and very good intuition [1], as well as many, many steps. One example is the classification of the finite simple groups [2], which took about 200 years and a lot of small steps. I guess brute forcing for 200 years with the technology available today might work. But I'm sceptical, and I sort of hope I won't be out of a job in 10 years. I'm certainly curious about the current development.

[1] https://terrytao.wordpress.com/career-advice/theres-more-to-...

[2] https://en.m.wikipedia.org/wiki/Classification_of_finite_sim...


Oh for sure, when I say "soon" it's only relative to AGI.

What I meant to convey is that theorem proving, at least, is a well-defined problem, and computers have had some successes in similar-ish search problems before.

Also I don't think pure brute-force was ever used to solve any kind of interesting problem.

Chess engines use alpha-beta pruning plus empirical heuristics people have come up with over the years. Go engines use Monte Carlo Tree Search with deep learning models for node evaluation. Theorem proving, when it is solved, will almost certainly use some kind of neural network too.
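For reference, the alpha-beta pruning that chess engines build on fits in a few lines. A minimal sketch over a toy game tree (nested lists for internal nodes, numbers for illustrative leaf values; real engines add move ordering, transposition tables, and learned evaluation on top):

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a toy tree.

    A node is either a number (a leaf's value) or a list of children.
    alpha/beta bound the window of values still worth exploring.
    """
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent will never allow this line
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:  # we already have a better option elsewhere
                break
        return value

# Depth-3 example tree: max -> min -> max -> leaves.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree, -math.inf, math.inf, True))  # 5
```

The pruning cuts whole subtrees that provably can't change the result, which is exactly the kind of structure-exploiting trick that pure brute force lacks.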


I also think humans are bad at math. And that we are probably better at real-world tasks, but maybe that's just because the real world gives us more data anyway.


Replace "this thing is better than Terence Tao at maths" with "this thing is better than Garry Kasparov at chess" and your statement would have sounded equally reasonable in the early 90s.


Because being good/competent at EVERYTHING is harder for a computer than being great at any one thing, since generality requires fluid intelligence while a single specialist skill can be achieved through brute-force search, as we've seen over and over again from Deep Blue 30 years ago to the present day.

In the meantime, while Deep Blue beat the world chess champion, Kasparov, at chess, our best efforts at generalism - LLMs that many (not me!) think are the path to AGI - struggle to play tic-tac-toe.


Because "Generally Intelligent" is a very broad and vague term.

"Better than Terrence Tao at solving certain formalized problems" (not necessarily equal to "Better that Terrence Tao at maths) isn't.


Not to join the hate train, but it probably isn't better than Terence Tao at solving these problems either, since Tao would probably have been able to solve them himself, and likely in less time. There were participants in this contest who did better than the AI.

Still, an impressive result for sure.


Seems fair - I strongly think that solving "certain formalized problems" is very, very far from doing actual maths! Actual maths is often much less about solving known conjectures and much more about developing new theories.



