Hacker News

> I've never actually read that in fiction.

I find that hard to believe. Ever watch Terminator?

But even if that's true, the plot is so pervasive in science fiction that it would be easy to pick up second-hand from the millions who share the software engineer's blurry line between fantasy and reality.

> It's just logical really.

OK, then. You're a GI, go off and build an army of better yous and take over the world.



The idea is indeed logical and stupidly obvious, once you learn the basics of what "optimization" means, or what "recursion" is.

> I find that hard to believe. Ever watched Terminator?

Terminator has fuck all to do with recursive self-improvement. Don't confuse people who grew up on sci-fi with people who casually went to see Terminator or some other pop-culture artifact featuring some kind of "AI".

> OK, then. You're a GI, go off and build an army of better yous and take over the world.

What do you think the drama with eugenics, genetic engineering and designer babies is around? It's literally humans trying to make better humans in the only way that is available - reproduction.

AI made in silico would be more malleable, easier and cheaper to replicate. Self-improving software isn't even a fantasy; it exists in many forms - though it's far from open-ended like a self-improving GI would be.
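For what it's worth, the narrow, non-open-ended kind of self-improving software the comment means is easy to illustrate. Here's a toy sketch in Python (everything here is a made-up example, not any particular system): a (1+1) evolution strategy that improves its candidate solution while also tuning its own mutation step size via the classic 1/5 success rule. The loop improves the thing doing the improving - but only within a fixed design, which is exactly the distinction being drawn.

```python
import random

random.seed(0)  # deterministic for the example

def self_tuning_optimizer(f, x0, steps=2000):
    """(1+1) evolution strategy: improves its candidate AND adapts its
    own mutation step size (1/5 success rule) - a closed feedback loop,
    not open-ended self-improvement."""
    x, sigma = x0, 1.0
    best = f(x)
    for _ in range(steps):
        candidate = x + random.gauss(0, sigma)
        score = f(candidate)
        if score < best:             # improvement: keep it, search more boldly
            x, best = candidate, score
            sigma *= 1.5
        else:                        # failure: search more cautiously
            sigma *= 0.9
    return x, best

# Minimize (x - 3)^2; the loop homes in on x = 3 while shrinking sigma.
x, best = self_tuning_optimizer(lambda x: (x - 3) ** 2, x0=0.0)
```

The optimizer rewrites its own search behavior (sigma) as it runs, yet it can never escape the frame its designer set - which is the gap between this and the open-ended case.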


It is not technically logical to think one can see the future, but it is colloquially logical.

Judging reality by how it appears is a bad strategy; this should be common knowledge by now.

What's concerning to me is that I suspect LLMs will be able to learn and remember thousands of basic facts like this, and ~reason on top of them. Perhaps they won't figure this out on their own, but what if all it takes is one individual to point them in this direction? I bet there are numerous people working for our various three-letter agencies who know much more about this than I do.


>>> I've never actually read that in fiction.

>> I find that hard to believe. Ever watched Terminator?

> Terminator has fuck all to do with recursive self-improvement. Don't confuse people who grew up on sci-fi with people who casually went to see Terminator or some other pop-culture artifact featuring some kind of "AI".

You're not following the thread. The future timeline in Terminator does involve something like an AI making "a billion more robots [to] take over the world." The popularity of that and similar sci-fi makes that claim that someone has never encountered it hard to believe.

> What do you think the drama with eugenics, genetic engineering and designer babies is around?

So how has that been going? Those things should also probably be labeled "science fiction."

> AI made in silico would be more malleable, easier and cheaper to replicate. Self-improving software isn't even a fantasy; it exists in many forms - though it's far from open-ended like a self-improving GI would be.

Fantasies based on squishy assumptions. How do you know it would have an easier job optimizing itself than humans have? How do you know there isn't some fundamental contradiction in the concept of "superintelligence" that these fantasies are based on? Or even just some practical resource limit that makes the fantasy impossible?


> The future timeline in Terminator does involve something like an AI making "a billion more robots [to] take over the world."

Yes. That's distinctly different from Skynet iterating on itself a billion times to make itself smarter, which, AFAIK (I'm not up to date on the full Terminatorverse, but then, most people aren't either), isn't something that happened in that story.

> The popularity of that and similar sci-fi makes that claim that someone has never encountered it hard to believe.

Again, there's very little in mass-market sci-fi of what we're discussing here. And most people, including many in tech, have a hard time wrapping their heads around the idea of a feedback loop, so no, I don't think it's something readily available from mass-market sci-fi.

(But the more niche, better thought-out works, will teach you feedback loops, and this is just one of the ways recursive self-improvement becomes an obvious idea.)
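The feedback-loop point above is just compound growth, and it fits in a few lines of Python (numbers purely illustrative, chosen only to make the shapes visible):

```python
# Toy model: capability improved at a fixed external rate vs. at a rate
# proportional to current capability (the feedback loop).
linear, recursive = 1.0, 1.0
for gen in range(30):
    linear += 0.5                  # outside engineers add a fixed increment
    recursive += 0.5 * recursive   # the system improves itself in proportion
                                   # to how capable it already is
# linear ends at 16.0; recursive at 1.5**30, roughly 1.9e5
```

Same per-step "effort", wildly different trajectories - which is why the feedback loop, not any particular robot army, is the load-bearing idea.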

> So how has that been going? Those things should probably be labeled "science fiction."

Eugenics? We had to ban it and create such a strong cultural (and legal) repulsive field around it that it now impedes biotech and medical research.

Designer babies? Weren't attempts made in China recently? And in the West, we're already correcting congenital defects, so all in all, it's less "science fiction", and more "science someone is going to apply soon, if they haven't already".

> How do you know it would have an easier job optimizing itself than humans have?

Because it was created by us, using processes and media that are strongly optimized for malleability: software, algorithms, digital data, optimization models. All well-defined (and comprehensible to an AGI, by definition) - unlike our own minds, which were not made by us but by a dumb, random process; and the fact that brains are made of stupidly complex nanotech instead of simple transistors doesn't help.

Also because the kind of models we're now worried about gain capability through an optimization process that's open-ended, and limited only by availability of training data and compute. So if e.g. a successor of GPT-4 were to become AGI, it would be set up for recursive self-improvement from day one.

> How do you know there isn't some fundamental contradiction in the concept of "superintelligence" that these fantasies are based on? Or even just some practical resource limits that makes the fantasy impossible?

Maybe, but what makes you think this is the case? We know of some fundamental limits to compute, but we're very, very far from hitting them. Otherwise, I don't know of anything that would put a cap on intelligence at around human level. Remember: by the very nature of evolution, we're the dumbest possible beings capable of learning and building a technological civilization. There may be better brain designs than ours, but ours "took off", and we took over the world.


Sadly I can't build a better me, as I'm not of robotic construction. And I was being a bit flippant about the world takeover. But as soon as AI reached human level, it would quickly go beyond it, given the rate these things improve, allowing it to get to work on improved models. For something along those lines in the real world, think the Tesla robots, but with far better AI.

Actually, thinking about it, I wouldn't rule out Musk/Tesla going for the world-takeover thing ;)


> Sadly I can't build a better me as I'm not of robotic construction.

Why the fuck not? You literally have all the code to manufacture a person.





