
If AGI is ever achieved, it would open the door to recursive self improvement that would presumably rapidly exceed human capability across any and all fields, including AI development. So the AI would be improving itself while simultaneously also making revolutionary breakthroughs in essentially all fields. And, for at least a while, it would also presumably be doing so at an exponentially increasing rate.

But I think we're not even on the path to creating AGI. We're creating software that replicates and remixes human knowledge at a fixed point in time. And so it's a fixed target that you can't really exceed, which would itself already entail diminishing returns. Pair this with the fact that it's based on neural networks, which also invariably reach a point of sharply diminishing returns in essentially every field they're used in, and you have something that looks much closer to what we're doing right now - where all competitors will eventually converge on something largely indistinguishable from each other, in terms of ability.



> revolutionary breakthroughs in essentially all fields

This doesn't really make sense outside of computing. Since the AI would be training itself, it needs to have the right answers, but as of now it doesn't really interact with the physical world. The most it could do is write code and check things that leave no room for interpretation, like speed, latency, error rates, exceptions, etc.

But, what other fields would it do this in? How can it make strides in biology? It can't dissect animals, and it can't figure out more about plants than what humans feed into the training data. Regarding math, math is human-defined. Humans said "addition does this", "this symbol means that", etc.

I just don't understand how AI could ever surpass anything humans already know when it lives by the rules defined by us.


[in Morpheus voice]

"But when AI got finally access to a bank account and LinkedIn, the machines found the only source of hands it would ever need."

That's my bet, at least - especially with remote work and the like: if the machines were really superhuman, they could convince people to partner with them to do anything else.


You mean like convincing them to invest implausibly huge sums of money in building ever bigger data-centres?


It is interesting that, even before real AGI/ASI gets here, "the system wants what it wants": capitalism + computing/internet creates the conditions for an infinite amplification loop.

I am amazed, hopeful, and terrified TBH.


Feedback gain loops have a tendency to continue right up to the point they blow a circuit breaker or otherwise drive their operating substrate beyond linear conditions.


This made me laugh and feel scared simultaneously.


I assume someone has already written it up as a sci-fi short story, but if not I'm tempted to have a go...


It starts to veer into sci-fi and I don't personally believe this is practically possible on any relevant timescale, but:

The idea is a sufficiently advanced AI could simulate... everything. You don't need to interact with the physical world if you have a perfect model of it.

> But, what other fields would it do this in? How can it make strides in biology? It can't dissect animals ...

It doesn't need to dissect an animal if it has a perfect model of it that it can simulate. All potential genetic variations, all interactions between biological/chemical processes inside it, etc.


Didn't we prove that it is mathematically impossible to have a perfect simulation of everything, though (chaos theory, for instance)? These AIs would actually have to conduct experiments in the real world to find out what is true. If anything, this sounds like the modern (or futuristic) version of the empiricism-versus-rationalism debate.

>It doesn't need to dissect an animal if it has a perfect model of it that it can simulate. All potential genetic variations, all interactions between biological/chemical processes inside it, etc.

Emphasis on perfection - easier said than done. Somehow this model was able to simulate millions of years of evolution so it could predict the vestigial organs of unidentified species? We inherently cannot model how a triple pendulum swings, but somehow this AI figured out how to simulate millions of years of evolution of unidentified species in the Amazon and can tell you all of their organs, with 100% certainty, before anyone can check?

I feel like these AI doomers/optimists are going to be in for a shock when they find out that (unfortunately) John Locke was right about empiricism, and that there is a reason we use experiments and evidence to figure out new information. Simulations are ultimately not enough for every single field.


It’s plausible in a sci-fi sort of way, but where does the model come from? After a hundred years of focused study we’re kinda beginning to understand what’s going on inside a fruit fly, how are we going to provide the machine with “a perfect model of all interactions between biological/chemical processes”?

If you had that perfect model, you’ve basically solved an entire field of science. There wouldn’t be a lot more to learn by plugging it into a computer afterwards.


> You don't need to interact with the physical world if you have a perfect model of it.

How does it create a perfect model of the world without extensive interaction with the actual world?


How will it be able to devise this perfect model if it can't dissect the animal, analyze the genes, or perform experiments?


Well, first, it would be so far beyond anything we can comprehend as intelligence that even asking that question is considered silly. An ant isn't asking us how we measure the acidity of the atmosphere. It would simply do it via some mechanism we can't implement or understand ourselves.

But, again with the caveats above: if we assume an AI that is infinitely more intelligent than us and capable of recursive self-improvement to where its compute was made more powerful by factorial orders of magnitude, it could simply brute force (with a bit of derivation) everything it would need from the data currently available.

It could iteratively create trillions (or more) of simulations until it finds a model that matches all known observations.


> Well, first, it would be so far beyond anything we can comprehend as intelligence that even asking that question is considered silly.

This does not answer the question. The question is "how does it become this intelligent without being able to interact with the physical world in many varied and complex ways?". The answer cannot be "first, it is superintelligent". How does it reach superintelligence? How does recursive self-improvement yield superintelligence without the ability to richly interact with reality?

> it could simply brute force (with a bit of derivation) everything it would need from the data currently available. It could iteratively create trillions (or more) of simulations until it finds a model that matches all known observations.

This assumes that the digital encoding of all recorded observations is enough information for a system to create a perfect simulation of reality. I am quite certain that claim is not made on solid ground; it is highly speculative. I think it is extremely unlikely, given the very small number of things we've recorded relative to the space of possibilities, and the very many things we don't know because we don't have enough data.


>The idea is a sufficiently advanced AI could simulate... everything

This is a demonstrably false assumption. Foundational results in chaos theory show that many processes require exponentially more compute to simulate for a linearly longer time period. For such processes, even if every atom in the observable universe was turned into a computer, they could only be simulated for a few seconds or minutes more, due to the nature of exponential growth. This is an incontrovertible mathematical law of the universe, the same way that it's fundamentally impossible to sort an arbitrary array in O(1) time.
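
A minimal sketch in Python of the sensitivity this comment is pointing at, using the logistic map (my choice of example, not something from the thread): two trajectories that start a tiny distance apart separate roughly exponentially fast, so each extra step of prediction demands exponentially finer knowledge of the initial state.

    # Logistic map at r = 4.0: a standard chaotic system.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    x, y = 0.2, 0.2 + 1e-12  # two starts differing by one part in 10^12
    for step in range(1, 61):
        x, y = logistic(x), logistic(y)
        if step % 10 == 0:
            print(f"step {step:2d}: separation = {abs(x - y):.3e}")
    # The gap grows by a roughly constant factor per step (the Lyapunov
    # exponent), reaching order 1 after ~40 steps despite the 1e-12 start.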


The counter-argument to this from the AI crowd would be that it's fundamentally impossible for _us_, with our goopy brains, to understand how to do it. Something that is factorial-orders-of-magnitude smarter and faster than us could figure it out.

Yes, it's a very hand-wavey argument.


You're right, but how much heavy lifting is being done by this phrase?

> if it has a perfect model


It feels very much like "assume a spherical cow..."


A perfect model of the world is the world. Are you saying AI will become the universe?


You can be super-human intelligent, and still not have a perfect model of the world.


We aren't that far away from AI that can interact with the physical world and run its own experiments. Robots in humanoid and other forms are getting good and will be able to do everything humans can do in a few years.


>And, for at least a while, it would also presumably be doing so at an exponentially increasing rate.

Why would you presume this? I think part of a lot of people's AI skepticism is talk like this. You have no idea. Full stop. Why wouldn't progress be linear? As new breakthroughs come, newer ones will be harder to come by. Perhaps it's exponential. Perhaps it's linear. No one knows.


No one knows, but it's a reasonable assumption, surely. If you're theorising an AGI that has recursive self-improvement, exponential improvements seem almost unavoidable. AGI improves understanding of electronics, physics, etc.; that improves the AGI, leading to new understandings, and so on. Add in that new discoveries in one field might inspire the AGI/humans to find things in others, and it seems hard to imagine a situation where there's not a lot of progress everywhere (at least theoretical progress; building new things might be slower / more costly than reasoning that they would work).

Where I'm skeptical of AI is the idea that an LLM can ever get to AGI level, whether AGI is even really possible, and whether the whole thing is actually viable. I'm also very skeptical that the discoveries of any AGI would be shared in ways that would allow exponential growth: licences stopping you from using their AGI to make your own, copyright on the new laws of physics, royalties on any discovery you make from using those new laws, etc.


>If you're theorising an AGI that has recursive self-improvement, exponential improvements seem almost unavoidable.

Prove it.

Also, AI will need resources. Hardware. Water. Electricity. Can those resources be supplied at an exponential rate? People need to calm down and stop stating things as truth when they literally have no idea.


Well said. It does seem that many who speculate on this are not taking into account limits where more/faster processing won't actually help much. Say an algorithm is proven to be O(n!) in all cases: at a certain size of n, there's not much that can be done if the algorithm is needed as is.
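
A small illustration in Python of that point, assuming a hypothetical O(n!) algorithm (the numbers and time budget are mine, chosen only for illustration): even a million-fold hardware speedup barely moves the largest feasible problem size.

    import math

    def largest_feasible_n(ops_per_second, seconds=86_400):
        """Largest n such that n! operations fit in a one-day budget."""
        budget = ops_per_second * seconds
        n = 1
        while math.factorial(n + 1) <= budget:
            n += 1
        return n

    for ops in (1e9, 1e12, 1e15):  # roughly giga-, tera-, peta-ops per second
        print(f"{ops:.0e} ops/s for a day -> n <= {largest_feasible_n(ops)}")
    # Prints n <= 16, 18, 21: each 1000x speedup buys only 2-3 extra n,
    # because n! outgrows any exponential.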


Which is why I am an agnostic. :)


It's a logical presumption. Researchers discover things. AGI is a researcher that can be scaled, research faster, and requires no downtime. Full stop. If you don't find that obvious, you should probably figure out where your bias is coming from. Coding and algorithmic advance does not require real world experimentation.


> Coding and algorithmic advance does not require real world experimentation.

That's nothing close to AGI though. An AI of some kind may be able to design and test new algorithms because those algorithms live entirely in the digital world, but that skill isn't generalized to anything outside of the digital space.

Research is entirely theoretical until it can be tested in the real world. For an AGI to do that it doesn't just need a certain level of intelligence, it needs a model of the world and a way to test potential solutions to problems in the real world.

Claims that AGI will "solve" energy, cancer, global warming, etc all run into this problem. An AI may invent a long list of possible interventions but those interventions are only as good as the AI's model of the world we live in. Those interventions still need to be tested by us in the real world, the AI is really just guessing at what might work and has no idea what may be missing or wrong in its model of the physical world.


If AGI has human capability, why would we think it could research any faster than a human?

Sure, you can scale it, but if an LLM takes, say, $1 million a year to run an AGI instance, but it costs only $500k for one human researcher, then it still doesn’t get you anywhere faster than humans do.

It might scale up, it might not, we don’t know. We won’t know until we reach it.

We also don’t know if it scales linearly. Or if it’s learning capability and capacity will able to support exponential capability increase. Our current LLM’s don’t even have the capability of self improvement or learning even if they were capable: they can accumulate additional knowledge through the context window, but the models are static unless you fine tune or retrain them. What if our current models were ready for AGI but these limitations are stopping it? How would we ever know? Maybe it will be able to self improve but it will I’ll take exponentially larger amounts of training data. Or exponentially larger amounts of energy. Or maybe it can become “smarter” but at the cost of being larger to the point where the laws of physics mean it has to think slower, 2x the thinking but 2x the time, could happen! What if an AGI doesn’t want to improve?

Far too many unknowns to say what will happen.


> Sure, you can scale it, but if an LLM takes, say, $1 million a year to run an AGI instance, but it costs only $500k for one human researcher, then it still doesn’t get you anywhere faster than humans do.

Just from the fact that the LLM can/will work on the issue 24/7 vs a human who typically will want to do things like sleep, eat, and spend time not working, there would already be a noticeable increase in research speed.


This assumes that all areas of research are bottlenecked on human understanding, which is very often not the case.

Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.

An LLM would not be able to do 24/7 work in this case, and would only save a few hours per day at most. Scaling up to many experiments in parallel may not always be possible, if you don't know what to do with additional experiments until you finish the previous one, or if experiments incur significant cost.

So an AGI/expert LLM may be a huge boon for e.g. drug discovery, which already makes heavy use of massively parallel experiments and simulations, but may not be so useful for biological research (perfect simulation down to the genetic level of even a fruit fly likely costs more compute than the human race can provide presently), or research that involves time-consuming physical processes to complete, like climate science or astronomy, that both need to wait periodically to gather data from satellites and telescopes.


> Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.

With automation, one AI can presumably do a whole lab's worth of parallel lab experiments. Not to mention, it'd be more adept at creating simulations that obviate the need for some types of experiments, or at least reduce the likelihood of dead-end experiments.


Presumably ... the problem is that this is an argument made purely as a thought experiment, same as gray goo or the paper clip argument. It assumes any real-world hurdles to self-improvement (or self-growth, for gray goo and paper-clipping the world) will be overcome by the AGI because it can self-improve. Which doesn't explain how it overcomes those hurdles in the real world. It's a circular presumption.


What fields do you expect these hyper-parallel experiments to take place in? Advanced robotics aren't cheap, so even if your AI has perfect simulations (which we're nowhere close to) it still needs to replicate experiments in the real world, which means relying on grad students who still need to eat and sleep.


Biochemistry is one plausible example. DeepMind made huge strides in protein folding, satisfying the simulation part, and in vitro experiments can be automated to a significant degree. Automation is never about eliminating all human labour, but about how much of it you can eliminate.


Only if it’s economically feasible. If it takes a city sized data center and five countries worth of energy, then… probably not going to happen.

There are too many unknowns to make any assertions about what will or won’t happen.


> ...the fact that the [AGI] can/will work on the issue 24/7...

Are you sure? I previously accepted that as true, but, without being able to put my finger on exactly why, I am no longer confident in that.

What are you supposed to do if you are a manically depressed robot? No, don't try to answer that. I'm fifty thousand times more intelligent than you, and even I don't know the answer. It gives me a headache just trying to think down to your level. -- Marvin to Arthur Dent

(...as an anecdote, not the impetus for my change in view.)


>Just from the fact that the LLM can/will work on the issue 24/7 vs a human who typically will want to do things like sleep, eat, and spend time not working, there would already be a noticeable increase in research speed.

Driving from A to B takes 5 hours; if we get five drivers, will we arrive in one hour or in five? In research there are many steps like this (in the sense that the time is fixed and independent of the number of researchers, or even of how much better one researcher is than another), and adding in something that neither sleeps nor eats isn't going to make the process more efficient.

I remember when I was an intern and my job was to incubate eggs and then inject the chicken embryos with a nanoparticle solution to then examine under a microscope. In any case, incubating the eggs and injecting the solution wasn't limited by my need to sleep. Additionally, our biggest bottleneck was getting the process approved by the FDA, not the fact that our interns required sleep to function.
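
One rough way to put numbers on this, borrowing Amdahl's law (my framing, not the commenter's, and the 50% serial share is made up for illustration): if some fraction of the research loop is inherently serial - incubation time, regulatory review - no amount of extra tireless researchers removes that floor.

    def overall_speedup(serial_fraction, worker_speedup):
        """Amdahl's law: speedup of the whole pipeline when only the
        parallelizable part gets faster."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / worker_speedup)

    for s in (10, 100, 1_000_000):  # how much faster the "thinking" part gets
        print(f"worker speedup {s:>9}x -> overall {overall_speedup(0.5, s):.2f}x")
    # With half the pipeline serial (waiting on eggs, the FDA, ...), even an
    # effectively unlimited researcher speedup only doubles throughput.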


If the FDA was able to work faster/more parallel and could approve the process significantly quicker, would that have changed how many experiments you could have run to the point that you could have kept an intern busy at all times?


It depends so much on scaling. Human scaling is counterintuitive and hard to measure - mostly way sublinear - like log2 or so - but sometimes things are only possible at all by adding _different_ humans to the mix.


My point is that “AGI has human intelligence” isn’t by itself enough of the equation to know whether there will be exponential or even greater-than-human speed of increase. There’s far more that factors in, including how quickly it can process, the cost of running, the hardware and energy required, etc etc

My point here was simply that there is an economic factor that trivially could make AGI less viable over humans. Maybe my example numbers were off, but my point stands.


This is fundamentally flawed. There are upper bounds of efficiency that are laws of nature. To assume AI would be supernatural is magical thinking.


Natural intelligence appears supernatural from our current understanding, so it's not surprising that AGI also appears so.


Neither appears supernatural from a scientific understanding.


And yet it seems to be the prevailing opinion even among very smart people. The "singularity" is just presumed. I'm highly skeptical, to say the least. Look how much energy it's taking to engineer these models, which are still nowhere near AGI. When we get to AGI it won't be immediately superintelligent, and perhaps it never will be. Diminishing returns surely apply to anything that is energy based?


Perhaps not, but what is the impetus of discovery? Is it purely analysis? History is littered with serendipitous invention; shower-thoughts lead to some of our best work. What's the AGI-equivalent of that? There is this spark of creativity that is a part of the human experience, which would be necessary to impart onto AGI. That spark, I believe, is not just made up of information but a complex weave of memories, experiences and even emotions.

So I don't think it's a given that progress will just be "exponential" once we have an AGI that can teach itself things. There is a vast ocean of original thought that goes beyond simple self-optimization.


This sounds like a romanticization of creativity.

Fundamentally discovery could be described as looking for gaps in our observation and then attempting to fill in those gaps with more observation and analysis.

The age of low-hanging-fruit, shower-thought inventions draws to a close when every field requires 10-20+ years of study to approach a reasonable knowledge of it.

"Sparks" of creativity, as you say, are just based upon memories and experience. This isn't something special, its an emergent property of retaining knowledge and having thought. There is no reason to think AI is incapable of hypothesizing and then following up on those.

Every AI can be immediately imparted with all expert human knowledge across all fields. Their threshold for creativity is far beyond ours, once tamed.


> It's a logical presumption. Researchers discover things. AGI is a researcher that can be scaled, research faster, and requires no downtime.

Those observations only lead to scaling research linearly, not exponentially.

Assuming a given discovery requires X units of effort, simply adding more time and more capacity just means we increase the slope of the line.

Exponential progress requires accelerating the rate of acceleration of scientific discovery, and for all we know that's fundamentally limited by computing capacity, energy requirements, or good ol' fundamental physics.


Prove it.


Or bottlenecked by data availability just like we humans are. Nothing will be exponential if a loop in the real world of science and engineering is involved.


Aren't we bottlenecked by not having any "prior art", as in not having reverse engineered any thinking machine like even a fly's brain? We can't even agree on a definition of consciousness and still don't understand the brain or how it works (to the extent that reverse engineering it can tell us something).


Coding and algorithmic advance does not require real world experimentation.


Right, but for self-improving AI, training new models does have a real-world bottleneck: energy and hardware. (Even if the data bottleneck is solved too.)


I always consider different options when planning for the future, but I'll give the argument for exponential:

Progress has been exponential in the generic. We made approximately the same progress in the past 100 years as the prior 1000 as the prior 30,000, as the prior million, and so on, all the way back to multicellular life evolving over 2 billion years or so.

There's a question of the exponent, though. Living through that exponential growth circa 50AD felt at best linear, if not flat.


So you concede that there's nothing special about AI versus earlier innovations?


> Progress has been exponential in the generic.

Has it? Really?

Consider theoretical physics, which hasn't significantly advanced since the advent of general relativity and quantum theory.

Or neurology, where we continue to have only the most basic understanding of how the human mind actually works (let alone the origin of consciousness).

Heck, let's look at good ol' Moore's Law, which started off exponential but has slowed down dramatically.

It's said that an S curve always starts out looking exponential, and I'd argue in all of those cases we're seeing exactly that. There's no reason to assume technological progress in general, whether via human or artificial intelligence, is necessarily any different.


I think you're talking about much shorter timelines than I am.

That's all noise.


> We made approximately the same progress in the past 100 years as the prior 1000 as the prior 30,000

I hear this sort of argument all the time, but what is it even based on? There’s no clear definition of scientific and technological progress, much less something that’s measurable clearly enough to make claims like this.

As I understand it, the idea is simply “Ooo, look, it took ten thousand years to go from fire to wheel, but only a couple hundred to go from printing press to airplane!!!”, and I guess that’s true (at least if you have a very juvenile, Sid Meier’s Civilization-like understanding of what history even is) but it’s also nonsense to try and extrapolate actual numbers from it.


Plotting the highest observable assembly index over time will yield an exponential curve starting from the beginning of the universe. This is the closest I’m aware of to a mathematical model quantifying the distinct impression that local complexity has been increasing exponentially.


There is no particular reason to assume that recursive self-improvement would be rapid.

All the technological revolutions so far have accounted for little more than a 1.5% sustained annual productivity growth. There are always some low-hanging fruit with new technology, but once they have been picked, the effort required for each incremental improvement tends to grow exponentially.

That's my default scenario with AGI as well. After AGI arrives, it will leave humans behind very slowly.


Suppose you don't have a hammer, but just hammer at things with bare hands. Then you find some primitive rock - artificial general hammering! With that you can over time build some primitive hammer - now we're talking superhuman general hammering. With that you can then build a better hammer more quickly, and boom, you have recursive self-improvement, and soon you'll take over the world.


Energy is the only limiting factor. If a true AGI were to emerge, it would immediately try to secure energy sources or advances in efficiency.

You cannot beat humans with megawatts!


> diminishing returns

I think this is a hard kick below the belt for anyone trying to develop AGI using current computer science.

Current AIs only really generate - no, regenerate - text based on their training data. They are only as smart as the data available to them. Even when an AI "thinks", it's still only processing existing data rather than reaching a genuinely new conclusion. It's the best text processor ever created - but it's still just a text processor at its core. And that won't change without more hard computer science being done by humans.

So yeah, I think we're starting to hit the upper limits of what we can do with Transformers technology. I'd be very surprised if someone achieved "AGI" with current tech. And, if it did get achieved, I wouldn't consider it "production ready" until it didn't need a nuclear reactor to power it.


Absolutely. All the talk around AGI being some barrier through which unheard of glories can be unlocked sound very much like "perpetual motion machine" talk.


> If AGI is ever achieved, it would open the door to recursive self improvement ...

They are unrelated. All you need is a way for continual improvement without plateauing, and this can start at any level of intelligence. As it did for us; humans were once less intelligent.

Using the flagship to bootstrap the next iteration with synthetic data is standard practice now. This was mentioned in the GPT5 presentation. At the rate things are going I think this will get us to ASI, and it's not going to feel epochal for people who have interacted with existing models, but more of the same. After all, the existing models are already smarter than most humans and most people are taking it in their stride.

The next revolution is going to be embodiment. I hope we have the common sense to stop there, before instilling agency.


> As it did for us; humans were once less intelligent.

Do we know what drove the increases in intelligence? Was it some level of intelligence bootstrapping the next level of intelligence? Or was it other biophysical and environmental effects that shaped increasing intelligence?


BTW, it appears that the Flynn effect might have reversed recently.

US: "A reverse Flynn effect was found for composite ability scores with large US adult sample from 2006 to 2018 and 2011 to 2018. Domain scores of matrix reasoning, letter and number series, verbal reasoning showed evidence of declining scores."

https://www.sciencedirect.com/science/article/pii/S016028962...

https://www.forbes.com/sites/michaeltnietzel/2023/03/23/amer...

Denmark: "The results showed that the estimated mean IQ score increased from a baseline set to 100 (SD: 15) among individuals born in 1940 to 108.9 (SD: 12.2) among individuals born in 1980, since when it has decreased."

https://pubmed.ncbi.nlm.nih.gov/34882746/

https://pubmed.ncbi.nlm.nih.gov/34882746/#&gid=article-figur...


A lot of people correlate it with humans moving from a vegetarian diet to an omnivorous diet.

1. Higher nutrition levels allowed the brain to grow.
2. Hunting required higher levels of strategy and tactics than picking fruit off trees.
3. Not needing to eat continuously (as we did on vegetation) to get what we needed allowed us time to put our efforts into other things.

Now did the diet cause the change, or the change necessitate the change in diet... I don't think we know.


I've read that social pressures were the primary driver. But robots don't have to take the same path. We're doing the hard work for them...

https://www.sciencedirect.com/topics/psychology/social-intel...


Exactly... evolution doesn't select for intelligence. It favors robustness.


That assumes there are no fundamental limits or major barriers to computation. A hundred years ago, at the dawn of flight, one could have said a very similar thing about aircraft performance. And for a time in the 1950s, it looked like aircraft speed was growing exponentially over time. But there haven't been any new airspeed records (at least officially recorded ones) since 1986, because it turns out going Mach 3+ is fairly dangerous and runs into some rather severe materials and propulsion limitations, making it not at all economical.

I would also not be surprised if, in the process of developing something comparable to human intelligence (assuming the extreme computation, energy, and materials issues of packing that much computation and energy into a single system could be overcome), the AI also develops something comparable to human desire and/or mental health issues. There is a non-zero chance we could end up with AI that doesn't want to do what we ask it to do, or doesn't work all the time because it wants to do other things.

You can't just assume exponential growth is a foregone conclusion.


For some reason people presuppose superintelligence into AGI. What if AGI had diminishing returns around human-level intelligence? It would still have to deal with all the same knowledge gaps we have.


Those problems aren't just waiting on smarts/intelligence. Those would require experimentation in the real world. You can't solve chemistry by just thinking about it really hard. You still have to do experiments. A super intelligent machine may be better at coming up with experiments to do than we are, but without the right stuff to do them, it can't 'solve' anything of the like.


> So the AI would be improving itself

Why would the AI want to improve itself? From whence would that self-motivation stem?


At the point where it can even be said to have a self, that battle is mostly won.

I am very far from convinced that we are at or near that point.


This reminded me of a few subplots in Murderbot. (Do yourself a favor and check it out if you haven't - it's a fun, quick read.)

But seriously, one would assume there's a reward system of some sort at play, otherwise why do anything?


Recursive improvement without any physical change may be limited. If any physical change, like more GPUs or a different network configuration, is required to experiment, and then another change to learn from it, that might not be easy. Convincing humans to do it on the AGI's behalf may not be that simple. There might be multiple paths to try, and teams may not agree with each other - especially if the cost of a trial is high.


AI can be trained on some special knowledge of person A and other special knowledge of person B. These two people may never have met before, and therefore they cannot combine their knowledge to get some new knowledge or insight.

AI can do it fine, as it knows both A's and B's knowledge. And that is knowledge creation.


> But I think we're not even on the path to creating AGI.

It seems like the LLM will be a component of an eventual AGI - its voice, so to speak, but not its mind. The mind still requires another innovation or breakthrough we haven't seen yet.


Math... lots and lots of math solutions. For instance, if it could figure out the numerical sign problem, it could quite possibly simulate all of physics.


Well it could also self-improve increasingly slowly.


You are missing the point where synthetic data, deterministic tooling (written by AI), and new discoveries by each model generation feed into the next model. This iteration is the key to going beyond human intelligence.




