
OpenAI has a clear but strange definition of AGI in its contract with MSFT: it should produce $100B in economic impact.


Does it count if the impact is negative?


Thanks for pointing that out, I missed that. Very curious how they'll measure that. Given they're in the double-digit billions of revenue I'd assume they can reason they'll be there very soon.


That's not their definition of AGI. It's "a highly autonomous system that can outperform humans at most economically valuable work."


Your half of the definition is implied, but uninteresting. They would not see the $100B economic impact without your definition being realized. What is curious is that it is also not considered AGI without meeting the value marker. "A highly autonomous system that can outperform humans at most economically valuable work" alone is not sufficient.


>Your half of the definition is implied, but uninteresting.

How is it uninteresting? Open AI had revenue of $12B last year without monetizing its literally hundreds of millions of free users in any way whatsoever (not even ads).

Microsoft's cloud revenue has exploded in the last few years off the back of AI model services. Let's not even get into the other players.

$100B in economic impact is more than achievable with the technology we have today, right now. That half is the interesting part.


> Open AI had revenue of $12B last year

And it could have been $1T for all anyone cares. The impact was delivered by humans. This is about impact delivered by AGI.


That makes no sense. Money generated by direct usage is economic impact by the model.

If you use GPT-N substantially in your work, then saying that impact rests solely on you is nonsensical.


> Money generated by direct usage is economic impact by the model.

But not at the "hand" of AGI. Perhaps you forgot to read your very own definition? Notably the "autonomous" part.

When AGI is set free and starts up "Closed I", generating $12B in economic value without humans steering the wheel, we will be (well, I will be, at least!) thoroughly impressed. But Microsoft won't be. They won't consider it AGI until it does $100B.

> If you use GPT-N substantially in your work, then saying that impact rests solely on you is nonsensical.

And if you use a hammer substantially in your work to generate $100B in value, a hammer is AGI according to you? You can hold that idea, but that's not what anyone else is talking about. The primary indicator of AGI, as you even said yourself, is autonomy.


Maybe you missed the part where I explicitly said that wasn't their definition?

“A highly autonomous system that outperforms humans at most economically valuable work.” is what's in their charter.

$100B in profits is a separate agreement with Microsoft that makes no mention of autonomy.

>And if you use a hammer substantially in your work to generate $100B in value, a hammer is AGI according to you? You can hold that idea, but that's not what anyone else is talking about. The primary indicator of AGI, as you even said yourself, is autonomy.

The primary indicator of AGI is whatever you want it to be. The words themselves make no promise of autonomy, simply an intelligence general in nature. We are simply discussing Open AI's definitions.


> $100B in profits is a separate agreement with Microsoft that makes no mention of autonomy.

Again, autonomy is implied when talking about AGI. OpenAI selling tools like GPT or dishwashers, even if they were to provide the $100B in economic impact, would not satisfy the agreement. It is specifically about AGI, and there should be no confusion about what AGI is here as you helpfully defined it for us.


AGI - Artificial General Intelligence

Where in that acronym is human-level autonomy implied?

Norvig would tell you we already have AGI.

https://www.noemamag.com/artificial-general-intelligence-is-...

If Open AI didn't mention autonomy in their Microsoft agreement, then it's not part of the equation, and no court will take "it was obvious" as an argument.


> If Open AI didn't mention autonomy in their Microsoft agreement

What they specified was that their systems have to generate the profit. That requires autonomy. It needn't be explicitly mentioned. It cannot be any other way.

A human swinging a hammer would see the profit attributed to the human, not the hammer. A human "swinging" GPT would see the profit attributed to the human, not GPT. A windmill operates autonomously and thus any profit it generates would be attributed to it. However, a windmill doesn't have broad ability to outperform humans across many tasks.

Coding agents are approaching having the autonomy of a windmill. I expect this is where we will start to see the first semblance of AGI, where you will be able to say "My problem is X" and come back in a few days and have a program written to solve it. However, that is still just a "windmill". It doesn't generalize to a wide range of tasks as your definition and most other definitions of AGI expect.

You are right that details on the agreement are slim enough that we cannot say for sure whether Microsoft would accept such a coding agent as AGI, profits notwithstanding. However, even if they would, it is unlikely a coding agent alone could achieve $100B in profits on any kind of human timescale. The value of software will be effectively nothing when there is no effort involved in creating it. And for that reason, we can say with near certainty that the rest of your definition will also be required...

You gave it for a reason.


Who knew we've had AGI for something like three hundred years? (Or, only had NGI for so long?)



