AGI is here. 90%+ of white collar work _can_ be done by an LLM; we are simply missing a tested orchestration layer. Speaking broadly about knowledge work, there is almost nothing that a human is better at than Opus 4.6. If you're a typical office worker whose job is done primarily on a computer, and that's all AGI is, then yeah, it's here.
Opus is the very best and I still throw away most of what it produces. If I did not carefully vet its work I would degrade my codebases very quickly.
To accurately measure the value of AI you must include the negative in your sum.
That "simple orchestration layer" (paraphrased) is what I consider the AGI.
But yeah, I suspect LLMs may actually get close enough. "Just" add more reasoning loops and corresponding compute.
It is objectively grotesquely wasteful (a human brain operates on 12 to 25 watts and would vastly outperform something like that), but it would still be cataclysmic.
If we can get AI down to this power requirement then it's over for humans. Just think of how many copies of itself, thinking at the level of the smartest humans, it could run at once. Also think of where all the hardware could hide itself and keep itself powered around the world.
Yeah, but a human brain without the human attached to it is pretty useless. In the US, it averages out to around 2 kW per person for residential energy usage, or 9 kW if you include transportation and other primary energy usage too.
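Taking the thread's own figures at face value (12-25 W for a brain, ~2 kW per-person residential, ~9 kW total), the gap is about two orders of magnitude. A quick sanity check on those ratios:

```python
# Rough ratios using only the numbers quoted above (assumptions, not measurements).
brain_w = 20          # midpoint of the 12-25 W range for a human brain
residential_w = 2_000 # ~2 kW per-person US residential figure from the comment
total_w = 9_000       # ~9 kW including transportation and other primary energy

print(residential_w / brain_w)  # residential footprint is ~100x the brain's draw
print(total_w / brain_w)        # total footprint is ~450x the brain's draw
```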
I suspected it wasn't just battery farms, but more like what you see in less mass-market sci-fi, where the humans are used for more than just batteries... they'd also be storage and processing for the system (and no longer human).
However, at that point I don't see the value of retaining the human form. It's for a story, obviously, but a non-human computational device could still be made out of carbon-based processing units rather than silicon or semiconductors generally.
I ran a quick experiment with Claude and Perplexity, both free versions. I input some retirement info (portfolio balances etc.), my age, my desired retirement age, etc. Simple stuff that a financial planner would have no issue with. Perplexity was very, very good on the surface. It rarely made an obvious blunder or error, and was fast. Claude was much slower and, despite me inputting my exact birthdate, kept messing up my age by as much as 18 months. This obviously screws up retirement planning. I also asked some questions about how RMDs would affect my taxes, and asked for some strategies. Perplexity was convinced that I should do a Roth conversion up to the top of the 22% bracket, while Claude thought the tax savings would be minimal.
Mind you, I used the EXACT same prompts. I don't know which model Perplexity was using since the free version has multiple it chooses from (including Claude 3.0).
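For what it's worth, the age bookkeeping Claude fumbled is a few lines of deterministic code. A minimal sketch (the dates below are made up for illustration):

```python
from datetime import date

def exact_age(birthdate: date, today: date) -> int:
    # Whole years elapsed; subtract one if this year's birthday hasn't happened yet.
    before_birthday = (today.month, today.day) < (birthdate.month, birthdate.day)
    return today.year - birthdate.year - before_birthday

# Hypothetical example: someone born 1970-06-15 is 55 on 2026-02-01.
print(exact_age(date(1970, 6, 15), date(2026, 2, 1)))
```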
AGI is when it can do all intellectual work that can be done by humans. It can improve its own intelligence and create a feedback loop because it is as smart as the humans who created it.
No, that is ASI. No human can do all intellectual work themselves. You have millions of different human models based on roughly the same architecture to do that.
When you have a single model that can do all you require, you are looking at something that can run billions of copies of itself and cause an intelligence explosion or an apocalypse.
"Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks."
This is a statement that I've always found circular and poorly defined, for the other reasons I've listed. Any technology that even gets close isn't AGI, like I said; it's ASI, because of duplication and time to train.
It is also a line of thinking that will bite us in the ass if humans aren't as general of thinkers as we make ourselves out to be.
This has always been my personal definition of AGI. But the market and industry doesn't agree. So I've backed off on that and have more or less settled on "can do most of the knowledge work that a human can do"
Why the super-high bar? What's unsatisfying is this: aren't the 'dumbest' humans still a general intelligence, one we're already nearly past, depending on how you squint and measure?
It feels like an arbitrary bar, perhaps meant to make sure we aren't putting AIs above humans, even though they are most certainly superhuman on a rapidly growing number of tasks.
API Opus 4.6 will tell you it's still 2025, admit it's wrong, then revert back to being convinced it's 2025 as it nears its context limit.
I'll go so far as to say LLM agents are AGI-lite but saying we "just need the orchestration layer" is like saying ok we have a couple neurons, now we just need the rest of the human.
> there is almost nothing that a human is better at than Opus 4.6.
Lolwut. I keep having to correct Claude at trivial code organization tasks. The code it writes is correct; it’s just ham-fisted and violates DRY in unholy ways.
I’m very pro AI coding and use it all day long, but I also wouldn’t say “the code it writes is correct”. It will produce all kinds of bugs, vulnerabilities, performance problems, memory leaks, etc unless carefully guided.
The market has clearly passed it by. I was a huge Heroku fan. It even inspired my first startup in 2014 (basically a healthcare tech version of Heroku). At the time, I thought it was the future, and found messing around in AWS, etc., too time-consuming and unnecessary. That was when Rails was all the rage.
1. OpenAI bet largely on consumer. Consumers have mostly rejected AI. And in a lot of cases even hate it (can't go on TikTok or Reddit without people calling something slop, or hating on AI generated content). Anthropic on the other hand went all in on B2B and coding. That seems to be the much better market to be in.
We’ve seen many times that platforms can be popular and widely disliked at the same time. Facebook is a clear example.
The difference there is that it became hated after it was established and financially successful. If you need to turn free visitors into paying customers, that general mood of "AI is bad and going to make me lose my job/fuck up society" is yet another hurdle OpenAI will have to overcome.
Yeah, every single big website is totally free. People have complex emotions toward Facebook, Instagram and TikTok, but they don't have to pull out their wallet. That's a bridge too far for many people.
It’s incorrect to say that consumers have rejected AI.
The strategy here is more valid in my opinion. The value in AI is much more legible when the consumer uses it directly from their chat UI than whatever enterprises can come up with.
I can suggest many ways that consumers can use it directly from a chat window. The value from enterprise use is actually not that clear. I can see coding, but that's about it. Can you tell me ways in which enterprises can use AI that aren't just providing their employees with ChatGPT access?
Usually when I hear about people using ChatGPT, they're just using it as a search engine that delivers summarized results. The average person wouldn't use email if they had to pay for it; good luck making money off all of those visitors without just becoming another ad-tech company competing with the other ad-tech companies.
Was the golden boy for a while? What shifted? I don't even remember what he did "first" to get the status. Is it maybe just a case of familiarity breeding contempt?
It is starting to become clear to more and more people that Sam is a dyed in the wool True Believer in AGI. While it's obvious in hindsight that OpenAI would never have gotten anywhere if he wasn't, seeing it so starkly is really rubbing a lot of people the wrong way.
Well, in the world where AGI is created and it goes suboptimally, everybody gets turned into computronium and goes extinct, which is a prospect some are miffed about. And, in the world where it goes well, no decision of any consequence is made by a human being ever again, since the computer has planned every significant life event since before their birth. Free will in a very literal sense will have been erased. Sam being a true believer means he is not going to stop working until one of these worlds comes true. People who understand the stakes are understandably irked by him.
He is a pretty interesting case. According to the book "Empire of AI" about OpenAI, he lies constantly, even about things that are too trivial to matter. So it may be part of some compulsive behavior.
And when two people want different things from him, he "resolves" the conflict by agreeing with each of them separately, and then each assumes they got what they wanted, until they talk to the other person and find out that nothing was resolved.
Really not a person who is qualified to run a company, except the constant lying is good for fundraising and PR.
It's sort of two books combined into one: The first one is the story of OpenAI from the beginning, with all the drama explained with quotes from inside sources. This part was informative and interesting. It includes some details about Elon being convinced that Demis Hassabis is going to create an evil super-intelligence that will destroy humanity, because he once worked on a video game with an evil supervillain. I guess his brain was cooked much earlier than we thought.
The second one is a bunch of SJW hand-wringing about things that are only tangentially related, like indigenous Bolivians being oppressed by Spanish Conquistadors centuries ago. That part I don't care for as much.
It's not a case of compulsion; society calls them sociopaths. Which includes power struggles, manipulation, and psychological abuse of the people around them.
For example: Sam Altman and OpenAI hoarding 40% of the RAM supply as unprocessed wafers stored in warehouses, bought with magical bubble-investor money raised against GPUs that don't exist yet and that they won't be able to install because there isn't enough electricity to feed such botched tech, in data centers that are still to be built, with the intention of choking the competition's supply, and everyone else on the planet in the process, for at least two years.
Yep the various -path adjectives get overused but in this case he's the real deal, something is really really off about him.
You can see it when he talks, he's clearly trying (very unconvincingly) to emulate normal human emotions like concern and empathy. He doesn't feel them.
People like that are capable of great evil and there's a part of our lizard brains that can sense it
Indeed. Sama seems to be incredibly delusional. OAI going bust is going to really damage his well-being, irrespective of his financial wealth. Brother really thought he was going to take over the world at one point.
I don't and I see Sam Altman as a greater fraud than that (loathsome) individual. And I don't think Sam gets through the coming bubble pop without being widely exposed (and likely prosecuted) as a fraudster.
The CEO just has to have followership: the people who work there have to think that this is a good person to follow. They don't even have to "like" him.
HN is such a bubble. ChatGPT is wildly successful, and about to be an order of magnitude more so, once they add ads. And I have never heard a non-technical person mention Altman. I highly doubt they have any idea who he is, or care. They’re all still using ChatGPT.
You have to give credit to Sam, he’s charismatic enough to the right people to climb man made corporate structures. He was also smart enough to be at the right place at the right time to enrich himself (Silicon Valley). He seems to be pretty good at cutting deals. Unfortunately all of the above seems to be at odds with having any sort of moral core.
He and his personality caused people like Ilya to leave. At that point the failure risk of OAI jumped tremendously. The reality he will have to face is that he has caused OAI's demise.
Perhaps he's OK with that as long as OAI goes down with him. I would expect nothing less from him.
All this drama is mostly irrelevant outside a very narrow and very online community.
The demise of OpenAI is rooted in the bad product market fit, since many people like using ChatGPT for free, but fewer are ready to pay for it. And that’s pretty much all there is to it. OpenAI bet on consumers, made a slopstagram that unsurprisingly didn’t revolutionise content, and doesn’t sell as many licenses as they would like.
I actually think Sam is “better” than say Elon or Dario because he seems like a typical SF/SV tech bro. You probably know the type (not talking about some 600k TC fang worker, I mean entrepreneurs).
He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling. I don’t know him personally but he comes across like an average person if that makes sense (in this environment that is).
I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months. It’s hard for me to trust a megalomaniac or a total nerd. So Sam is kinda in the middle there.
I hope OpenAI continues to dominate even if the margins of winning tighten.
It’s kind of sad. I can’t believe I used to like him back in the iron man days. Back then I thought he was cool for the various ideas and projects he was working on. I still think many of those are great but he as a person let me down.
Back then he had a PR firm working for him, getting him cameos and good press. But in 2020 he fired them, deciding that his own "radically awesome" personality didn't need any filtering.
Personally I don't think Elon is the worst billionaire, he's just the one dumb enough to not have any PR (since 2020). They're all pretty reprehensible creatures.
Any number of past mega-rich were probably equally nuts and out of touch and reprehensible but they just didn't let people find out. Then Twitter enabled an unfiltered mass-media broadcast of anyone's personal insanity, and certain public figures got addicted and exposed.
There will always be enough people willing to suck up to money that they'll have all the yes-men they need to rationalize it as "it's EVERYONE ELSE who's wrong!"
The watershed moment for me was when he pretended to be a top-tier gamer in Path of Exile. Anyone in the know saw right through it, and honestly it makes me wonder whether we only spotted this behavior because it's "our turf," while he and people like him operate this way in absolutely everything they do.
Props to him for letting people mute him on his own platform. The issue with Sam and OpenAI is that their bias on any controversial topic can't be switched off.
So? I bet you think you're clever. You're using platforms daily that are run by insane people. Don't forget that the internet itself was a military invention.
Not extreme? Have you seen his interviews? I guess his wording and delivery are not extreme, but if you really listen to what he's saying, it's kinda nuts.
I understand what GP is saying in the sense that, yes, on an objective scale, what Sam is saying is absolutely and completely nuts... but on a relative scale he's just hyping his startup. Relative to the scale he's at, it’s no worse than the average support tool startup founder claiming they will defeat Salesforce, for example.
He's definitely not. If Altman is a "typical" SF/SV tech bro, then that's an indication the valley has turned full d-bag. Altman's past is gross. So, if he's the norm, then I will vehemently avoid any dollars of mine going to OAI. I paid for an account for a while, but just like with Musk, I lose nothing by actively avoiding his Ponzi scheme of a company.
Altman is a consummate liar and manipulator with no moral scruples. I think this LLM business is ethically compromised from the start, but Dario is easily the least worst of the three.
Your argument is guilt by association. Association with something that isn't morally wrong, it's just a way to try to spend money on charity in an effective way? You can take a lot of ideas too far and end up with a bad result of course.
Demis is the reason Google is afloat with a good shot at winning the whole race. The issue currently is he isn’t willing to become the alphabet CEO. IMHO he’ll need to for the final legs.
> I actually think Sam is “better” than say Elon or even Dario because he seems like a typical SF/SV tech bro.
If you nail the bar to the floor, then sure, you can pass over it.
> He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling.
I don't know what your definition of extreme is, but by mine he's pretty extreme.
> I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months.
All of them suffer from thinking their money makes them somehow better.
> I hope OpenAI continues to dominate even if the margins of winning tighten.
I couldn't care less. I'm on the whole impressed with AI, less than happy about all of the slop and the societal problems it brings, and wish it had been brought into a more robust world, because I'm not convinced the current one needed another issue of that magnitude to deal with.
> All of them suffer from thinking their money makes them somehow better.
Let's assume they think they're better than others.
What makes you think that they think it's because of their money, as opposed to, say, because of their success at growing their products and businesses to the top of their field?
Do they talk about money that much? 99.99% of the people I see talking about money (especially other people's money and what they should be doing with it) are non-billionaires.
That’s OK, but AI is useful in particular use cases for many people. I use it a lot and I prefer the Codex 5.2 extra-high reasoning model. The AI slop and dumb shit on IG/YT is the lowest common denominator of humanity, though. It's always been there and always will be there, annoying af. Before AI slop we had brain rot made by humans.
I think over time it (LLM based) will become like an augmenter, not something like what they’re selling as some doomsday thing. It can help people be more efficient at their jobs by quickly learning something new or helping do some tasks.
I find it makes me a lot more productive because I can have it follow my architecture and other docs to pump out changes across 10 files that I can then review. In the old way, it would have taken me quite a while longer to just draft those 10 files (I work on a fairly complex system), and I had some crazy code gen scripts and shit I’d built over the years. So I’d say it gives me about 50% more efficiency which I think is good.
Of course, everyone’s mileage may vary. Kinda reminds me of when everyone was shitting on GUIs, or scripting languages or opinionated frameworks. Except over time those things made productivity increase and led to a lot more solutions. We can nitpick but I think the broader positive implication remains.
It's very hard to see downsides to something like GUIs, scripting languages, or opinionated frameworks compared to a broad, easily weaponized tool like generative AI.
When fortune 500, 100 and 50 organizations are buying AI coding tools at scale (I know from personal exp), then I would say you're late. So yes. Late stage adoption for this wave.
I run a company that services 1,000+ clients on Slack, another 300+ on Teams, and fewer than 100 on Email/Gchat.
I wouldn't wish Teams on my worst enemy, so in that regard, I love Slack
The thing I struggle with the most is how I'd move all of our core functionality from Slack. A lot of the people/teams that build these "Slack killers" I don't think have ever run Slack at scale
How are you going to replace the 30+ in-house apps I've built that automate 50+ workflows?
How are you going to replace the 100+ workflows I use with 1,000+ clients when they have to submit a ticket, or questionnaire, or a security event?
How are you going to replace the 100+ partner channels I have where we send out automated messages about specials and discounts we're running?
What about the 500+ other apps I run that integrate with our systems? Are they going to support your new platform?
Do you have retention settings? DLP? How granular can I go on permissions? What about picking up events via the API so I can train people in real time on what not to do in public channels?
I have no affinity or personal ties to Slack. But if you're going to position yourself as a Slack competitor you have to actually do what Slack does
Feels like the more important question is how are you going to do all these things when Slack cuts you off, or there is some new Slack policy that prevents it, or they increase their pricing by 1000%
Haven’t you basically built your entire business on this singular proprietary platform that you have almost no control over?
> Feels like the more important question is how are you going to do all these things when Slack cuts you off
I pay Slack $50k/year. They have no reason to shut me off.
> or there is some new Slack policy that prevents it
Prevents what exactly? The new API pricing they introduced doesn't apply to internal apps. I suppose they could apply it to internal apps. We'd have to figure out a path around it
> or they increase their pricing by 1000%
1000% increase in pricing seems incredibly unlikely. That would not only disrupt thousands of companies but would likely kill Slack entirely
---
> Haven’t you basically built your entire business on this singular proprietary platform they you have almost no control over?
Not really. We service clients through Slack. Could we switch? Sure. Would it be a pain? Yeah. Would it be costly? Yeah.
But there's also no reason to switch. And if a new platform comes out (like the one this thread is about), I would expect it to have the features to compete with Slack if it's positioning itself as a Slack competitor.
Every third party you contract with can pull the rug from under you this way, even this new startup with its 'forever free tier'.
You plan for it as a potential risk just like anything else and, if the time comes, you can work on migrating out. Companies will off board third parties all the time if the financials don't add up.
$50k a year? Those are rookie numbers. You're actually fine, as a small fish going belly up isn't the end of the world. You can start a new business. For some big tech companies this is potentially near existential. I would know.
Ok, but what stops the same from happening with any other solution? There are two things that would "fix" it:
* A fully open and interoperable protocol: We had it (XMPP). It was flawed, but at one beautiful moment in time it worked, and using the same protocol I could contact both Google and Facebook contacts. Then the companies decided "no, we would prefer to keep walled gardens rather than make it easy to move to the competition."
* A fully open-source (no open-core nonsense; the latest Mattermost rug-pull on its OSS users being one example why) chat platform with corporate backing and a SaaS option. There is Matrix, but AFAIK it is lacking feature-wise, though I haven't used it much. It would need a plugin app store so it's possible to make and even sell integrations with other systems.
The second option seems more viable, but it takes a lot of effort to make something as good as Slack or Discord.
I didn’t read the full site but it seems they’re not really going for those users?
Anyone who has dozens of custom workflows and apps in their Slack is probably spending tens of thousands of dollars on Slack. It is probably vital to their business.
This seems like it’s for small teams (like 3-5 people even, collaborating daily) who get rekt really fast before they’re forced to spend $60 a month.
I have a side gig with 3 other people; we use Slack chat for daily comms and webhooks. We meet weekly over Discord because huddles are a paid feature. Are you planning on implementing VoIP?
After 4 years we're almost at the point where I feel it's worth spending $ for different types of convenient features.
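For context, the webhook side of that setup is pretty minimal: a Slack incoming webhook is just a JSON POST with a "text" field. A sketch in Python using only the standard library (the webhook URL below is a placeholder, not a real endpoint):

```python
import json
import urllib.request

def build_payload(text: str) -> bytes:
    # Slack incoming webhooks accept a JSON body with a "text" field.
    return json.dumps({"text": text}).encode("utf-8")

def post_to_slack(webhook_url: str, text: str) -> int:
    # POST the message; Slack replies with HTTP 200 on success.
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (placeholder URL, would need a real webhook created in Slack):
# post_to_slack("https://hooks.slack.com/services/T000/B000/XXXX",
#               "Weekly standup starts in 10 minutes")
```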
Like a lot of other apps that go viral, it's a novelty that wears off in a few weeks (if that). There's nothing that keeps people on the platform. How many goofy AI generated videos does someone actually want to make? 10? 20? After you've done that, what else is entertaining or amusing? Any _really_ good Sora videos are posted on TikTok or Reels anyway. There's nothing tying you to the platform.
> Salaries for developers are well under $150k in most of the United States, for example, and that is for senior engineers
As someone who has hired hundreds of SWEs over the last 12 years from 20+ states, I have to disagree.
$150k is on the lower end of base for a Sr. SWE, and well below the total comp someone would expect. You can make the argument that $150k base is reasonable, but even Sr. SWEs in the middle of the country are looking for closer to $180k-$200k OTE.