ryanSrich's comments

AGI is here. 90%+ of white collar work _can_ be done by an LLM; we are simply missing a tested orchestration layer. Speaking broadly about knowledge work, there is almost nothing a human is better at than Opus 4.6, especially the work of a typical office worker whose job is done primarily on a computer. If that's all AGI is, then yeah, it's here.


Opus is the very best and I still throw away most of what it produces. If I did not carefully vet its work I would degrade my code bases so quickly. To accurately measure the value of AI you must include the negative in your sum.


I would and have done the same with Jr. devs. It's not an argument against it being AGI.


I'm countering the basis of your original claim; "there is almost nothing that a human is better at than Opus 4.6". This is simply not true.


That "simple orchestration layer" (paraphrased) is what I consider the AGI.

But yeah, I suspect LLMs may actually get close enough. "Just" add more reasoning loops and corresponding compute.

It is objectively grotesquely wasteful (a human brain operates on 12 to 25 watts and would vastly outperform something like that), but it would still be cataclysmic.

/layperson, in case that wasn't obvious


If we can get AI down to this power requirement then it's over for humans. Just think of how many copies of itself thinking at the levels of the smartest humans it could run at once. Also where all the hardware could hide itself and keep itself powered around the world.


> a human brain operates on 12 to 25 watts

Yeah, but a human brain without the human attached to it is pretty useless. In the US, it averages out to around 2 kW per person for residential energy usage, or 9 kW if you include transportation and other primary energy usage too.


Fair.

Maybe The Matrix (1999), with its human battery farms, was on to something. :)


I suspected it wasn't just battery farms, but more like what you see in less mass market scifi where the humans are used for more than just batteries... they'd also be some storage and processing for the system (and no longer humans).

However at that point I don't see the value of retaining the human form. It's for a story obviously, but a not-human computational device can still be made out of carbon processing units rather than silicon or semiconductors generally.


I think "tested" is the hard part. The simple part seems to be there already, loops, crons, and computer use is getting pretty close.
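On the "tested" part: the loop itself really is the simple piece; the hard piece is verification. A minimal sketch of a verify-retry orchestration loop, where `call_model` and `looks_valid` are hypothetical stubs standing in for a real LLM API call and a real domain-specific check:

```python
# Sketch of a verify-retry orchestration loop. Both helper functions
# are stubs: a real system would call an LLM API and run real checks
# (unit tests, schema validation, a second-model review, etc.).

def call_model(prompt: str) -> str:
    # Stub standing in for a real LLM API call.
    return f"draft answer for: {prompt}"

def looks_valid(answer: str) -> bool:
    # Stub verification; this is the part that is genuinely hard to build.
    return "draft answer" in answer

def orchestrate(prompt: str, max_attempts: int = 3) -> str:
    """Retry until the output passes verification, or give up."""
    for attempt in range(1, max_attempts + 1):
        answer = call_model(f"(attempt {attempt}) {prompt}")
        if looks_valid(answer):
            return answer
    raise RuntimeError("no valid answer after retries")

print(orchestrate("summarize the quarterly report"))
```

The loop, the cron, and the retry budget are a few lines each; everything interesting lives in `looks_valid`.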


I ran a quick experiment with Claude and Perplexity, both free versions. I input some retirement info (portfolio balances, etc.), my age, my desired retirement age, and so on. Simple stuff that a financial planner would have no issue with. Perplexity was very, very good on the surface. It rarely made an obvious blunder or error, and was fast. Claude was much slower and, despite my inputting my exact birthdate, kept messing up my age by as much as 18 months. This obviously screws up retirement planning. I also asked some questions about how RMDs would affect my taxes, and asked for some strategies. Perplexity was convinced that I should do a Roth conversion to fill up to the top of the 22% bracket, while Claude thought that the tax savings would be minimal.

Mind you, I used the EXACT same prompts. I don't know which model Perplexity was using since the free version has multiple it chooses from (including Claude 3.0).
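For what it's worth, the arithmetic behind "convert up to the top of the 22% bracket" is just the bracket ceiling minus current taxable income. A sketch, where the ceiling is a made-up illustrative figure, not an actual IRS number:

```python
def roth_conversion_headroom(taxable_income: float,
                             bracket_ceiling: float = 100_000.0) -> float:
    """Room left to convert before spilling into the next bracket.

    `bracket_ceiling` is an illustrative placeholder, not a real IRS
    figure; look up the current top of the 22% bracket for your filing
    status before trusting any number like this.
    """
    return max(0.0, bracket_ceiling - taxable_income)

print(roth_conversion_headroom(80_000))   # headroom left to convert
print(roth_conversion_headroom(120_000))  # already past the ceiling
```

Getting this right requires knowing the person's exact age and income, which is exactly where Claude stumbled in the experiment above.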


AGI is when it can do all intellectual work that can be done by humans. It can improve its own intelligence and create a feedback loop because it is as smart as the humans who created it.


No, that is ASI. No human can do all intellectual work themselves. You have millions of different human models based on roughly the same architecture to do that.

When you have a single model that can do all you require, you are looking at something that can run billions of copies of itself and cause an intelligence explosion or an apocalypse.


"Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks."


This is a statement that I've always found to be circular and poorly defined for the other reasons I've listed. Any technology that even gets close isn't AGI like I said, it's ASI for the reasons of duplication and time to train.

It is also a line of thinking that will bite us in the ass if humans aren't as general of thinkers as we make ourselves out to be.


This has always been my personal definition of AGI. But the market and industry don't agree. So I've backed off on that and have more or less settled on "can do most of the knowledge work that a human can do"


Why the super-high bar? What's unsatisfying is this: aren't the 'dumbest' humans still a general intelligence, one we've arguably nearly surpassed, depending how you squint and measure?

It feels like an arbitrary bar to perhaps make sure we aren't putting AIs over humans, which they are most certainly in the superhuman category on a rapidly growing number of tasks.


API Opus 4.6 will tell you it's still 2025, admit it's wrong, then revert to being convinced it's 2025 as it nears its context limit.

I'll go so far as to say LLM agents are AGI-lite but saying we "just need the orchestration layer" is like saying ok we have a couple neurons, now we just need the rest of the human.


Giving opus a memory or real-time access to the current year is trivial. I don't see how that's an argument against it being AGI.
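To be concrete about "trivial": a harness can simply prepend the current date to the system prompt before each call. A sketch (the prompt wording is my own, not any vendor's API):

```python
from datetime import date

def with_current_date(system_prompt: str) -> str:
    """Prepend today's date so the model can't insist it's a past year.

    The wording is illustrative; any real harness would do the
    equivalent injection before each API call.
    """
    return f"Today's date is {date.today().isoformat()}.\n{system_prompt}"

print(with_current_date("You are a helpful assistant."))
```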


> there is almost nothing that a human is better at than Opus 4.6.

Lolwut. I keep having to correct Claude at trivial code organization tasks. The code it writes is correct; it’s just ham-fisted and violates DRY in unholy ways.

And I’m not even a great coder…


I’m very pro AI coding and use it all day long, but I also wouldn’t say “the code it writes is correct”. It will produce all kinds of bugs, vulnerabilities, performance problems, memory leaks, etc unless carefully guided.


So it's even more human than we thought


This is entirely solvable with skills, memory, context, and further prompting. All of which can be done in a way that's reliable and repeatable.

You wouldn't expect a Jr. dev to be the best at keeping things dry either.


> there is almost nothing that a human is better at than Opus 4.6.

> You wouldn't expect a Jr. dev to be the best at keeping things dry either.

So a junior dev is better than almost all humans at everything?


Yea the “you’re holding it wrong” argument. Never takes long to pop up.

> You wouldn't expect a Jr. dev to be the best at keeping things dry either.

Did you read the comment I replied to? The premise was

> there is almost nothing that a human is better at than Opus 4.6.

So which is it? Is Claude the junior dev “better at” most things than a human or not? Sorry, you can’t play your argument both ways.


> violates DRY in unholy ways

Well said


Can LLMs manipulate spreadsheets?



The market has clearly passed it by. I was a huge Heroku fan. It even inspired my first startup in 2014 (basically a healthcare tech version of Heroku). At the time, I thought it was the future, and found messing around in AWS, etc., too time-consuming and unnecessary. That was when Rails was all the rage.


> At the time, I thought it was the future, and found messing around in AWS, etc., too time-consuming and unnecessary.

Sounds like you were right on both counts?


That's the idea yeah. There are other people actively working on this. You can follow vx-underground on twitter. They're tracking it.


Seems like a job for an LLM


Quite the opposite if you want to trust the results


I think there are two things that happened

1. OpenAI bet largely on consumer. Consumers have mostly rejected AI. And in a lot of cases even hate it (can't go on TikTok or Reddit without people calling something slop, or hating on AI generated content). Anthropic on the other hand went all in on B2B and coding. That seems to be the much better market to be in.

2. Sam Altman is profoundly unlikable.


> Consumers have mostly rejected AI.

People like to complain about things, but consumers are heavily using AI.

ChatGPT.com is now up to the 4th most visited website in the world: https://explodingtopics.com/blog/chatgpt-users


We’ve seen many times that platforms can be popular and widely disliked at the same time. Facebook is a clear example.

The difference there is it became hated after it was established and financially successful. If you need to turn free visitors into paying customers, that general mood of “AI is bad and going to make me lose my job/fuck up society” is yet another hurdle OpenAI will have to overcome.


Yeah, every single big website is totally free. People have complex emotions toward Facebook, Instagram and TikTok, but they don't have to pull out their wallet. That's a bridge too far for many people.


Are they paying though? Reddit was also popular for a long time and didn't make much money.

My point was more that it seems this wave of AI is more profitable if you're in B2B vs. B2C.


It’s incorrect to say that consumers have rejected AI.

The strategy here is more valid in my opinion. The value in AI is much more legible when the consumer uses it directly from their chat UI than whatever enterprises can come up with.

I can suggest many ways that consumers can use it directly from a chat window. Value from enterprise use is actually not that clear. I can see coding, but that’s about it. Can you tell me ways in which enterprises can use AI that aren't just providing their employees with ChatGPT access?


Usually when I hear about people using ChatGPT, they are just using it as a search engine that delivers summarized results. The average person wouldn't use email if they had to pay for it; good luck making money off all of those visitors without just becoming another ad tech company competing with the other ad tech companies.


#2 cannot be understated


He was the golden boy for a while, wasn't he? What shifted? I don't even remember what he did "first" to get the status. Is it maybe just a case of familiarity breeding contempt?


It is starting to become clear to more and more people that Sam is a dyed in the wool True Believer in AGI. While it's obvious in hindsight that OpenAI would never have gotten anywhere if he wasn't, seeing it so starkly is really rubbing a lot of people the wrong way.


Advertising Generated Income?



Damn this is smart. I like it


Someone else said it first here


it's even worse than that and i hope people recognize that it's not that he's a True Believer (though the TBs are often hilarious)

it's that he has no ethics to speak of at all. it's not that he's out of touch, it's that he simply does not care.


Why would him believing in AGI make people dislike him?

He is clearly disliked by a lot of tech community, I don't see his AGI belief as a big part of that.


Well, in the world where AGI is created and it goes suboptimally, everybody gets turned into computronium and goes extinct, which is a prospect some are miffed about. And, in the world where it goes well, no decision of any consequence is made by a human being ever again, since the computer has planned every significant life event since before their birth. Free will in a very literal sense will have been erased. Sam being a true believer means he is not going to stop working until one of these worlds comes true. People who understand the stakes are understandably irked by him.


Well, he made the mistake many billionaires make: he opened his mouth with his own thoughts, instead of just reading what the PR department told him to read


All the manipulation and lying that got him fired.


He is a pretty interesting case. According to the book "Empire of AI" about OpenAI, he lies constantly, even about things that are too trivial to matter. So it may be part of some compulsive behavior.

And when two people want different things from him, he "resolves" the conflict by agreeing with each of them separately, and then each assumes they got what they wanted, until they talk to the other person and find out that nothing was resolved.

Really not a person who is qualified to run a company, except the constant lying is good for fundraising and PR.


He was once a big pin in Y Combinator (I think kind of ran it?)... Paul Graham thought he was great for YC.

Interesting that he's got as far as he has with this issue. I don't think you can run a company effectively if you don't deal in truth.

Some of his videos have seemed quite bizarre as well, quite sarcastic about concerns people have about AI in general.


> He was once a big pin in Y Combinator (I think kind of ran it?)... Paul Graham thought he was great for YC.

And today it seems everyone at YC hates him but pretends not to


Saw Empire of AI in a bookshop recently but held off buying as I wasn’t sure if it was going to be surface level. You’d recommend?


Understandable worry, but it's not surface-level at all. Karen Hao is a great journalist. Highly recommend.


It's sort of two books combined into one: The first one is the story of OpenAI from the beginning, with all the drama explained with quotes from inside sources. This part was informative and interesting. It includes some details about Elon being convinced that Demis Hassabis is going to create an evil super-intelligence that will destroy humanity, because he once worked on a video game with an evil supervillain. I guess his brain was cooked much earlier than we thought.

The second one is a bunch of SJW hand-wringing about things that are only tangentially related, like indigenous Bolivians being oppressed by Spanish Conquistadors centuries ago. That part I don't care for as much.


Not a case; society calls them sociopaths. Which includes power struggles, manipulation, and psychological abuse of the people around them.

Example: Sam Altman and OpenAI hoarding 40% of the RAM supply as unprocessed wafers stored in warehouses, bought with magical bubble-investor money, for GPUs that don't exist yet and that they will not be able to install, because there's not enough electricity to feed such botched tech, in data centers that are still to be built, with the intention of squeezing the competition's supply, and all the people of the planet along with it, for at least two years.


> Really not a person who is qualified to run a company, except the constant lying is good for fundraising and PR.

For a brief moment I thought you were talking about Elon there


He is a sociopath. It's ok to say it.


Yep the various -path adjectives get overused but in this case he's the real deal, something is really really off about him.

You can see it when he talks, he's clearly trying (very unconvincingly) to emulate normal human emotions like concern and empathy. He doesn't feel them.

People like that are capable of great evil and there's a part of our lizard brains that can sense it


Sounds like when people are politicking he just takes a “whatever” approach haha. That seems reasonable.


No, that's not what he's doing.


Cringey to watch their interviews.


*Overstated


oops, yup.


Indeed. Sama seems to be incredibly delusional. OAI going bust is going to really damage his well-being, irrespective of his financial wealth. Brother really thought he was going to take over the world at one point.


Scariest part is it probably won't, and he'll be back in five years with something else.


Do you see Sam Bankman-Fried getting reinstated?

I don't and I see Sam Altman as a greater fraud than that (loathsome) individual. And I don't think Sam gets through the coming bubble pop without being widely exposed (and likely prosecuted) as a fraudster.


People lying to everyone lie to themselves the most


Instead of anecdotes about “what you saw on TikTok and Reddit”, it’s really not that hard to look up how many paid users ChatGPT has.

Besides, OpenAI was never going to recoup the billions of dollars based on advertising or $20/month subscriptions


Is CEO likeability a reliable predictor?


I think it depends how visible the CEO is to (potential) customers. In this case, very visible: he is in the media all the time.


They pay to be in the media


good point.

I don't think it is at all

The CEO just has to have followership: the people who work there have to think that this is a good person to follow. They don't even have to "like" him


Ask Tesla about the impact of their CEOs likeability on their sales.


> OpenAI bet largely on consumer

Source on that?

Lots of organizations offer ChatGPT subscriptions, and Microsoft pushes Copilot as hard as it can which uses GPT models.


Those who publicly hate LLMs still use them though, even for the stuff they claim to hate, like writing fanfic.


HN is such a bubble. ChatGPT is wildly successful, and about to be an order of magnitude more so, once they add ads. And I have never heard a non-technical person mention Altman. I highly doubt they have any idea who he is, or care. They’re all still using ChatGPT.


> and about to be an order of magnitude more so, once they add ads.

How do you figure?


You have to give credit to Sam, he’s charismatic enough to the right people to climb man made corporate structures. He was also smart enough to be at the right place at the right time to enrich himself (Silicon Valley). He seems to be pretty good at cutting deals. Unfortunately all of the above seems to be at odds with having any sort of moral core.


Ermmm what?

He and his personality caused people like Ilya to leave. At that point the failure risk of OAI jumped tremendously. The reality he will have to face is that he has caused OAI's demise.

Perhaps he's ok with that as long as OAI goes down with him. Would expect nothing less from him.


All this drama is mostly irrelevant outside a very narrow and very online community.

The demise of OpenAI is rooted in the bad product market fit, since many people like using ChatGPT for free, but fewer are ready to pay for it. And that’s pretty much all there is to it. OpenAI bet on consumers, made a slopstagram that unsurprisingly didn’t revolutionise content, and doesn’t sell as many licenses as they would like.


Imo they'll soon make a lot of money with advertising. Whenever ChatGPT brings you to some website to buy a product, they will get some share.


good luck with that when there's gemini which does it far better


Ilya took a swing at the king and missed. It would have been awkward to hang around after that debacle.


Naive to call Sam Altman unlikeable.


I actually think Sam is “better” than say Elon or Dario because he seems like a typical SF/SV tech bro. You probably know the type (not talking about some 600k TC fang worker, I mean entrepreneurs).

He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling. I don’t know him personally but he comes across like an average person if that makes sense (in this environment that is).

I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months. It’s hard for me to trust a megalomaniac or a total nerd. So Sam is kinda in the middle there.

I hope OpenAI continues to dominate even if the margins of winning tighten.


Elon is one of the most unlikable people on the planet, so I wouldn't consider him much of a bar.


It’s kind of sad. I can’t believe I used to like him back in the iron man days. Back then I thought he was cool for the various ideas and projects he was working on. I still think many of those are great but he as a person let me down.

Now I have him muted on X.


Back then he had a PR firm working for him, getting him cameos and good press. But in 2020 he fired them deciding that his own "radically awesome" personality doesn't need any filtering.

Personally I don't think Elon is the worst billionaire, he's just the one dumb enough to not have any PR (since 2020). They're all pretty reprehensible creatures.


Any number of past mega-rich were probably equally nuts and out of touch and reprehensible but they just didn't let people find out. Then Twitter enabled an unfiltered mass-media broadcast of anyone's personal insanity, and certain public figures got addicted and exposed.

There will always be enough people willing to suck up to money that they'll have all the yes-men they need to rationalize it as "it's EVERYONE ELSE who's wrong!"


The watershed moment for me was when he pretended to be a top tier gamer on Path of Exile. Anyone in the know saw right through it, and honestly makes me wonder if we just spotted this behavior because it's "our turf", but actually he and people like him just operate this way in absolutely everything they do


Yeah, Putin is probably the worst billionaire. Elon might be a close second though, or maybe it's a US politician if they actually are a billionaire.


Peter Thiel, who thinks the Pope or Greta Thunberg might be the antichrist, and that freedom is incompatible with democracy

https://www.nationalmemo.com/peter-thiel-antichrist


I think you did not understand his argument. He said it is a great danger that people might unite behind an antichrist like figure.


Exactly, other billionaires having calmer personality types does not make them less nuts.


> Now I have him muted on X.

Props to him for letting people mute him on his own platform. The issue with Sam and OpenAI is that their bias on any controversial topic can't be switched off.


But you're still on Twitter and calling it X...


So? I bet you think you're clever. You're using platforms daily that are run by insane people. Don't forget that the internet itself was a military invention.


Hah, you beat me to it, serves me right for writing longer comments. Have an upvote ;)


Not extreme? Have you seen his interviews? I guess his wording and delivery are not extreme, but if you really listen to what he's saying, it's kinda nuts.


That Dyson sphere interview should've been a wake up call for the OpenAI faithful.


I understand what GP is saying in the sense that, yes, on an objective scale, what Sam is saying is absolutely and completely nuts... but on a relative scale he's just hyping his startup. Relative to the scale he's at, it’s no worse than the average support tool startup founder claiming they will defeat Salesforce, for example.


Exactly. Thanks for getting it, it is refreshing to encounter people who get it. Good luck with everything!


You too!


He's definitely not. If Altman is a "typical" SF/SV tech bro, then that's an indication the valley has gone full d-bag. Altman's past is gross. So, if he's the norm, then I will vehemently avoid any dollars of mine going to OAI. I paid for an account for a while, but just like with Musk, I lose nothing by actively avoiding his Ponzi scheme of a company.


Altman is a consummate liar and manipulator with no moral scruples. I think this LLM business is ethically compromised from the start, but Dario is easily the least worst of the three.


Dario unsettles me the most; he kinda reminds me of SBF. I wouldn't be surprised if... well, they're all bad, it's hard to stack rank them.


I don't think he's good, but afaik he isn't trying to make everyone psychologically dependent on Claude and releasing sex bots.


He and SBF are both big into effective altruism, and SBF gave Anthropic their seed funding, so yeah, that checks out.


There's nothing wrong with effective altruism -- making money to give it away -- it's SBF.


Of course there is. The whole thing is a cult, designed to pull in suckers.


Your argument is guilt by association. Association with something that isn't morally wrong, it's just a way to try to spend money on charity in an effective way? You can take a lot of ideas too far and end up with a bad result of course.


There’s 4 though, where does Demis fit in the stack rank?


TBH, I hadn't heard of him until now. Looks like he's had a crazy legit professional career. I'd put him at the top for his work at Bullfrog alone.


Demis is the reason Google is afloat with a good shot at winning the whole race. The issue currently is he isn’t willing to become the alphabet CEO. IMHO he’ll need to for the final legs.


I’d hate the job too. It would be interesting to see how Google might evolve with him at the helm, for sure.


Pfft. Dario has been spouting fear-mongering nonsense that never comes true.


> I actually think Sam is “better” than say Elon or even Dario because he seems like a typical SF/SV tech bro.

If you nail the bar to the floor, then sure, you can pass over it.

> He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling.

I don't know what your definition of extreme is, but by mine he's pretty extreme.

> I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months.

All of them suffer from thinking their money makes them somehow better.

> I hope OpenAI continues to dominate even if the margins of winning tighten.

I couldn't care less. On the whole I'm impressed with AI, less than happy about all of the slop and the societal problems it brings, and wish this had arrived in a more robust world, because I'm not convinced the current one needed another issue of that magnitude to deal with.


> All of them suffer from thinking their money makes them somehow better.

Let's assume they think they're better than others.

What makes you think that they think it's because of their money, as opposed to, say, because of their success at growing their products and businesses to the top of their field?


Even if it's success rather than money, you still have survivorship bias to contend with, so it's not really much of a helpful distinction.


Because they wouldn't talk about money as much or try to convert a non-profit into a for profit company.


Do they talk about money that much? 99.99% of the people I see talking about money (especially other people's money and what they should be doing with it) are non-billionaires.


That’s ok, but AI is useful in particular use cases for many people. I use it a lot and I prefer the Codex 5.2 extra high reasoning model. The AI slop and dumb shit on IG/YT is like the LCD of humans though. They’ve always been there and always will be there to be annoying af. Before AI slop we had brain rot made by humans.

I think over time it (LLM based) will become like an augmenter, not something like what they’re selling as some doomsday thing. It can help people be more efficient at their jobs by quickly learning something new or helping do some tasks.

I find it makes me a lot more productive because I can have it follow my architecture and other docs to pump out changes across 10 files that I can then review. In the old way, it would have taken me quite a while longer to just draft those 10 files (I work on a fairly complex system), and I had some crazy code gen scripts and shit I’d built over the years. So I’d say it gives me about 50% more efficiency which I think is good.

Of course, everyone’s mileage may vary. Kinda reminds me of when everyone was shitting on GUIs, or scripting languages or opinionated frameworks. Except over time those things made productivity increase and led to a lot more solutions. We can nitpick but I think the broader positive implication remains.


some people are so determined to be positive about AI that at some point it just comes across like they’re getting paid to be


There are quite a lot of posts like that. Just a bit too eager. Proselytising as if AI is a religion.


Mods tolerate it for some reason I suppose.


I don't think I did that at all, and I call out that sort of bullshit all the time and get downvoted lol (idgaf :P)


Maybe some/many even are? For "AI" companies it's not really a big expense in comparison and they depend hugely on keeping the hype going.


It's very hard to see downsides in something like GUIs, scripting languages, or opinionated frameworks compared to a broad, easily weaponized tool like generative AI.


When fortune 500, 100 and 50 organizations are buying AI coding tools at scale (I know from personal exp), then I would say you're late. So yes. Late stage adoption for this wave.


I run a company that services 1,000+ clients on Slack, another 300+ on Teams, and fewer than 100 on Email/Gchat

I wouldn't wish Teams on my worst enemy, so in that regard, I love Slack

The thing I struggle with the most is how I'd move all of our core functionality off Slack. I don't think a lot of the people/teams that build these "Slack killers" have ever run Slack at scale

How are you going to replace the 30+ in-house apps I've built that automate 50+ workflows?

How are you going to replace the 100+ workflows I use with 1,000+ clients when they have to submit a ticket, or questionnaire, or a security event?

How are you going to replace the 100+ partner channels I have where we send out automated messages about specials and discounts we're running?

What about the 500+ other apps I run that integrate with our systems? Are they going to support your new platform?

Do you have retention settings? DLP? How granular can I go on permissions? What about picking up events via the API so I can train people in real time on what not to do in public channels?

I have no affinity or personal ties to Slack. But if you're going to position yourself as a Slack competitor you have to actually do what Slack does
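For context on the "picking up events via the API" point: any replacement platform would need an equivalent of Slack's signed event deliveries. Slack signs `v0:{timestamp}:{body}` with the app's signing secret and sends the hex digest in the `X-Slack-Signature` header; verifying it looks roughly like this (the secret and payload below are dummy values):

```python
import hashlib
import hmac

def verify_slack_signature(signing_secret: str, timestamp: str,
                           body: str, signature: str) -> bool:
    """Check a request against Slack's v0 request-signing scheme.

    Slack HMAC-SHA256 signs "v0:{timestamp}:{body}" with the app's
    signing secret and sends "v0=<hexdigest>" in X-Slack-Signature.
    """
    basestring = f"v0:{timestamp}:{body}".encode()
    digest = hmac.new(signing_secret.encode(), basestring,
                      hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(f"v0={digest}", signature)

# Dummy round-trip: sign a body ourselves, then verify it.
secret, ts, body = "dummy-secret", "1700000000", '{"type":"event_callback"}'
expected = "v0=" + hmac.new(secret.encode(),
                            f"v0:{ts}:{body}".encode(),
                            hashlib.sha256).hexdigest()
print(verify_slack_signature(secret, ts, body, expected))  # True
```

A competitor has to offer something at least this solid before any of those automated workflows can move.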


Feels like the more important question is how are you going to do all these things when Slack cuts you off, or there is some new Slack policy that prevents it, or they increase their pricing by 1000%

Haven’t you basically built your entire business on this singular proprietary platform that you have almost no control over?


> Feels like the more important question is how are you going to do all these things when Slack cuts you off

I pay Slack $50k/year. They have no reason to shut me off.

> or there is some new Slack policy that prevents it

Prevents what exactly? The new API pricing they introduced doesn't apply to internal apps. I suppose they could apply it to internal apps. We'd have to figure out a path around it

> or they increase their pricing by 1000%

1000% increase in pricing seems incredibly unlikely. That would not only disrupt thousands of companies but would likely kill Slack entirely

---

> Haven’t you basically built your entire business on this singular proprietary platform that you have almost no control over?

Not really. We service clients through Slack. Could we switch? Sure. Would it be a pain? Yeah. Would it be costly? Yeah.

But there's also no reason to switch. And if a new platform comes out (like the one this thread is about), I would expect them to have the features to compete with Slack if they are positioning themselves as a Slack competitor


> I pay Slack $50k/year. They have no reason to shut me off.

They don't have to shut you off - but they've got every reason to raise the price.

If they can bully you onto a $15/user/month 'Business Plus' plan, your 1000 clients would cost you $180,000 a year.


Every third party you contract with can pull the rug from under you this way, even this new startup with its 'forever free tier'.

You plan for it as a potential risk just like anything else and, if the time comes, you can work on migrating out. Companies offboard third parties all the time if the financials don't add up.


> I pay Slack $50k/year. They have no reason to shut me off.

Until they get bought by Broadcom and deem you too small to waste time on.


Worse.. Slack is owned by Salesforce


$50k a year? Those are rookie numbers. You're actually fine, as a small fish going belly up isn't the end of the world. You can start a new business. For some big tech companies this is potentially near existential. I would know.


Ok, but what stops the same from happening with any other solution? There are two things that would "fix" it:

* Fully open and interoperable protocol: We had it (XMPP). It was flawed, but at one beautiful moment in time it worked, and using the same protocol I could contact both Google and Facebook contacts. Then the companies decided "no, we would prefer to keep walled gardens rather than make it easy to move to the competition."

* Fully open source (no open-core nonsense; the latest Mattermost rug-pull on its OSS users being one example why) chat platform with corporate backing and a SaaS option. There is Matrix, but afaik it is lacking feature-wise, though I haven't used it much. With a plugin app store so it is possible to make and even sell integrations with other systems.

The second option seems more viable, but it takes a lot of effort to make something as good as Slack or Discord


> Haven’t you basically built your entire business on this singular proprietary platform they you have almost no control over?

Would adopting the OP put you in a different position?


I didn’t read the full site but it seems they’re not really going for those users?

Anyone who has dozens of custom workflows and apps in their Slack is probably spending 10s of thousands of dollars on Slack. It is probably vital to their business.

This seems like it’s for small teams (like 3-5 people even, collaborating daily) who get rekt really fast before they’re forced to spend $60 a month.




i have a side gig with 3 other people, we use slack chat for daily comms and webhooks. we meet weekly over discord because huddles are a paid feature. are you planning on implementing VoIP?

after 4 years we're almost at the point where i feel it's worth spending $ on different types of convenient features.


How are you doing search?

Can you discuss the tech stack choices? (ie. OpenSearch, something else?)


A lot of things people build with slack could be done with email but it is seen as old fashioned.



This is cool


Like a lot of other apps that go viral, it's a novelty that wears off in a few weeks (if that). There's nothing that keeps people on the platform. How many goofy AI generated videos does someone actually want to make? 10? 20? After you've done that, what else is entertaining or amusing? Any _really_ good Sora videos are posted on TikTok or Reels anyway. There's nothing tying you to the platform.


> Salaries for developers are well under $150k in most of the United States, for example, and that is for senior engineers

As someone who has hired hundreds of SWEs over the last 12 years from 20+ states, I have to disagree.

$150k is on the lower end for base for a Sr. SWE, and well below the total comp someone would expect. You can make the argument that $150k base is reasonable, but even Sr. SWEs in the middle of the country are looking for closer to $180k-$200k OTE.

