Hacker News | monkeynotes's comments

Even his blog has the Claude vibe to it.

https://www.adriankrebs.ch/about

> The site is built with Astro. Design inspired by Paul Stamatiou.

https://paulstamatiou.com


What if PTSD therapy focuses on accepting the things you can't control and sitting with the pain? That's how I work through anxiety and depression. I know it will never be gone; I don't set expectations of living without anxiety, I just try to sit with it and accept it.

Much of dealing with mental trauma is about acknowledging it and learning to live with it. There is no cure for PTSD; even ketamine is short-acting, not a long-term solution, and indeed ketamine simply helps you sit with the suffering in a different light.


>> There is no cure for PTSD...

But there are treatments. Last I read exposure therapy and EMDR were the two main ones. I don't think I'd be a big fan of exposure until the reactions have been significantly reduced, but everyone is different. EMDR didn't do much for me, but Internal Family Systems did. CBT is also great for some people.


LLMs aren't trading off anything. It's not like they make a decision based on anything other than what they are guided to do in training or in the system prompt.

It's like saying Reddit trades off one comment for another, yeah - an algorithm they wrote does that.

This article seems to allude to the idea there is a ghost in the machine, and while there is a lot of emergent behavior rather than hard coded algorithms, it's not like the LLM has an opinion, or some sort of psychology/personality based values.

They could change the system prompt, bias some training, and have completely different outcomes.


Hardly. They are burning money with TikSlop; they don't even know how to monetize it, they just YOLO'd the product to keep investors interested.

Even the porn industry can't seem to monetize AI, so I doubt OpenAI who knows jack shit about this space will be able to.

Fact is generative AI is stupidly expensive to run, and I can't see mass adoption at subscription prices that actually allow them to break even.

I'm sure folks have seen the commentary on the cost of all this infrastructure. How can an LLM business model possibly pay for a nuclear power station, let alone the ongoing overheads of the rest of the infrastructure? The whole thing just seems like total fantasy.

I don't even think they believe they are going to reach AGI, and even if they did, and if companies did start hiring AI agents instead of humans, then what? If consumers are out of work, who the hell is going to keep the economy going?

I just don't understand how smart people think this is going to work out at all.


> I just don't understand how smart people think this is going to work out at all.

The previous couple of crops of smart people grew up in a world that could still easily be improved, and they set about doing just that. The current crop of smart people grew up in a world with a very large number of people and they want a bigger slice of it. There are only a couple of solutions to that and it's pretty clear to me which way they've picked.

They don't need to 'keep the economy running' for that much longer to get their way.


> I just don't understand how smart people think this is going to work out at all.

That's the thing, they aren't looking at the big picture or the long term. They are looking to get a slice of the pie after seeing companies like Tesla and Uber milk the market for billions. In a market where everything from shelter to food is blowing up in cost, people struggle to provide, or to have a life similar to their parents'.


How can you take the market for billions when you are investing hundreds and hundreds of billions? Amazon overtook Walmart and owns cloud computing; they have a solid business model, and I doubt even a business that size could pay down that outlay. Are we really saying that by some miracle OpenAI or Anthropic are going to find a use case that would make the likes of Amazon and Apple look like relatively small businesses?


> Are we really saying that by some miracle OpenAI, or Anthropic are going to find a use case that would make places like Amazon and Apple look like relatively small business?

I thought the replacement of all desk jobs was supposed to be that joking-not-joking use case.


> If consumers are out of work, who the hell is going to keep the economy going?

There is a whole field of research on the post-scarcity economy. https://en.wikipedia.org/wiki/Post-scarcity

tl;dr: it's not as bad as you think, but the transition is going to be bad (for some of us).


> for some of us

I've read that before:

“Many men of course became extremely rich, but this was perfectly natural and nothing to be ashamed of because no one was really poor – at least no one worth speaking of.”


Douglas Adams, The Ultimate Hitchhiker’s Guide to the Galaxy, about the custom planet industry

https://www.goodreads.com/quotes/437536-many-men-of-course-b...


It can only be "not as bad as you think" if the people currently at the top don't continue to hoard all the gains.

If the current system is maintained—the one where if you don't work, you don't earn money, and thus you can't pay for food, shelter, clothing, etc—then it doesn't matter how abundant our stuff is; most people won't have any access to it.

In order for society to reap the benefits of post-scarcity, we must destroy the idea that the people at the top of the corporate pyramid deserve astronomically more money than the people actually doing the work.


The planet has finite resources, not least land. And then there is the human psychology of hoarding resources.



What has AI got to do with this? It's in the headline but I don't see why.


The API could be used for non-AI use cases if you wanted to, but it’s built to be integrated with an LLM through tool calling. We provide an MCP (model context protocol, for integration in Claude, Cursor, Windsurf etc.) server.


You might have noticed that ChatGPT (and others) will sometimes run Python code to do calculations. My understanding is that this will enable the same thing in other environments, like Cursor, Continue, or aider.


Also, those code interpreters usually can't make external network requests, so this adds a lot of capability, like pulling in some data and then analyzing it.
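The tool-calling pattern being discussed can be sketched in a few lines. This is a minimal stdlib-only illustration (all names here are hypothetical, not the actual product or MCP API): the model emits a JSON tool call, the host dispatches it to a local function, and the result goes back into the model's context.

```python
import json

def run_python(code: str) -> str:
    """Hypothetical tool: execute a calculation snippet and return the result.
    Real interpreters sandbox this far more carefully; here we just expose
    a couple of safe builtins for the demo."""
    scope: dict = {}
    exec(code, {"__builtins__": {"range": range, "sum": sum}}, scope)
    return str(scope.get("result"))

# The host app maps tool names the model may call to local functions.
TOOLS = {"run_python": run_python}

def handle_tool_call(message: str) -> str:
    """Dispatch a JSON tool call of the shape a model would emit."""
    call = json.loads(message)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A model asked "what is the sum of 1..100?" might emit:
model_output = json.dumps({
    "name": "run_python",
    "arguments": {"code": "result = sum(range(1, 101))"},
})
print(handle_tool_call(model_output))  # 5050
```

An MCP server is essentially this loop standardized over a transport, so the same tool can be plugged into Claude, Cursor, and other hosts, and a `fetch`-style tool in the same registry is what lifts the no-network limitation.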


Ah, so it could basically be "the tool". Do you plan on hooking in a vector DB as well?


This isn't about whether LLMs are useful; it's about how useful they can become. We are trying to understand if there is a path forward to transformative tech, or whether we are just limited to a very useful tool.

It's a valid conversation after ~3 years of anticipating that the world would be disrupted by this tech. So far it has not delivered.

Wikipedia did not change the world either; it's just a great tool that I use all the time.

As for software, it performs OK. I give up on it most of the time if I am trying to write a whole application. You have to acquire a new skill, prompt engineering, plus feverish iteration. It's a frustrating game of whack-a-mole, and I find it quicker to write the code myself and just have the LLM help me with architecture ideas and bug bashing; it's also quite good at writing tests.

I'd rather know the code intimately so I can more quickly debug it than have an LLM write it and just trust it did it well.


By the way, Wikipedia did change the world. Some of the most important inventions are the ones we don’t notice.


I was so stupid when GPT-3 came out. I knew so little about token prediction that I argued with folks on here that it was capable of so many things that I now understand just aren't compatible with the tech.

Over the past couple of years of educating myself a bit, and whilst I am no expert, I have been anticipating a dead end. You can throw as much training at these things as you like, but all you'll get is more of the same with diminishing returns. Indeed, in some research the quality of responses gets worse as you train with more data.

I have yet to see anything transformative out of LLMs, other than demos that prompt engineers work night and day to make impressive. Those Sora videos took forever to put together and cost huge amounts of compute. No one is going to make a whole production-quality movie with an LLM and disrupt Hollywood.

I agree: an LLM is like an idiot savant, and whilst it's fantastic for everyone to have access to a savant, it doesn't change the world like the internet or the internal combustion engine did.

OpenAI is heading toward some difficult decisions: either admit their consumer business model is dead and compete with Amazon for API business (good luck), become a research lab (and give up on being a billion-dollar company), or get acquired and move on.


Last thing I want is even more ways to distract myself. I want an anti-algorithm or something to permanently ban me from addictive content.


> nobody knows why

But we do know the culpability rests on the shoulders of the humans who decided the tech was ready for work.

