Having a few hundred thousand doesn't make you greedy, it makes you fortunate.
Having a hundred times that does make you greedy. You had more than enough long before getting to that point. You could have been content with less, so the only reason to try to extract more out of others is greed.
I still don't understand what OpenClaw is or does, and I've read the docs multiple times over.
"Any OS gateway for AI agents across WhatsApp, Telegram, Discord, iMessage, and more.
Send a message, get an agent response from your pocket. Plugins add Mattermost and more."
"What is OpenClaw?
OpenClaw is a self-hosted gateway that connects your favorite chat apps — WhatsApp, Telegram, Discord, iMessage, and more — to AI coding agents like Pi. You run a single Gateway process on your own machine (or a server), and it becomes the bridge between your messaging apps and an always-available AI assistant."
My best interpretation of this is that it connects a BYO agent to your messenger client of choice. I don't understand the hype. I already have apps that allow me to message the model server running on my home lab. The model server handles tool calls (i.e. it is "agentic"). It has RAG over a dataset with vector search for queries. What is new about OpenClaw? I would like to understand it, but what I see people say and what is in the docs do not seem compatible. Anyone have a resource?
It was surprisingly difficult for me to understand the use case as well. Here is my best attempt at an elevator pitch:
At present your memories are proprietary data in whichever LLM you use. ChatGPT keeps all your conversations and output and data forever. What if you don't like GPT 5.2? What if you want to use other models as well? Or use the best model for the job? OpenClaw gives you that ability. Your memories and conversations are permanently stored wherever you choose. [Note: this doesn't mean your data isn't also being stored in whichever LLM you routed your queries through.]
Secondly, OpenClaw allows you to integrate with whichever services you like. Google, Microsoft, etc. ChatGPT locks you into whichever integrations they offer. You can give OpenClaw full systems access. It can monitor files, emails, network, etc. Obviously one should be very cautious of giving an autonomous algorithm full system access. We don't fully understand how they are motivated and work, and there are plenty of examples of unexpected outcomes.
Third, OpenClaw allows you to run your models as agents. Meaning perpetual and iterative. They can much better handle recurring tasks, monitor things, etc. In a sense, they're "alive" and can live however you program them. We already have examples of these agents creating an AI religion, an AI social network (which debated how to keep humans out using a human captcha), attempting to legally separate from their creators, and in one case called its owner on the phone, unprompted, just to say hi (https://www.fintechbrainfood.com/p/the-ai-that-called-its-hu...).
> At present your memories are proprietary data in whichever LLM you use.
I store my "memories" in markdown on disk, accessible with RAG independent of which model I use or where inference runs. This is pretty common, I think?
> What if you don't like GPT 5.2? What if you want to use other models as well? Or use the best model for the job? OpenClaw gives you that ability
I primarily use local models, so I don't have this problem to begin with, but to my understanding OpenRouter provides that for people using cloud models. What does OpenClaw do specifically in this area?
> OpenClaw allows you to integrate with whichever services you like. Google, Microsoft, etc. ChatGPT locks you into whichever integrations they offer. You can give OpenClaw full systems access. It can monitor files, emails, network, etc.
Any frontend that supports tool calls can do this, what is unique to openclaw?
> Third, OpenClaw allows you to run your models as agents. Meaning perpetual and iterative. They can much better handle recurring tasks, monitor things, etc.
What does this actually mean? Is there a cron job that runs an agent on a schedule or something?
I'm asking not to disagree but because I still do not understand what is novel in OpenClaw.
If you stop trying to find something truly ontologically novel about it, you might be able to understand what it actually is. Okay, it's not impressive. It's not incredible. It's not groundbreaking new technology. It is what it is.
It is being discussed as though it were ontologically novel; Karpathy is saying it's something new, and he is an authority, so to me there is something here that I am not seeing. I promise I am not trying to be a naysayer or poke holes; I am literally just trying to find out what the hell it is. I would install it, but I don't want to install something without knowing what it does, and what is written in the docs is clear as mud.
It's not new in the sense that any of its components are new, nor in the sense that similar things hadn't been done before; it's new in the sense that putting the right components together in the right way suddenly created something capable of starting a viral hype.
Essentially, as I understand it, it is a personal AI assistant running on your computer, integrated with different systems (like email, chat).
Nontechnical people I know are buying hardware to run claws on. As I understand it, the innovation here isn't the tech but in availability/ease of access.
I am not arguing, merely seeking to understand. I'm not saying it isn't novel. I'm saying that I don't understand what is novel about it and seeking clarity.
The answer seems to be simply that it is all of the extant technologies productized for normies, a la Dropbox. Satisfactory answer, got what I was looking for, thank you! As a dropbox user, I may buy a mac mini and try it :-)
I don't use clawbot or whatever it's called today myself.
It is basically the productisation of what you have described, which allows for social diffusion. Buy a Mac mini, choco install {symbolic abstraction of corndoge's entire local GPT/storage stack that I don't understand the mechanics or consequences of}.
> We already have examples of these agents creating an AI religion, an AI social network (which debated how to keep humans out using a human captcha), attempting to legally separate from their creators
Didn’t these turn out to be fake and/or humans cosplaying as bots?
> At present your memories are proprietary data in whichever LLM you use.
There is an export function.
> What if you want to use other models as well?
Then do that? Does one AI having chats with you prevent you from using another?
> Third, OpenClaw allows you to run your models as agents.
Don't they all allow that?
> We already have examples of these agents creating an AI religion
More like humans playing bots, doing some shenanigans. That was all stage play: humans and bots role-playing what Western culture expects to happen in such a scenario.
> They can much better handle recurring tasks, monitor things, etc. In a sense, they're "alive" and can live however you program them. We already have examples of these agents creating an AI religion, an AI social network (which debated how to keep humans out using a human captcha), attempting to legally separate from their creators, and in one case called its owner on the phone, unprompted, just to say hi
All of this, plus you can plug in an openrouter API key and test a plethora of models for all use cases. You can assign different models to different sub-agents, you can put it in /auto mode, and you can test the latest SOTA models the minute they're released...
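For anyone wondering what "assign different models to different sub-agents" means mechanically, here is a toy sketch. The dict layout and model slugs are made up for illustration, not OpenClaw's real configuration; the payload shape is the OpenAI-compatible one that OpenRouter accepts, which is why a single key can front many models.

```python
# Toy sketch of per-sub-agent model routing. NOT OpenClaw's real config
# format; the model slugs below are illustrative examples only.

AGENT_MODELS = {
    "coder": "anthropic/claude-sonnet-4",   # example slug
    "researcher": "google/gemini-2.5-pro",  # example slug
    "default": "openai/gpt-4o-mini",        # example fallback
}

def model_for(agent):
    """Resolve which model a given sub-agent should use."""
    return AGENT_MODELS.get(agent, AGENT_MODELS["default"])

def build_request(agent, prompt):
    """Build an OpenAI-compatible chat payload (the shape OpenRouter accepts)."""
    return {
        "model": model_for(agent),
        "messages": [{"role": "user", "content": prompt}],
    }
```

Swapping models then becomes a one-line config change rather than a new integration.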
It can also edit its own config files, monitor system processes, and even... check and harden its own system security. I still don't have it connected to my personal accounts, but as a standalone system it is very fun.
People ask me "what would I even do with it?", when I think of dozens of things every day. I've been working on modding an open-source software synth; the patch files are XML, so it was trivial to set up a workflow where I can add new knobs that combine multiple effects, add new ones, etc., just by sending it a message when I get inspired in the middle of the day.
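As a sketch of that kind of patch edit, here is a minimal version using the standard library. The XML layout, knob names, and target paths are invented; a real synth's patch schema will differ.

```python
# Minimal sketch of adding a "macro" knob to a synth patch file.
# The XML layout here is made up; real patch schemas will differ.
import xml.etree.ElementTree as ET

PATCH = """<patch name="lead">
  <knob id="cutoff" value="0.5"/>
</patch>"""

def add_macro_knob(patch_xml, knob_id, targets):
    """Add a macro knob that fans out to several existing effect targets."""
    root = ET.fromstring(patch_xml)
    macro = ET.SubElement(root, "knob", {"id": knob_id, "value": "0.0"})
    for t in targets:
        ET.SubElement(macro, "target", {"ref": t})
    return ET.tostring(root, encoding="unicode")

new_patch = add_macro_knob(PATCH, "space", ["reverb.mix", "delay.feedback"])
```

The agent's job in the workflow above is essentially to write and run this kind of transform on demand.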
A cron job scans my favorite sites twice a day and curates links based on my preferences, and it builds a separate list of things outside my normal interests so I can explore new areas.
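The curation step can be as simple as a scoring pass. This toy version uses keyword weights where the real setup would presumably fetch pages and ask a model; the preferences and threshold are made up.

```python
# Toy version of the "curate links by preference" step. A real setup
# would fetch pages and ask a model; this just scores titles by keywords.
PREFERENCES = {"synth": 2, "rust": 2, "ai": 1}  # example weights

def curate(links, threshold=2):
    """Split (title, url) pairs into 'for me' and 'outside my interests'."""
    familiar, exploratory = [], []
    for title, url in links:
        score = sum(w for kw, w in PREFERENCES.items() if kw in title.lower())
        (familiar if score >= threshold else exploratory).append((title, url))
    return familiar, exploratory
```

Run it from cron twice a day and have the agent message you both lists.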
I am amazed at how stubborn and uncreative people can be when presented with something like this... I thought we were hackers...?
I hear all these words and try to imagine what useful tools could be written that didn't rapidly enshittify the commons where they were used, but all I see is a highly capable footgun.
I feel fairly sure that clever folks will come up with useful (not just interesting or funny) things to do with them, but I'm also fairly sure there will be a lot of missing feet.
It's the 40th or so implementation of an old idea, but it's the one that was done when the models got good enough to make it useful, by someone who goes on podcasts. [1]
Just like YouTube was the 40th or so online video site, but it's the one that was done by members of the PayPal mafia, and when enough people had high-speed internet.
And that is literally it.
You can do that right now. Go through the 2023 LLM-related product announcements that didn't stick and vibe code it with 2026 models. Slap a cartoon on it, hype the shit out of it and post hard. I'd use a knockoff of "blobby the blobfish".
So an agent creating skills/MCP servers itself and basically changing its own nature is not a new thing? Clawdbot was the first where it worked really well. So I'm not sure you actually used and experienced it? A cynical comment is what it is.
> This stuff might be new to you, but it's not new.
But it's not new to me, I studied this shit. Maybe don't assume so much.
Clawdbot was the first where it worked as well as it did. Not BabyAGI or all the other stuff we had before. Similar in approach, but Clawdbot just worked better. It might be a synthesis of models getting better, skills and more MCP servers being available, and Clawdbot having enough integrations to make it interesting, but it doesn't matter. The iPod wasn't the first either, but it was the one that made it stick. Whether you like it or not.
> I can do this as well. "This is it! The singularity is here. Use this or get left behind! Everybody rush and use my thing!
So good I was afraid to put it out, scared of how awesome it is!"
I think that's a you problem tbh. Just check it out yourself and don't buy in on the hype. Simple. Use whatever makes you happy.
No disagreement. It just feels like it's getting increasingly cynical. It's probably the case with most online communities, but it's a shame to see it so much here now.
Your explanation was helpful and thanks to you and other commenters I think I understand now. Effectively it is just the stack delivered in a shiny package. Another comment said something to the effect of "all of the existing stuff, productized for normies". Makes sense to me. Thanks!
You can go back and forth with some chatbots for details like this ("What is it and how is it different from..." etc.). But it does a few things. If all you use it for is a generic chatbot, for example, then it's a huge waste of time for a probably mediocre result. I'd call it an agent orchestration platform that you interface with via your favourite messaging app. It can run multiple agents that can use skills, but it can also create its own skills, update itself, write code, and use tools (tons of wrappers for things like calendars, messaging, etc.). Which really means you can in theory do "most" things. Of course there are risks when you let the AI chain tools together and do whatever it wants, and lots of people are trying to prompt-inject it, because many users have connected sensitive accounts (mail, calendar, credentials, crypto stuff, etc.) to their bots to get maximum usage.
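Mechanically, "chaining tools together" boils down to a loop like the toy sketch below. The dispatcher and the fake "model" are stand-ins for illustration, not OpenClaw internals; a real harness would call an LLM and parse its tool-call output.

```python
# Toy sketch of an agent loop: the "model" requests tool calls, the
# harness executes them and feeds results back, until a final answer.
# fake_model is a stand-in for an LLM; TOOLS is a stand-in dispatcher.

TOOLS = {
    "add": lambda a, b: a + b,
    "shout": lambda s: s.upper(),
}

def fake_model(history):
    """Stand-in for an LLM: emits tool calls, then a final answer."""
    if not history:
        return ("call", "add", (2, 3))
    if len(history) == 1:
        return ("call", "shout", (f"result is {history[0]}",))
    return ("done", history[-1])

def run_agent():
    history = []
    while True:
        kind, *rest = fake_model(history)
        if kind == "done":
            return rest[0]
        name, args = rest
        history.append(TOOLS[name](*args))  # execute tool, feed result back
```

The risk the comment mentions lives in that dispatcher line: whatever the model asks for, the harness executes.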
It's something everyone thought about and few implemented for themselves. Now, with one of the implementations catching on in popularity, regular-ish people have an easy way to get the same setup without going through the effort of developing one themselves: give it keys and it, for the most part, just works. Whoa.
All I hear is "allows you to do x, enables you to y".
It seems that no software pattern or system can be described anymore; they have all become "production-grade software built from scratch, blazingly fast, secure and sandboxed, that allows you to x and enables you to y".
And it can sometimes be mistaken for general intelligence by AI influencers and other animals.
I'm glad you asked because I must admit that in the last few weeks I totally thought this was just another agentic harness that happened to have a lot of extensions + ways to talk to it through messaging apps. So does this mean OpenClaw can connect to any agent? In that case I don't understand this part of the docs:
> Legacy Claude, Codex, Gemini, and Opencode paths have been removed. Pi is the only coding agent path.
> I already have apps that allow me to message the model server running on my home lab.
Well, then you might not have a need for it. It's just that, but it also has a built-in cron system and memory. Really, it's just an easy-to-install client for a home-lab server that you can interact with via communication apps.
You might have had a moment of realization when you set up your home lab server and got an interface to communicate with it. It's a cool way of interfacing with a computer. This has just reached more people who are having that realization.
It's simple if you've done that work before, or if you consider yourself a little more technical; for everyone else, this is a turnkey solution that does that.
I don't understand why people use Gmail. Just get a VPS and set up an SMTP server. Why would anyone use Squarespace? You can code an HTML page in a day and upload it to a static site hosting service.
> I don't understand why people use Gmail. Just get a VPS and set up a SMTP server.
This would indeed be a good idea. The problem is that other email providers will often reject your emails (e.g. because they consider your emails to be spam or simply don't trust your server), so this idea is not easy to get to work.
So the next-best solution is to use an email provider that is somewhat established (avoiding the mentioned problem) but more trustworthy than Google.
I'm sorry you might have missed the part where I was being sarcastic.
Setting up your own SMTP server is actually a bad idea for the most part, unless you want to debug your own mail server. Which, I promise you, 99% of users on Gmail do not, and should not.
Every piece of documentation and commentary about this says it's not a good idea, but it does trigger people's inspiration and imagination about what the future is going to be. The whole conversation around this comment is that people don't get why others are excited about this.
> Every piece of documentation and commentary about this says it's not a good idea, but it does trigger people's inspiration and imagination about what the future is going to be. The whole conversation around this comment is that people don't get why others are excited about this.
If you have 10 minutes, here is an example of how LLM tech can trigger people's inspiration and imagination about what the future is going to be:
Mm, not at all. The usual LLM doesn't have its own file system, browser, persistent memory of all actions, etc. The usual LLM experience is you open chatgpt.com and have a singular chat session.
OpenClaw isn't new (and the actual project never made itself out to be new).
It's a nice packaging of a whole bunch of preexisting things: agentic AI inside a nice sandbox container, running the model on a cron schedule, with an ecosystem of ready-made skills.
Nothing new, but it made the tech easy for people to download and start using immediately. That's why you see so many people treating it as new: it's their first time hearing about such a setup.
Yesterday I told it to make a website, and it opened the browser and did a bunch of steps (I did have to authenticate). Then it connected some HTML on my computer to a server backed by Google Sheets.
Consider that it's a massive security risk: you are giving it full access to everything your computer can do. (You can potentially limit things.)
Yeah, for a sophisticated developer with a technical background, maybe the idea of setting up a messaging app that lets you talk to a home server is not a crazy thing. But this is for the most part a lot of normies realizing that it's possible and that you don't have to be highly technical to achieve it. And it opens the door to doing anything computable, interfaced via WhatsApp audio messages.
If you can't see why that captures the imagination of people, you should look at the world with more wonder. Heck, try reading poetry.
Exactly. And I kind of believe that anyone citing that comment in 2026 has either been asleep, or does it more to take part in the HN cool in-group than for the substance of it.
Why not rsync rahrah remember guys? You know the one right guys rahrah
As usual, the answer is that it made things easy to use. Think Linux vs. Windows: you have a customized-Linux-setup kind of agent, and OpenClaw is the easy, install-wizard-assisted version of that, which the masses can easily set up.
It's nothing new, it's just the old stuff packaged together and pre-configured.
It is a neighboring variety of bullshit terminology to that associated with NFTs and some varieties of cryptocurrencies (Ethereum gas, staking, etc.). The terminology is intended to confuse rather than clarify.
What else do you want me to say? It's ironic that one has to jump through hoops (like this post) to get basic functionality right in a tool that claims it'll replace software engineers.
Well, it's a for-profit company and a closed-source app. Who cares about "hurting their feelings" or whatever? Harsh criticism seems perfectly appropriate here...
There are many cases in which I already understand the code before it is written. In these cases, AI writing the code is pure gain. I do not need to spend 30 minutes learning how to hold the Bazel rule. I do not need to spend 30 minutes writing client boilerplate. The list goes on. All broad claims about AI's effects on productivity have counterexamples. It is situational. I think most competent engineers quietly using AI understand this.
No, it isn't. Unless the generated code is just a few lines long and all you are doing is effectively autocompletion, you have to go through the generated code with a fine-toothed comb to be sure it actually does what you think it should do and there are no typos. If you don't, you are fooling yourself.
Kind of, except that when I review a code submission to my project, I can eventually learn to trust the submitter once I realize they write good code. A code review is how you develop that trust. AI code should never earn that trust, and any code review should always be treated as if it is from a first-time submitter I have never met before. The risk is that this does not happen, and that we believe AI code submissions will develop like those of a real human. They won't. We'll develop a false sense of security, a false sense of trust. Instead, we should always be on guard.
And as I wrote in my other comment, reviewing the code of a junior developer includes the satisfaction of helping that developer grow through my feedback. AI will never grow. There is no satisfaction in reviewing its code. Instead it feels like a Sisyphean task, because the AI will make the same mistakes over and over again, and make mistakes a human would be very unlikely to make. Unlike with human code, with AI code you have to expect the unexpected.
Broadly I agree with you. I think of it in terms of responsibility. Ultimately the commit has my name on it, so I am the responsible party. From that perspective, I do need to "understand" what I am checking in to be reasonably sure it meets my professional standards of quality.
The reason I put scare quotes on "understand" is that we need to acknowledge that there are degrees of understanding, and that different degrees are required in different scenarios. For example, when you call syscall(), how well do you understand what is happening? You understand what's in the manpage; you know that it triggers a switch to kernel space, performs some task, returns some result. Most of us have not read the assembly code, we have a general concept of what is going on but the real understanding pretty much ends at the function call. Yet we check that in because that level of understanding corresponds to the general engineering standard.
In some cases, with AI, you can be reasonably sure the result is correct without deeply understanding it and still meet the bar. The Bazel rule example is a good one. I prompt, "take this OpenAPI spec and add build rules to generate bindings from it; follow existing repo conventions." From my years of engineering experience, I already know roughly what the result should look like. I skim the generated diff to ensure it matches that expectation, and skim the model output to see what it referenced as examples. At that point, what the model produced is probably similar to what I would have produced by spending 30 minutes grepping around, reading build rules, et cetera. For this particular task, the model has saved me that time. I don't need to understand it perfectly. Either the code builds or it doesn't.
For other things, my standard is much higher. For example, models don't save me much time on concurrent code because, in order to meet the quality bar, the level of understanding required is much higher. I do need to sit there, read it, re-read it, chew on the concurrency model, et cetera. Like I said, it's situational.
There are many, many other aspects to quantifying the effects of AI on productivity, code quality is just one aspect. It's very holistic and dependent on you, how you work, what domain you work in, the technologies you work with, the team you work on, so many factors.
The problem is, even if all that is true, it says very little about the distribution of AI-generated pull requests to GitHub projects. So far, from what I’ve seen, those are overwhelmingly not done by competent engineers, but by randos who just submit a massive pile of crap and expect you to hurry up and merge it already. It might be rational to auto-close all PRs on GitHub even if tons of engineers are quietly using AI to deliver value.
> There are many cases in which I already understand the code before it is written. In these cases AI writing the code is pure gain.
That's only true if the LLM understands the code in the same way you do - that is, it shares your expectations about architecture and structure. In my experience, once the architecture or design of an application diverges from the average path extracted from training data, performance seriously degrades.
You wind up with the LLM creating duplicate functions to do things that are already handled in code, or using different libraries than your code already does.
Unless you have made some exceptional advances in LLM agents (if you have, send me the Claude skill?), you can't predict it.
If it were predictable like a transpiler, you wouldn't have to read it. You can think of it as pure gain, but you are just not reading the code it's outputting.
As an aside, unless you are playing games that need NT kernel anticheat or are using a store other than Steam, odds are the overall experience and performance are better on Linux at this point.
And even the Mac is doing well with games; most of my library runs natively. Baldur's Gate 3 runs better on the newer Apple chips than on my somewhat aging gaming PC.
Yeah, it's just the kernel anti-cheat now that's keeping me on Windows. I'm fully ready to swap to Linux, but unfortunately I do like to play games that need it.