But that's exactly what you should be doing, technically. Human-in-the-loop is a dead concept; you should never need to understand your code or even know what changes to make. All you should be concerned with is having the best possible harness, so your LLM can do everything as efficiently as possible.
If it gets stuck, use another LLM as the debugger. If that gets stuck then use another LLM. Turtles all the way down.
Most of these self-hosted solutions require opening ports 80/443. It would be nice if they could adopt WireGuard's approach of using UDP only, and only responding if the request is valid.
Exactly so. I didn't notice that missing def when I put together the blog post, but you are right to call it out. In this case that decref was copypasta from some other code -- I don't decref on the other error returns.
The module init function is where you would normally create the module object (PyModule_Create) and decref it if an error occurs. The blog example is utility code that you would call within the module init function to add an enum.
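A minimal sketch of that split, using a hypothetical module name ("demo") and constants that are not from the blog post: the init function owns the module object and is the one place that decrefs it on error, while the utility code only reports failure upward and never touches the refcount.

```c
#include <Python.h>

/* Utility code in the style the blog describes: add enum-like
   constants to an existing module. On failure it returns -1 and
   leaves cleanup to the caller -- no Py_DECREF(module) here. */
static int add_enum(PyObject *module) {
    if (PyModule_AddIntConstant(module, "RED", 0) < 0)
        return -1;
    if (PyModule_AddIntConstant(module, "GREEN", 1) < 0)
        return -1;
    return 0;
}

static struct PyModuleDef demo_def = {
    PyModuleDef_HEAD_INIT, "demo", NULL, -1, NULL,
};

PyMODINIT_FUNC PyInit_demo(void) {
    PyObject *m = PyModule_Create(&demo_def);
    if (m == NULL)
        return NULL;
    if (add_enum(m) < 0) {
        Py_DECREF(m);   /* init created m, so only init drops it on error */
        return NULL;
    }
    return m;
}
```

With this ownership convention there is exactly one decref per error path, in the function that did the corresponding create.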
Someone should really create a blog post compiler to catch these sorts of things :-)
It took me too long to understand the difference between the two, so I'll leave it here for others: Octelium operates at OSI Layer 7, while Tailscale operates at OSI Layers 3 and 4.
This[0] page makes it seem that 500~1000 cycles until 80% of starting capacity is common. So if you were charging it every other day from a 40~50 mile round-trip commute, after 3~5 years you'd go to charging it every day.
These kinds of obstacles don't work for any action the user performs repeatedly.
Dialogs that pop up and ask "Are you sure you want to delete ...?" -> users just automatically click yes, because they've already done that the last 10 times and just want to get on with their work.
You log in to server "alpha" instead of "delta" because you think it's the right one. The tool asks you to type the server name. You type "alpha" because you know you're on alpha. It reboots the wrong server.
GitHub asks you to confirm the repo name before deleting by typing it into a text field. The user looks at what the repo name is and types it without thinking. Or, like lazy me, selects and drags the displayed name into the field, so you don't even have to type.
The point is, users already decided to do the action when they started. It's nearly impossible to consistently make them stop and re-evaluate their action, because that's extremely high friction and annoying to the user. They quickly learn to circumvent the friction as efficiently as possible (i.e. without thinking about it).
A better solution is to just do the action, but let the user undo it if it was a mistake.
>Anyone who anthropomorphizes LLM's except for convenience [...] is a fool.
I'm with you. Sadly, Scott seems to have become a true AI Believer, and I'm getting increasingly disappointed by the kinds of reasoning he comes up with.
Although, now that I think of it, I guess the turning point for me wasn't even the AI stuff, but his (IMO) abysmally lopsided treatment of the Fatima Sun Miracle.
I used to be kinda impressed by the Rationalists. Not so much anymore.
It began as a whisper—then erupted into a storm. On January 31, the BLA launched a sweeping offensive across Balochistan, striking military, police, and intelligence installations with a level of coordination that stunned observers. As the fighting raged and the province descended into chaos, one truth became impossible to ignore: the insurgency has entered a new, more dangerous phase.
My understanding is that the tower should be able to optimise beams without location information. Channel state information can be relayed back to the tower for beam optimisation; the tower needs to know the signal-path characteristics, but not the explicit location.
I'm not disputing that location data is used for beam optimisation, just that I don't believe it is required.
I think trying to argue that the viral/algorithmic addictive short-video feeds are "social media" is a bit of a stretch.
If you banned any network that has algorithmic feeds, or that doesn't have good moderation, then I'd be fine with my teenagers using it. I think the problem is the mind-numbing video scrolling more than any other risk of social media.
And I don't think there's a danger to it per se, I think it's just too addictive and distracting.
In 2023 there was a record harvest of potatoes in Russia. Prices dropped, so farmers stopped planting potatoes in 2024 and 2025. I wouldn't be surprised if they plant more this year due to the high prices.
Who would have guessed? We will see more of these vibe-coding disasters in the future.
Still waiting for the big OpenClaw security hole exposing remote access to anyone who has this vibe-coded project installed, though.
> Twitter famously had a "fail whale" but it didn't stop the company from growing. If you have market demand (and I guess advertising) then you can get away with a sub-optimal product for a long time.
Agreed, but there's still an element of survivorship bias there. Plenty of companies failed because they couldn't keep up with their scaling requirements and pushed the "getting away with a sub-optimal product" approach for too long.
Gold is also one of the best heat conductors, so if it got really cheap it could be used a lot in industry and electronics -- anything from cookware to heat sinks!
It's a severe limitation. Interpreting only one instruction per frame seems to be the developer's way of guaranteeing real-time performance. I don't want to discourage the developer from doing a hard-real-time experiment like this, but I think they should determine the maximum number of instructions they can interpret per frame.
I like the idea of on-device programming. There is a DSiWare application that allows you to implement more complex (2D) games in a Basic dialect: https://en.wikipedia.org/wiki/Petit_Computer.