Virtual machines are the wrong abstraction. Anyone who has worked with startups knows that average developers cannot produce secure code. If average developers are incapable of producing secure code, why would average non-technical vibe-coders be able to? They don't know what questions to ask. There's no way vibe coders can produce secure backend software with or without AI. The average software that AI is trained on is insecure. If the LLM sees a massive pile of fugly vibe-coded spaghetti and you tell it "Make it secure please", it will turn into a game of Whac-a-Mole. Patch a vulnerability and two new ones appear. IMO, the right solution is to not allow vibe-coders to access the backend. It is beyond their capabilities to keep it secure, reliable and scalable, so don't make it their responsibility. I refuse to operate a platform where a non-technical user is "empowered" to build their own backend from scratch. It's too easy to blame the user for building insecure software. But IMO, as a platform provider, if you know that your target users don't have the capability to produce secure software, it's your fault; you're selling them footguns.
Sounds like they got sick of it after working on it so much. It's really frustrating and tiring when you get everything essentially right but it doesn't work out because of slightly off timing or for some absurdly complex set of reasons that you can never pin down. That's the recipe for burnout.
(1) Tried to bring back the old symbolic AI in 2010s on my own account but people kept knocking down my door because they needed help with one or another neural net. I got an autoencoder in front of customers as part of a highly successful product.
(2) Worked for a startup trying to teach RNNs to read clinical notes; people typecast me as the idealist, but I would have preferred the cynical business plan: a product for medical offices to "rebill" insurance to maximize revenue. The value is clear and nobody dies if it screws up.
(3) Worked at another startup that was training CNNs to read all sorts of documents and datasets you see in corporate environments. That summer I had a methodology I called "predictive evaluation" and a sheaf of notes proving that variations of the system we had weren't really going to work (though they did get it working well enough for at least one customer). There was a meeting where we talked about BERT and I said "that seems to avoid all my objections", but the team was through with developing new models. My methodology would have underestimated what BERT could do anyway, because it didn't give credit for getting the right answer by the wrong method! Turned out transformers also fixed the problems those RNNs had.
I don't mind AI. What I don't like is the complete saturation of communication channels with the same mainstream ideas and products over and over. This started long before AI slop.
Yep. Then you run into the issue of where to store the secret encryption key.
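To make the bootstrapping problem concrete, here's a minimal sketch of envelope encryption using Node's built-in `node:crypto` module: each record is encrypted with its own data key, and the data key is wrapped by a key-encryption key (KEK). The helper names and layout are illustrative; the point is that the KEK itself still has to live somewhere (env var, KMS, HSM), so the "where do I store the key" question never fully goes away, it just moves up one level.

```typescript
// Envelope encryption sketch (illustrative, not production-hardened).
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// AES-256-GCM with a 12-byte IV; output layout is [iv | authTag | ciphertext].
function encrypt(key: Buffer, plaintext: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const body = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), body]);
}

function decrypt(key: Buffer, blob: Buffer): Buffer {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const body = blob.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(body), decipher.final()]);
}

// The KEK is the unavoidable root secret: in practice it comes from a KMS,
// an HSM, or an environment variable. Here it's random for demonstration.
const kek = randomBytes(32);

const dataKey = randomBytes(32);            // per-record data key
const wrappedKey = encrypt(kek, dataKey);   // stored alongside the ciphertext
const ciphertext = encrypt(dataKey, Buffer.from("user record"));

// To read: unwrap the data key with the KEK, then decrypt the payload.
const unwrapped = decrypt(kek, wrappedKey);
console.log(decrypt(unwrapped, ciphertext).toString()); // "user record"
```

Rotating the KEK only requires re-wrapping the small data keys, not re-encrypting every record, which is why this pattern is common even though it doesn't eliminate the root-secret problem.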
Security researchers always need to give an answer whenever there's a security incident and the answer can never be "too much centralization risk" even when that is the only reasonable answer. You can't remove centralization risk.
IMO, the future is this: every major centralized platform will be insecure in perpetuity, and nothing can be done about it.
My project https://github.com/socketCluster/socketcluster has been accumulating stars slowly but steadily over about 13 years. Now it has over 6k stars, but that doesn't seem to mean much as a metric nowadays. It sucks having put in the effort, only to see it get lost in a sea of scams and to see people doubt my project's authenticity.
It does feel like everything is a scam nowadays though. All the numbers seem fake; whether it's number of users, number of likes, number of stars, amount of money, number of re-tweets, number of shares issued, market cap... Maybe it's time we focus on qualitative metrics instead?
That’s okay. I’m there with you too, with about the same cumulative star count.
I measure my own projects by the enjoyment I got out of them. No sense in chasing validation from others when one’s only meaningful metric will forever be what’s in their own control.
We need Universal Basic Income (UBI), and we have the right to demand it:
- LLMs were trained on OUR copyrighted works and OUR open source code, which was licensed for human use (the MIT license explicitly grants permission to "any person").
- A monetary system that has been centralizing opportunities and creating an asymmetric playing field due to the Cantillon Effect caused by government and institutional money creation.
Either of these points on its own entitles us to as much UBI money as we need.
I think even without AI or any technological progress, the monetary system is itself enough to create the kind of massive centralization that we've been seeing. People have been saying that for years before LLMs. People are now blaming AI for the fact that some people can't get jobs but it's not the root cause.
Software devs won't be able to get jobs as plumbers either because the plumbing sector in many countries has become insanely regulated... Society has been fundamentally corrupted.
I only see two ways forward:
- Communism with UBI (closer to what we have now)
- Abolish all regulations and have Capitalism again.
SaaS needs to be reinvented. We need backend platforms which provide more security controls, more flexibility in terms of data-sharing, seamless access by AI agents with advanced access controls; e.g. some agents can define schemas, some agents read data, other agents write data, some agents curate data... And custom app frontends can be generated on demand and integrate data from many different sources. This is what I've been working towards with https://saasufy.com/
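The per-agent access model described above can be sketched as a simple capability table. To be clear, this is a hypothetical illustration, not the actual Saasufy API: the agent names, action names, and `canPerform` helper are all made up to show the shape of the idea (schema-defining agents, read-only agents, writers, curators).

```typescript
// Hypothetical sketch of per-agent capability checks for a backend platform.
// None of these identifiers come from a real API; they illustrate the model.
type Action = "defineSchema" | "read" | "write" | "curate";

// Each agent is granted an explicit allowlist of actions.
const agentRoles: Record<string, Action[]> = {
  "schema-agent":  ["defineSchema"],
  "reader-agent":  ["read"],
  "writer-agent":  ["read", "write"],
  "curator-agent": ["read", "write", "curate"],
};

// Deny by default: unknown agents get an empty capability set.
function canPerform(agentId: string, action: Action): boolean {
  return (agentRoles[agentId] ?? []).includes(action);
}

console.log(canPerform("reader-agent", "write"));   // false
console.log(canPerform("curator-agent", "curate")); // true
```

The key design choice is deny-by-default: an agent can only do what its allowlist names, so a misbehaving or prompt-injected agent is bounded by its role rather than by its own judgment.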
I tried my hand at coding with multiple agents at the same time recently. I had to add related logic to 4 different repos. Basically an action would traverse all of them, one by one, carrying some data. I decided to implement the change in all of them at the same time with 4 Claude Code instances and it worked the first time.
It's crazy how good coding agents have become. Sometimes I barely even need to read the code because it's so reliable and I've developed a kind of sense for when I can trust it.
It boggles my mind how accurate it is when you give it the full necessary context. It's more accurate than any living being could possibly be. It's like it's pulling the optimal code directly from the fabric of the universe.
It's kind of scary to think that there might be AI as capable as this applied to things besides next token prediction... Such AI could probably exert an extreme degree of control over society and over individual minds.
I understand why people think we live in a simulation. It feels like the capability is there.
Hmmm. Have you used Claude Code for coding? I'm not saying it's always accurate but for a lot of coding tasks, it's insanely accurate. It's like mind reading.
Like for complex bugs in messy projects, it can get stuck and waste thousands of tokens, but if your code is clean and you're just building out features, it's basically bug-free, first shot. The bugs are more like missing edge cases, but it can fix those quickly.
The most versatile and secure no-code backend platform ever created for building complex web apps. The original goal was to bring junior devs on par with top senior devs in terms of application architecture. I've been trying to create a dev experience that avoids any kind of abstract technical hurdles and makes everything as light, declarative and scalable as possible. Pivoted for AI, which is even better at using it than a junior dev. I started building this project piece by piece 15 years ago.
It feels like short-term thinking has been trained into LLMs.
They're good at solving well-defined puzzles under time constraints. It's interesting because that was the benchmark for hiring software engineers at big tech. The tech interview was and still is about fast puzzle-solving. Nothing about experience, architecture or system design in there... I suspect that's why it has a bias towards creating hacks instead of addressing the root cause.