I am running a bunch of autoresearch loops that optimize various compilers, and it's pretty easy to burn through as much money as you want if you have a measurable goal and good tests.
I have both of those, yet apparently I'm not setting my goals in a way that supports "endless inference" like that. My goals eventually come to an end, and that's when I move on. Optimization sure sounds like something you can throw a good amount of tokens/quota at, so yeah.
An extreme example: when I make interactive educational apps for my daughter now, I just have Opus use plain JS and HTML; from double pendulums to fluid simulations, it works in one shot. Before, I had hundreds of dependencies.
Luckily, with MIT-licensed code I can just tell Opus to extract exactly the pieces I need, embed them, and tweak them for my use case. So far this works great for hobby projects, but hopefully in the future production software will have no dependencies.
The problem with this is that now you are solely responsible for managing all of the changes, all the variability of the real world. Chrome changed the shape of this API; you are responsible for finding it and updating it. Morocco changed when their daylight savings took effect; now you need to update your date/time handling code. There are a lot of these things we take for granted because our libraries handle them for us, and with no dependencies you have to do all the work. Not a big deal when making a double-pendulum simulator for your daughter to play with that will stop mattering next week, but it is a concern for a company trying to build something that can run indefinitely into the future.
> you are responsible for finding it and updating it.
vs the dependency broke something and now you're responsible for working around someone else's broken code.
Honestly, I've seen much more of the latter, especially nowadays, with every single dependency thinking it's a fully fledged OS because an agent can add 1000 features/bugs in no time. Picking the right dependency, maintained by a sane maintainer, is like digging potatoes in a minefield.
As a general principle, I agree with you that large companies and teams benefit from common runtimes (i.e. libraries and frameworks).
I don't buy the notion of things breaking down over time, though. For "first-party" code that sticks to HTML and CSS standards, and Stage 4 / finished ECMAScript standards, the web is an absurdly stable platform.
It certainly used to be that we had to do all sorts of weird vendor hacks because nobody agreed on anything, supporting IE6 and 7 was a nightmare, and BlackBerry's browser was awful, but those days are largely behind us unless you're doing some cutting-edge Chrome-only early-days proposed stuff, a browser-specific extension, or something else that isn't a polished standard.
Even with timezone changes, you're better off using the system's information with Intl.DateTimeFormat.
I don’t know where the fear of breaking changes in deps comes from; most good projects try to keep their API stable, even fast-evolving platforms like the Android and iOS SDKs.
In the Python ecosystem, making software with reproducibility in mind was a thing before the advent of uv. Earlier options include Pipenv and Poetry. I used Pipenv some six years ago to achieve that, and later switched to Poetry.
I think devs who didn't care back then also won't care in the future, and will still be running around with requirements.txt files in 10 years.
In companies, though, you often wind up with three+ massive dependency trees in your software to handle the same problem because people went and added the new hotness without deprecating the old stuff. You also find dependencies that are much heavier than necessary for the actual task at hand because the software developer was also solving the problem of needing that dependency on their resume. And then there's just the relatively tiny dependencies for fairly solved problems, like leftpad, which don't really require deps, and you can accept the maintenance burden, because not everything is an abstraction layer over chrome.
So if you just need to do something simple like fire off a compute heavy background task and then get a result when it is done, you should probably just roll your own implementation on top of the threading API in your language. That'll probably be very stable. You don't need a massive background task orchestration framework.
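In Go, for instance, the roll-your-own version is a few lines on top of goroutines and channels. A sketch, not a battle-tested library; the name `runAsync` is mine:

```go
package main

import "fmt"

// runAsync fires task on its own goroutine and returns a channel that
// delivers the result when it's done. The buffer of 1 means the goroutine
// can finish and exit even if the caller never reads the channel.
func runAsync[T any](task func() T) <-chan T {
	out := make(chan T, 1)
	go func() { out <- task() }()
	return out
}

func main() {
	// A compute-heavy stand-in: sum the integers 1..1,000,000.
	result := runAsync(func() int {
		sum := 0
		for i := 1; i <= 1_000_000; i++ {
			sum += i
		}
		return sum
	})
	fmt.Println(<-result) // blocks until the task finishes; prints 500000500000
}
```

That's the whole "framework". Retries, queues, and persistence can be layered on if and when you actually need them.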
People might object that the frameworks will handle edge cases that you've never thought of, but I've actually found in enterprise settings that the small custom implementations--if you actually keep it small and focused--can cover more of the edge cases. And the big frameworks often engineer their own brittle edge cases due to concerns that you just don't have.
So anyway, it isn't as simple as "dependencies are bad" or "dependencies are good"; every dependency has a cost/benefit analysis that needs to go along with it. And in an enterprise, I'd argue that if you audit the existing dependencies you will find way too many that should be removed or consolidated, because they were added for the speed of initial delivery and greenfielding. Eventually, when you accumulate too many of those dependencies, the exposure to supply chains, the need to keep them updated, the need to track CVEs in those deps, and the need to fix code to use updated versions of them, along with not having the direct ability to bugfix them, all combine to produce an ongoing tax of either continual maintenance or tech debt that will eventually bite you hard.
> The problem with this is now you are solely responsible for managing all of the changes
We seem to greatly overestimate the amount of code needed to do something.
For example, there are billions of lines of code between me pressing a key and you seeing what I wrote. But if we were to write a special-purpose program that communicates via IPv6 and ICMP, targeting the Hazard3 cores on an RP2350 with a WIZnet W5500 ethernet breakout, the whole thing, including the C compiler to compile your code (which could very well outperform gcc -O3), would be 5-6k lines of code, including register allocation, barebones SPI drivers, and even a small preemptive OS.
So, it is not unreasonable to manage all of those changes.
I think we are stuck relying on LLMs for this. They are already at a place where they can find these issues in the first place. They can read RSS feeds; you could cron an agent to check whether you are pwned as frequently as you want, at almost zero cost. When you do ingest the libraries, keep a list of what you took and at which version, and that helps as well.
Yes, I trust my LLM codegen and review process far more than the code I was never going to read from all of my transitive deps and every sequential update to them forever.
This is a trivial bargain for most libraries we were using not long ago out of convenience. Like a library just for setting ansi colors for your TUI.
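For concreteness, the whole "ANSI colors" dependency can collapse to a few lines. A sketch in Go (31 and 32 are the standard SGR foreground codes for red and green):

```go
package main

import "fmt"

// colorize wraps s in an ANSI SGR escape sequence; the trailing
// \x1b[0m resets the terminal back to its default attributes.
func colorize(code int, s string) string {
	return fmt.Sprintf("\x1b[%dm%s\x1b[0m", code, s)
}

const (
	red   = 31 // standard SGR foreground color codes
	green = 32
)

func main() {
	fmt.Println(colorize(green, "PASS"), colorize(red, "FAIL"))
}
```

A real TUI library also handles NO_COLOR and non-TTY output, but for many tools a single check of `os.Getenv("NO_COLOR")` covers that too.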
Ideally you have minimal deps scoped to the truly hard things: libghostty, btrfs, luks, postgres, etc. Then you focus on the application and generate the mechanical glue code on demand with a solid harness that keeps the important stuff well-tested.
Though you’ll need to figure out how to build that harness/process before it really delivers.
I am torn, because I like Rust over Go, and Rust is better from an LLM perspective. But Rust's dependency philosophy is basically a security black hole, whereas Go's is much better.
A portion of the context and vibe-checking that would otherwise be required is offloaded to the compiler. In addition, Rust binaries are generally smaller, in both binary size and memory footprint.
I sort of agree with you, but I prefer Go because I believe that for most use cases it fits perfectly (I run a $7/yr VPS with 500MB of RAM on Debian and deploy Go binaries).
Cross-compilation, portability, the very-few-dependencies/stdlib-first approach, the simplicity: I just really love Go.
I built[0] a cuckoo.org alternative at https://fossbox.cloud which has only one dependency aside from the stdlib: gorilla/websocket.
If I were to rewrite it in rust, I couldn't say the same. Golang's stdlib is that good.
My point is, although I understand Rust can have advantages in other areas, for me the advantages of Go outweigh Rust by a very high margin. There is also the factor that I just feel more comfortable reading and picking through Go code than Rust.
It is my opinion that you can go a very, very long way with a garbage collector, further than people imagine, even on constrained systems. Unless it's absolutely necessary, avoiding the GC is in many instances a premature optimization, which is worth thinking about.
[0]: More like vibecoded? This is just a single-file main.go which I prompted out of gemini 3.1 pro some time ago. It was just a prototype that works surprisingly well; I made it because I was using the cuckoo website with friends but it kept lagging.
Well, I have almost the same story: my agent harness is a 5MB Rust binary that runs as a systemd service and occupies 10MB of memory after days. It handles all communications between 100+ agents.
Now, I think Go would come close to those numbers, so in reality there might not be a real difference. But a leak somewhere is far more likely, especially as these are mostly vibe coded (my binary has a lot of functionality).
The biggest advantage Go has over Rust is the stdlib and an ecosystem that doesn't depend on 100 packages. Maybe that will be the deciding factor in the future, or someone (I'm getting increasingly itchy for it) will need to reinvent the ecosystem to be less like npm.
You can encode so much design and intent in the Rust type system. It’s one of the best things about Rust.
I prefer to write Go if I were doing everything by hand. But now everything is Rust. And a quick scroll through my Rust types, the discriminated union types, the discriminated error types, the high level application types, it’s just so much better for spec’ing out a system and leaving no question about what some bit of code is trying to do and the states it’s trying to prohibit.
And with an LLM, the hard things about Rust that would’ve had me asking questions on IRC are not hurdles anymore.
Granted, it has its own cultural NPM/RubyGem dep spam problem when you watch cargo install’s output.
I think in the relatively near future we’re going to start seeing sophisticated supply chain attacks into language model training data.
It should be feasible to design vulnerabilities that look benign individually in training data, but, when composed together in the agent plane and executed in a chain, introduce an exploit.
There’s nothing technical really stopping that from existing right now. It’s just that nobody has put the effort in yet.
The develop-test-refine feedback loop for this kind of attack is so long (or expensive) that it seems likely to limit its real world use. Poison training data, wait months? a year? for the model to come out, see how well it worked, refine... or do you see a faster way to iterate?
It's a tool for building things. I can build those things equally well with or without it, maybe saving some time with it (arguably), but I'm not dependent on it.
No, I'd posit the average developer who pulls in hundreds of deps but now uses LLMs to effectively replace them can not build things equally well without either.
Of course most devs lie to ourselves, out of ego, that pulling in deps is /just/ a time-saving measure, when of course we know there are some incredibly high-quality libraries and frameworks that we don't have the skills or experience to replicate to the same level.
Comments like these are so incredibly far-fetched from reality. Are you really going to implement your own PyTorch? Why even compare your cute examples to enterprise solutions?
I personally noted that I'm starting to use some LLM idioms "it's not just .. it's .." and I don't like it. I'm actually trying to stop using computers and read books to replenish my mind with more diverse idioms.
same, I also try not to read claude's output that much, and I have a copy of Gibson's Mona Lisa and just open it while it is thinking; for music and even for CS stuff, I search with before:2022 on youtube
but the ship has sailed :)
there is no hiding from it
of course the content we consume modifies us, but now everybody "reads" the same book, whatever they read.
I write bad, but my text editor is putting little grammar and spelling squiggly lines under everything and I click through them and end up with very AI-like text. My emails even end up with emdash in them. It’s to shrug. You don’t know if text today is completely prompted or is just cleaned up by modern grammar and spell checkers?
Sure I do. You're almost good enough, at pretending to lousy construction, to have fooled me. Use more words next time; the semiliterate invariably mistake volume for quality.
> Software is quietly becoming a probabilistic system, and almost no one is saying it out loud.
AI generated, or at least heavily edited, would be my guess. Although I'm with you: at this point it's hard to tell. I'm seeing those AI filler phrases and overused constructions like "here is what actually happening" more and more, and not only on blog posts but on social media, video content, podcasts.