People have been saying this since the 80s. The reality is that without open source, this industry would be tiny compared to what it is. Time and again, open source has enabled an entire sub-industry (e.g., ISPs in the 90s, databases, SaaS in the 2010s, now AI). And most of it is someone solving a problem that was worth solving for their own use, and for whatever reason made no sense to commercialize by selling licenses.
> on the backs of ten thousands of now-burnt-out maintainers.
Money isn't the motivation for most "free" open source. If it were, the authors would release it as commercial software and maybe as "source available". That someone can use open source to build businesses has been the engine for the entire industry. In other words, the idea that maintainers quitting is some problem that could be fixed if we only paid them is a non sequitur. A lot of it is that people age out, get bored with their project, or simply want to do something else. Not accepting money for maintaining open source is a good way to ensure it stays something you can walk away from, and something where the people attached to the money have zero leverage.
I do think that a lot of maintainers struggle with pushy and sometimes nasty people that take the fun out of what is a "labor of love."
> exploiting entities have never shared substantial or equitable profits back.
If I want to make money, I sell commercial software, SaaS or PaaS.
> they must compensate the creators proportional to the library's footprint in their codebase and/or its execution during daily operations
One of the more interesting uses of open source is to level the playing field. For example, there was a time when databases were silly expensive. Several open source products emerged that never would have been viable commercially without the long-term promise of "free" and the assurance of having source code. A license with a cost bomb in it would just ensure that people chose something else.
I'm not following the file disclosures in particular detail; I haven't yet heard any disclosures from the files that are evidence of kompromat. What are you thinking of?
OK, so hearsay from Epstein. Which could easily be reliable. And could also not be. Also unclear precisely what "dirty" means. So sure, it adds to the pile, but we're not there yet, IMO.
Notice how this information is on a publicly accessible web server?
This is quite an unusual definition of "kompromat".
Look, when it comes to Trump, at this point, there likely is no "kompromat" that would affect him. He could have raped and murdered a 5 year old at the behest of Russian intelligence services, and his MAGA base would defend it as somehow part of God's grand plan for the republic. What is there that could be revealed about him at this point that would actually change anything?
The man was found liable for sexual assault, recorded boasting about sexually assaulting women, found liable for real estate fraud... and re-elected. Maybe you can imagine some sort of "kompromat" that would actually impact him in a way that diminished his power, but I cannot.
Epstein was notorious for having compromising photos and videos of just about everyone who did anything compromising in his periphery. Are we shocked that Trump's own DOJ has not revealed a smoking gun? Do we forget that his name has been redacted by his own DOJ?
I worked on a couple of projects with state workforce development agencies and federal agencies. I was always impressed with how much focus there was on the integrity of unemployment numbers, and especially with the emphasis on making sure methodologies ensure that data from the late 1800s can be compared against modern data.
> Now companies selling LLM coding agents enter the scene, promising to eliminate their customers' dependence on the commons, and whatever minimal obligations they had to support it.
This is misguided. Maintenance of LLM code has a far greater cost than generating it.
> They prefer a future where computer programs are purchased by the token from model providers to one where they might have to unintentionally help out a competitor.
I don't think that's even a thought. The thought is that "no one can tell me no".
The longevity of code depends at least on whether it's a product or a service.
Services are what the majority of devs already work on and maintain. There's almost no incentive for anyone to use LLMs for that outside of startups. They do indeed last a long time because the code is as fundamental to the recurring revenue of the business as their legal or accounting or marketing. Devs make changes according to the evolving needs of the business, and "productivity" isn't as much of a priority as accuracy and reliability. The implementation details are very relevant to the business, especially for B2B services that need to meet compliance requirements.
Products, however, have always been disposable code written by people being thrown into a meat grinder. I don't think LLM-generated code is better, but it's probably not that much worse either.
> This is misguided. Maintenance of LLM code has a far greater cost than generating it.
I agree. I'm just observing what they're doing.
> I don't think that's even a thought. The thought is that "no one can tell me no".
I doubt there's any one thought driving things. I didn't mean to imply the existence of some grand strategy or scheme. The preference I speak of isn't of any person, it's the direction pointed at by incentives and circumstance. Companies will make decisions to steer clear of helping competitors. Separately, they signal great interest in replacing costs spent on labor with costs spent on services. See the transition to cloud. The result is the preference of a world where code is like gasoline, purchased from a handful of suppliers for metered cost.
I of course cannot say what the future holds, but current frontier models are - in my experience - nowhere near good enough for such autonomy.
Even with other agents reviewing the code, good test coverage, etc., smaller - and every now and then larger - mistakes make their way through, and the existence of such mistakes in the codebase tends to accelerate the introduction of even more of them.
It for sure depends on many factors, but I have seen enough to feel confident that we are not there yet.
You have two paths: code tests, and AI review, which is just a vibe check of the LGTM kind. You should use both in tandem. Code tests are cheap to run, and you can build more complex systems if you apply them well. But ultimately it is the user, or actual usage, that needs to direct testing - or you pay the price of formal verification. Most of the time it is usage: time passing reveals the failure modes, and hindsight is 20/20.
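To illustrate the "cheap to run" half of that tandem, here is a minimal sketch of a deterministic code test (the `parse_price` function and its behavior are hypothetical, just for illustration):

```python
# Hypothetical function under test: parses a price string like "$1,234.56"
# into an integer number of cents.
def parse_price(text: str) -> int:
    cleaned = text.strip().lstrip("$").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

# Cheap code tests: fast, deterministic, runnable on every change.
# An AI review might say "LGTM"; these either pass or they don't.
def test_parse_price():
    assert parse_price("$1,234.56") == 123456
    assert parse_price("0.99") == 99
    assert parse_price("$5") == 500

test_parse_price()
```

Tests like these catch regressions mechanically on every change; the AI review and real-world usage are left to cover what the fixed assertions cannot anticipate.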
1. They can skip impressions and go right to collecting affiliate fees.
2. Yes, the ad has to be labeled or disclosed... but if some agent consumes it and no one sees it, is it really an ad?
Sometimes a government only cares about a few citizens, or in some cases one citizen.