I can feel the frustration; nothing dramatic about expressing it.
This quote from the post resonated with me:
> I want to get work done and it doesn't want me to get work done. I want to ship software and it doesn't want me to ship software.
The sentiment is shared, and GitHub is not the only service making me feel like that; it feels like everything on the web is more flimsy and low quality nowadays. Constant outages, bugs, UI papercuts, incomplete features: what in the world is going on?
I suspect it isn't even really "greed". It is just the slow mold growth of an org chart optimizing comfort for itself instead of value for customers. Generally, startups/founders are the only antibodies against this type of behavior.
What a weird time for our industry. On one hand, small teams have never been able to move faster than right now.
On the other, the economy and market conditions are brutal for the little guys. Incumbent behemoths hoovering up value, talent and financing.
Instead of shaking things up as usual when a major paradigm shift hits, AI has mostly been a centralizing, consolidating force. Not that I was expecting it to be otherwise, but it's certainly dismaying to witness.
Or am I being too pessimistic / glorifying the past?
It's easier than ever to make your own furniture. IKEA is bigger than ever.
It's easier than ever to publish a video game. Steam is bigger than ever.
It's easier than ever to 3D-print tractor parts. John Deere is bigger than ever.
It's easier than ever to switch to solar power. The petroleum industry is bigger than ever.
One person reverse-engineered Coca Cola, made an exact taste-alike and published the formula. You can make some at home. Coca Cola is bigger than ever.
The hidden cost of competing in these industries is insane. It's so hard to build a physical product that can compete against a giant like IKEA. You have to build something with less R&D, less automation, and less infrastructure; you're going to sell fewer units; and all of that needs to be price-competitive against something made on a production line by a team of experienced engineers and sold to millions at fine margins.
That depends, doesn't it? If I make it, it costs time instead of money. (Costs of tools are amortized over all the things I might make.) If I get it from IKEA, it costs money instead of time.
> It's easier than ever to publish a video game. Steam is bigger than ever.
In this case: these statements aren't contradictory, they're complementary. It's easy to publish a game on Steam, where the audience and the money are. It's also easy to publish on itch.io, where no money is.
I think the org-chart impact comes down to how an individual can advance their career while doing good work. If people only get rewarded for new things, service and maintenance suffer.
By lowering costs and not investing profit back into the company? Yes, it's short-term vs. long-term, but who in this world cares about anything after their next salary?
If you have the choice to sell out or not sell out, the only logical decision is to sell out, because then you'll have lots of money and one presumes the product wasn't emotionally important to you. You can then move on to making your next product.
Couldn't have said it better. Whatever else you want to build in life will be exponentially easier if you sell out first, and many builders have many things they want to build and not just one.
Focus on open protocols, simple formats over complex vendor-specific cruft. Then you can always "fork" away from an enshittified saas.
I bet a small team of the quality of the kind of developers who are attracted to hacking on Ghostty could recreate the subset of GitHub functionality they actually need in ~six months. The problem is just how to pay for the ongoing care, maintenance, and hosting. Maybe another opportunity for Mitchell's particular brand of philanthropic OSS.
DNS is the cause of all problems, but it's also the solution: just like anyone can run Apache or Nginx, anyone should be able to run a git setup. Then it scales really well, as everyone is doing their own thing on their own domains.
Of course, you lose out on some things like ease of user access and various protections.
You don't need this. Git and a local drive. Git and a shared drive. Git and an HTTPS endpoint (can be a plugin to Apache/Nginx, not a full GitHub-like solution). Git and SSH, even if people just use username/password.
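For instance, a bare repository on any path you can reach works as a perfectly good remote. A minimal sketch (the paths are made up; a temp dir stands in for a shared drive):

```shell
# Use any path everyone can reach; here a temp dir stands in for a shared drive.
SHARED=$(mktemp -d)

# A bare repository on a local or shared drive acts as the "server".
git init --bare "$SHARED/project.git"

# Everyone clones from, pulls from, and pushes to that path;
# no hosting service is involved.
git clone "$SHARED/project.git" "$SHARED/checkout"

# The same idea works over SSH with nothing but git installed on the host:
#   git clone user@host:/srv/git/project.git
```

No web UI, no issues, no CI, of course, but for plain code hosting this is the whole trick.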
> it feels like everything on the web is more flimsy and low quality nowadays.
Not just the web either. It feels like the whole world is in a race to throw shit together and cash out as quickly as possible: influencers, hustle culture, enshittification, etc.
My pet theory is that all of the global chaos around the climate, politics, pandemic, etc. is leading people to no longer believe in the future. Once you lose that, all that's left to care about is the right now. No one takes the time to scrimshaw the deckrails on a ship they believe is sinking.
And you, my father, there on the sad height,
Curse, bless, me now with your fierce tears, I pray.
Do not go gentle into that good night.
Rage, rage against the dying of the light.
We can't really change the tide (lest we end up like King Cnut), but we can at least put time and effort into the things we do to fight entropy, bringing more order and durability into our lives.
Or perhaps another adaptation:
God, grant me the serenity
to accept the enshittification I cannot change
the courage to improve the things I can
and the wisdom to know the difference.
We can; the tide changed to where it is now and can change again - and somebody will change it.
People need to stop bemoaning it, and think and do something. The enshittification is an idiotic, failing, extremely short-sighted strategy.
It's a huge opportunity - your competition has stopped investing in its product, fired its talent, treats its customers with utter contempt, and is managed by imbeciles. Who is a better target for disruption? Hire the talent, market your quality, treat your customers with respect, point out the BS your competition does every time they do it. Stop staring at your navel.
One way it's possible is if the US ditches the two party system. We are starting to see some cracks in the partition recently with the Epstein files and the Israeli genocide. People from both sides are starting to realize they share a lot of common ground.
Leaving aside the reductionism, the difference is that we are already seeing the effects of the "bad weather" and we all lived through [1] (and, to a point, are still feeling the effects of) the "flu". No "ifs" about it.
There's no need to worry about a threat that has been theorized for 70 years (and may very well never happen) when there are actual, real catastrophes happening right now.
[1] Well, except for those who didn't make it through. They are not here by definition, but their memory is still fresh.
> I can feel the frustration, nothing dramatic about expressing it
I think the "ridiculously dramatic" part is the whole love letter to GitHub, not the frustration.
And I think it is fair to say that it is ridiculously dramatic. Which is okay, of course, I'm not criticising here. Just like it would feel ridiculously dramatic (at least to me) if someone explained that they cried today when they stopped their subscription to Netflix in order to move to another service, because they love Netflix so much.
The difference here is _creative_ work vs consumption. Craftspeople like Mitchell feel passionately about the tools they rely on to build. Github has also been a social place for builders.
I don't think it's ridiculously dramatic to feel sad about great tools rusting or makerspaces closing...
Again, I am not criticising the feeling. It's okay to feel the way we feel.
I am just saying that when Mitchell mentioned it being "ridiculously dramatic", I think he was not talking about the frustration but rather about the fact that he cried about leaving GitHub.
It's okay to feel sad about something and to also feel that it's ridiculously dramatic to feel sad about it.
>The sentiment is shared, and GitHub is not the only service making me feel like that; it feels like everything on the web is more flimsy and low quality nowadays. Constant outages, bugs, UI papercuts, incomplete features: what in the world is going on?
Have you ever tried to run anything from the '80s/'90s era? Segfaults everywhere, "fatal error was successful", kernel panics, BSODs, screen freezes for any reason and its opposite.
Nothing serves the good old days better than a bad memory, as they say.
Not that shipping a gigabyte of useless crap to show what is essentially a few KB of text is fine, but the abuses and horrors that humans commit have just shifted a bit in where they land; it's not like there was ever a time when we had a land free of dirty human stuff.
Yes, in a sense: there are always things we can find annoying, because it's easy to fantasize about a version of the situation where those issues are absent. Actually making the annoyances disappear is the hard part, and the new situation has a great chance of being bound to different annoyances the fantasy didn't anticipate. So the NP-hard problem is being critical of our own anticipations to avoid paths into bigger trouble, keeping a steady effort on walking the path, all while paying attention to the current sensory feedback from the road.
React gets blamed for this because the error handling is bad and the UX is confusing. But the issue with GitHub's frontend is that the backend is dropping requests. When you click a button on GitHub and the loader gets stuck, that's because there's no timeout/error handling in the JavaScript, but there's also no reply from the server. I feel like React is getting a bad rap because it's the visible part, when the issue is clearly their backend.
> React gets blamed for this because the error handling is bad and the UX is confusing
Yes, it does.
> React is getting a bad rap because it’s visible when the issue is clearly their backend.
Two things can be bad! Except that in this case one of them is unnecessarily bad, because nobody forced them to use a front end system which defaults to terrible failure handling.
It's also not tautological that React apps have bad error handling. You can do proper error handling and retry logic in React, and I can't for the life of me understand why GitHub engineers making several hundred thousand a year in cash and at least that much in stock simply... don't?
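Nothing about this is GitHub-specific, but as a sketch of the kind of defensive layer the comment has in mind: a fetch wrapper with a timeout and bounded retries that a React data hook could sit on. The helper name and parameters are hypothetical, not any real library's API:

```typescript
// Hypothetical sketch (not GitHub's code): bounded retries plus a timeout,
// so a failed request surfaces an error instead of an eternal spinner.
async function fetchWithRetry(
  url: string,
  opts: { retries?: number; timeoutMs?: number } = {},
  fetchFn: typeof fetch = fetch,
): Promise<Response> {
  const { retries = 3, timeoutMs = 5000 } = opts;
  let lastError: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    // Abort the request if the server never replies within the timeout.
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetchFn(url, { signal: controller.signal });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res; // success: hand the response to the UI
    } catch (err) {
      lastError = err; // timeout, network failure, or bad status: try again
    } finally {
      clearTimeout(timer);
    }
  }
  // After the last attempt, surface a real error state to render.
  throw lastError;
}
```

The point isn't this specific helper; it's that a stuck loader is a choice. The client can time out, retry a bounded number of times, and then show an error.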
It's no wonder my jobs feed is flooded with senior engineering positions at GitHub (one wonders if they're growing, or jettisoning dead weight) but I can't imagine it's a good look for the resume to put GitHub on it at this point.
What's hilarious about that script is that the solution is so simple: use a less-than comparison instead of an equals. That's really, really all it would have taken to fix the issue. And yet https://github.com/actions/runner/pull/3157 was opened on 2024-02-17 and was merged on 2025-08-21, a full 18 months (plus a few days) later! It took literally 18 months for them to merge a bugfix that is trivially obvious to see is correct.
Yeah, the problems at GitHub ran (and still run) deep.
P.S. Yes, there are busy-wait issues in that code, which should have been addressed by bringing back the check for the `sleep` command and using it if available, falling back on the CPU-burning busy-wait only if `sleep` was unavailable. But the most revealing thing is the 18 months to merge a trivial-to-verify PR. That, more than the bad busy-wait loop, is the fundamental indicator of brokenness at GitHub under Microsoft's ownership.
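To make the equals-vs-less-than point concrete, here is a generic wait-loop illustration of that bug class; it is not the actual runner script, and the values are made up:

```shell
# Generic illustration, not the actual actions/runner code.
target=5
count=0

# Fragile: an inequality test never terminates if the counter can step
# past the target without ever equalling it (e.g. increment > 1):
#   while [ "$count" != "$target" ]; do count=$((count + 2)); done

# Robust: a less-than comparison exits as soon as the target is reached
# or passed, regardless of step size.
while [ "$count" -lt "$target" ]; do
  count=$((count + 2))
done
echo "$count"   # first value at or past the target: 6
```

A one-character class of difference, which is what makes the 18-month merge time so damning.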
This is surprising to me, I would have bet money that all the people who actively engage in this type of language/framework war discourse were all drawing Social Security by now.
There's a big difference between a war between two somewhat equivalent things that make different choices (editor wars, language wars, etc.) vs pointing out that certain things are really fundamentally ... not good. IMO we all need to be much louder and clearer about how bad things are, and how much better they could be.
This is, in fact, on topic: GitHub Actions seemed like a bad idea to me from the start, but I let my co-workers and "network effects" convince me that I was being grumpy and that it was fine, and so we adopted it. And now... here we are. It was exactly as bad as I thought it was, and it reflected a broken engineering culture.
It is certainly possible that you are brilliant and your co-workers and the industry writ large are all morons. That you were right all along, and chickens roosting and all that. Though it seems at least equally likely that this is not the case.
If you think it requires "brilliance" to figure out that Github Actions is really bad, and/or that "the industry writ large" always makes good decisions, you might be the problem!
After yesterday's outage they admitted that their elasticsearch index for issues/prs lost data.
They seem to have changed the primary source of data in the issues and pull requests tabs (w/o filters applied) from the underlying database to the elasticsearch search index, which has the side effect that there's a noticeable delay between state change of an issue/pr and an update in the UI. But as seen today, these can get out of sync, and apparently they even had data loss in the index.
I would really like to know their reasoning for making that change. I can totally imagine that they wanted to "simplify" so the UI uses only a single data source instead of two.
As a user it's incredibly annoying to have a delay between issue/pr state changes and the search index picking it up.
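The delay described here is ordinary eventual consistency when reads move from the source of truth to an asynchronously updated index. A toy model (every name is hypothetical; nothing here is GitHub's actual pipeline):

```typescript
// Toy model of serving reads from an asynchronously updated search index.
type Issue = { id: number; state: "open" | "closed" };

const database = new Map<number, Issue>();    // source of truth
const searchIndex = new Map<number, Issue>(); // catches up with a lag

function closeIssue(id: number): void {
  const issue: Issue = { id, state: "closed" };
  database.set(id, issue); // the write is immediately durable here
  // The index is updated later; until then, index readers see stale state.
  setTimeout(() => searchIndex.set(id, issue), 50);
}

// A list view that reads only from searchIndex keeps showing "open"
// during that window; reading the database avoids the staleness at the
// cost of maintaining two data paths in the UI.
```

Which is presumably the "simplification" trade they made: one read path, at the price of a visible staleness window and, as just seen, a single point of data-loss visibility.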
What? React has nothing to do with current state of affairs. In fact, React on GitHub currently exists in mere islands, i.e. in Projects and recently in Pull Requests. Most of the frontend is still Web Components[1] paired with Turbo[2] for hot reloading. GitHub is still as slow even with JavaScript disabled, try it yourself. Backend just serves stuff really slow. In fact, there is an alternative GitHub frontend (no affiliation) that feels snappier and is written in React.[3]
With that said, Mitchell complains about outages. These started directly after Microsoft acquisition[4] and are attributed to migration from AWS to Azure.
Pull Requests is the thing that was wacky in the UI yesterday, coincidence or not? I have no idea.
Yesterday we saw PR pages that displayed no error, just displayed wrong info. I would have preferred to get an error page than outdated or empty lists. I was guessing this was related to the React migration but I don't really know.
Also, the browser back and forward buttons no longer work in pull requests when going between PR tabs (commits, checks, files changed, etc) as well as some other site interactions.
Like, what user-hostile intention was the reasoning behind that? I am literally imagining a product manager smoking a cigar and laughing at the RUM session replays of me losing my shit.
Fully agree. We really should punish companies that blatantly push this kind of mercenarism. I mean, every VP and CxO joins a company, takes super short-sighted decisions that push some random metric up a bit, and then leaves with a huge performance bonus, not caring that everything is worse. They won't be around to cope with the fallout, as they are already at another company doing the same.
I am not against performance bonuses, but they should be attached to better metrics. E.g., whether the number of happy users is still up in three years' time. Or something like that.
This is my darkly optimistic take on enshittification:
Companies know how to make good product, but if they don't have "new and shiny" to impress us anymore, then their only alternative is to make things worse so they can heel turn and then make things "better" by unmaking all of the worse things they did.
They can also milk their customers coming and going in the process.
It's not "enshittify or lose", its just raw greed. Things will get better again, either that or a competitor will destroy them. Enshittification is just the current meta and a new one will come soon enough.
I don't think companies know how to make a good product any more. Conway's law won this battle.
I think it's that company management has no incentive to do well. So they have no reason to push this down to the bottom tier of workers who actually make the products. The feedback loop is open. They make an order, the product gets worse, the line goes up, they don't know the product got worse and they have no reason to care anyway.
When is the "get better" step? I've only ever seen two things happen mid- or post-enshittification:
1. The company builds a moat and just remains shit.
2. New entrants either displace the company entirely (most likely) or competition slows the enshittification process (distant second) or reverses it (almost never).
It's not clear to me why "get shitty" is a necessary step to this. What part of GitHub's executives' grand plan is "have a barely-functional service that randomly prevents people from working"?
> What part of GitHub's executives' grand plan is "have a barely-functional service
What about lock-in, being a monopoly? Why wouldn’t you maximize on saving costs? Sure some people leave, but the majority is not going anywhere. And if the platform dies they’ve made more money than to keep it alive.
The enshittification process milks the current product of all of the money that can be wrung from it by any means just shy of immolation.
Companies aren't getting cheap loans right now so they're desperate to juice their stocks so that upper management can secure their bonuses.
That's why "get shitty" is necessary.
When they've wrung it dry, pocketed all of the crumbs of raw cash they can get, then they'll either collapse due to overmilking their products or they'll realize that the only way to refatten the calf is to bring in new customers, so they'll unshittify it for the fresh infusion of customer money.
It's a cycle, and one I predict will inevitably lead to many of these companies' collapse.
Depends on how strong a moat really is, but it can be "enshittify and lose", too. Enlightened (as opposed to short-term) self-interest may pay off after two years or twenty, depending, and in the latter case it may as well not pay off at all as far as a public company is concerned.
I think Microsoft's home game is "monopolize and enshittify". They are the masters and know exactly what amount of enshittification is too much. E.g., Hashimoto quitting GitHub is probably totally worth the 10 SREs they fired. Us plebs cannot go anywhere.
The idea was, move fast and break things - but then pick them up and fix them. Companies realised they didn't really have to fix them properly as the users still stuck around.
Yes, exactly. AI isn't some magic dust that you can sprinkle into your workforce and get more productivity and better results. It is at best a force amplifier for what you already have. If you're making awful and broken products, you will make even more awful and even more broken products at a higher rate than before.
It's not a coincidence that every impressive result done using AI has come from someone with a track record of impressive results before AI. AI isn't magic. It doesn't make you good at stuff you're bad at.
Microsoft had a very specific niche of making completely awful software that wasn't actually broken - in fact, that was often the infuriating thing.
If it just shat the bed completely, you'd have an easy argument to replace it with something else; instead, it would be technically competent (Hi, Raymond!) but covered in stuff that made it infuriating to use (Hi, Redmond!), especially if you didn't live in it day in and day out.
I think it's more people are checked out (and AI is one part of it yes), made worse by orgs who don't know how to lead/manage/change effectively.
FWIW, some people used to (or still do) say similar things: that software is significantly worse because people use "unserious" languages like PHP, Ruby, Python, JavaScript. Those languages brought about so much cool shit that I don't think it's worth saying we should've stuck with only C and Java.
I don't know if it's just because I was young and bright eyed, but it seems like the "passionate nerd" is somewhat absent in modern tech orgs. Seems like, starting around 6 years ago, none of the new hires seem to give a fuck about anything anymore.
That's definitely great for work life balance, and I don't think any less of them for that, but passion seems to be gone.
I would be doing what I do for work whether I was employed or not. That's how everyone I used to work with was. Now everyone seems to do the minimum, with the goal being more to redirect blame than to solve neat problems.
I'm still optimistic. I think the number hasn't gone down, just the ratio. Software still offers a relatively well paid and comfortable career, so you naturally get people who just want to do a good job and that's it. Nothing wrong with that.
Used to be nerds hanging out on IRC, distributing Slackware, hacking trialware, modding games, etc. that had the passion and problem solving determination to do software work, which used to be harder due to lack of access to information.
OTOH, what a great time for a budding engineer. I'm in my mid 30s, and no longer have the same stamina and passion as in my teenage years/20s, but in the last 5 years I've learnt so many things I could not have back in the day. I've learnt and experimented way more around random topics like compilers, OSes, electronics, and databases because of ease of access to information, and AI (:shrug:), even though I have way less free time.
GitHub is going around boasting about how many PRs they generate a day with Copilot with very limited human input. Whether that's true or not, it might have an effect.
When did every company become a feature factory? Was tech ever not like this, or is it just how it works? It seems like they all end up this way, and it's really dumb.
Hardware, I don't know. Possibly always was too, I think even non-tech hardware was pushing more features as an excuse for shorter product lives back around the Great Depression, give or take a decade.
Managers now try to "extract value" quickly, leaving ruins behind them and not caring about the future as the immediate payouts allow them to stick to the "F*k you, I got mine!" paradigm.
It's slop from both sides, they're pretty obviously slopping their move to Azure, and at the same time being slammed with a Cambrian explosion of slop repositories.
Too bad it's not reminiscent of the Hotmail purchase where they tried to move off the BSD servers and ended up with new accounts on the relatively unreliable Windows-based setup, and old accounts routed to the original BSDs.
Genuine question, why have you chosen to phrase this scraping and distillation as an attack? I'm imagining you're doing it because that's how Anthropic prefers to frame it, but isn't scraping and distillation, with some minor shuffling of semantics, exactly what Anthropic and co did to obtain their own position? And would it be valid to interpret that as an attack as well?
I don't think that learning from textbooks to take an exam and learning from the answers of another student taking the exam are the same.
Joking aside, I also don't believe that maximum access to raw Internet data and its quantity is why some models are doing better than Google. It seems that these SoTA models gain more power from synthetic data and how they discard garbage.
Firehosing Anthropic to exfiltrate their model seems materially different than Anthropic downloading all of the Internet to create the model in the first place to me. But maybe that's just me?
I don't see the material difference in firehosing anthropic vs anthropic firehosing random sites on the internet. As someone who runs a few of those random sites, I've had to take actions that increase my costs (and burn my time) to mitigate a new host of scrapers constantly firing at every available endpoint, even ones specifically marked as off limits.
Yes, what the LLM providers did was worse and impacted people financially a whole lot more, both in lost compensation for their works and in operational costs that would never have reached the heights they did if not for scrapers working on behalf of model providers.
Very cool that these companies can scrape basically all extant human knowledge, utterly disregard IP/copyright/etc, and they cry foul when the tables turn.
We should treat LLMs somewhat like patents or drugs: after 5 years or so, the models should become open source, or at the very least the weights, to compensate for the distilling of human knowledge.
All extant human knowledge SO FAR. Remember, by the nature of the beast, the companies will always be operating in hindsight with outdated human knowledge.
An app like Cal.com can be vibe coded in a few evenings with a Chrome MCP server pointed at their website to figure out all the nooks and crannies. The moat of Cal.com is not the code, it's the users who don't want to migrate.
The real answer is they are likely having a hard time converting people to paid plans
Exactly; that's why most SaaS companies are in a very tough position.
You have to bring value that goes beyond the source code and hosting, otherwise your clients are going to vibe code a custom solution instead of paying you.
> otherwise your clients are going to vibe code a custom solution instead of paying you.
How many things do you want to be responsible for? How many vibe coded projects do you want to maintain?
I think this line of reasoning is overblown. Just because you can doesn't mean a significant number of people will. I think the 3D printer comparison is apt.
Same story as always: writing the code is the easy part. Requirements gathering, analysis, consensus, direction: those are the hard parts. Enterprises have a business to run and don't want to run a software shop on top of everything else.
The story is usually that businesses don't want to commit to indefinitely expending their limited efforts maintaining software which isn't part of the company's core competencies. Most of the cost and effort of software happens after the first release is delivered.
> Enterprises have a business to run and don’t want to run a software shop on top of everything else.
It sounds like you mostly understand here. The biggest part of "running a software shop" they want to avoid is responsibility for support, bugs, fires, ongoing maintenance, and legal issues, of post-release software.
Dave's Pizza around the corner doesn't make a social media app, not because Dave can't figure it out, not because he can't vibe code one, not because he can't contract someone to do it, but because running a social media site isn't a core competency of Dave's Pizza. Instead, Dave uses existing social media sites, and focuses his efforts and passions on making pizza.
So I work in enterprise tech consulting; my current project is with a large, global chemicals company (it wouldn't be right to call out my client by name). This client is extremely competent, from their multiple enterprise architects down to their analysts; they're a pleasure to work with.

One of the business requirements could be met by a very simple in-house developed and hosted API, and it's a perfect use case for GenAI-assisted coding too. There's no magic; it's a problem solved over and over already. However, they don't want to touch in-house dev with a 10-foot pole, for the reasons we're both talking about. They don't want to support it, extend it, back it up, monitor it, and all the other things that have to happen after the code is done.

They're perfectly happy to buy licenses from a SaaS so that if anything goes wrong they can tell the CTO "it's not me, it's them". And when the CTO says "why doesn't it do this too!?!" they can say "I'll call our rep and ask".
SaaS value to an enterprise is more than just the functionality provided, and I think that is lost on a lot of the heads-down software devs here.
They are, and always have. Looking over "software engineer" roles in my local area, I see folks at companies in a variety of industries: finance, health care, logistics, and the local power utility, all well outside the software industry.
Most enterprise companies don't develop everything in house, but usually do have a varied mix of in-house infrastructure, IaaS and PaaS solutions, and SaaS products. Large organizations across varied industries often have multiple internal dev teams, and the availability of increasingly sophisticated AI tools is going to enable the same teams to be effective at more, and more complex, projects. AI will definitely start shifting make-or-buy decisions, especially for mature, commodity use cases, to 'make'.
I don't think it's much cheaper. Writing some code to do some CRUD has always been easy. Getting to a proof of concept is definitely quicker. But creating something that can be relied upon in production? That's as difficult and time consuming as it has ever been.
Yup. I've explained it as: okay, some software is free as in beer, and some is free as in speech. DIY software is free as in yacht.
It sounds nice, but now you have something that takes an enormous amount of time and effort to use and maintain, plus you need to have someone with the skills to run it.
They won’t, because specialization is a key aspect of capitalism.
This is why companies outsource anything. Google, Inc. is big enough to own farms and ranches to grow the food eaten in its cafeterias. They could make trucks to transport that food. They could operate factories to make cutlery, etc. Why do they instead choose to pay layers of margins to layers of middlemen?
Absurd example? How about Apple? They outsource production of their chips, instead of capturing the margin they are currently gifting to their partners. Why?
Delta Airlines doesn’t operate oil fields or even refineries even though a major cost of their operations is jet fuel. Why?
Once you can reason through these very simple examples, you will understand why enterprises are unlikely to walk away from SaaS.
s/Delta/United/ or s/Delta/Southwest/ or s/Delta/Lufthansa/. Or if you prefer, s/refinery/oilfield/, or s/refinery/pipeline/. Or even s/refinery/farm/, because Delta also buys food in vast quantities (I would not be surprised to find they have interests in ag producers that offset a small % of their food purchases, which does not diminish the argument).
Delta also does not make airplanes, jet engines, seats, radios, GPS, glass, or even wires. They don't distill the spirits they serve on their flights. They don't own and operate a satellite Internet capability. They don't even make movies for in-flight entertainment.
The point is that Delta, like most successful firms, outsources key aspects of core service delivery.
The second article you linked says plainly that the refinery is an offset/hedge. QED Delta still outsources the vast majority of its fuel costs. (They could, for example, own large swathes of the Permian and do E&P as well. They choose to leave that to others.)
Vertical integration has been a common practice in industry for 150 years.
Yes, very few firms fully control their upstream supply chains, but very few conversely produce nothing but their core market offering in-house. Most companies are somewhere in between, doing some things in-house, and obtaining other things from vendors.
Most large firms have in-house software dev teams responsible for at least some portion of their development work. I know software engineers locally working, variously, at banks, pet supply distributors, power companies, soft drink bottlers, and many other non-tech industries. And AI can and will extend these teams' capacity to internally manage larger segments of their companies' tech stacks.
> How many vibe coded projects do you want to maintain?
Here comes the next SaaS idea: vibe-coded services as a service. You tell it what service you want, maybe point out a couple of examples, and you get that service vibe coded and hosted for you for a small monthly fee!
I agree with the other poster who mentioned this is likely a publicity stunt, but all it's really showing is that VC is still incredibly stupid with its money. All the more reason to seize it from them and properly fund useful software rather than subsidize vanity projects for Stanford grads.
About the friction, not the capabilities...I haven't switched off my biz calendar/appointment provider I'm paying for even though I've kinda outgrown it.
Email is actually an excellent example of something with network dependence. Changing email providers requires that you change your email address too (unless you own and use your own domain). An address change causes friction, since you have to update the network of contacts and services that used your old address.
For many use cases, maintenance doesn't matter. At this point, using LLMs to one-shot a tool/service for a single use or time-limited use case is becoming more appealing than signing up with some vendor, even for free.
Maybe try creating one and see how much effort and time is required to clone such functionality to a properly working state! Something for personal use can be created in about 5-10 days, but even then the skill required, the tokens to burn, the hosting, the security, etc. will easily kill the idea. This is exactly the thought process of many, but it will surely kill off many open-source contributors. I've stopped committing anything to open-source repos as a personal choice. I do not want to train an LLM which will eventually create more slop and headaches, since for me time is the only factor that holds real value. Nothing else!
At risk of self promotion, I think more people should adopt something like the Ship of Theseus license (https://github.com/tilework-tech/nori-skillsets/pull/465/cha...). It's not obvious if this will patch the clean room hole in licensing, but I'd rather see it play out in court than assume opensource is just fully dead
I am incredibly skeptical that that license is legally meaningful (but obligatory IANAL).
Generally speaking, it is very difficult for a license to redefine legal terms. Either a Theseus copy is legally a derivative work or it isn't, and the text of a license will do very little to change that.
> It's not obvious if this will patch the clean room hole in licensing, but I'd rather see it play out in court than assume opensource is just fully dead
IANAL, but I don't think there is any "clean room hole in licensing": licensing is downstream of copyright law, and clean-room reverse engineering, if done properly, results in products that do not infringe the copyright of the originating work to begin with, so the license therefore never applies to them.
The "Ship of Theseus" license you've linked to attempts to define for itself what constitutes a derivative work, but what is and is not a derivative work is determined by copyright law itself, and there's no concept of imposing licensing conditions on works that your copyright never extended to in the first place.
Simply put, if something isn't infringing your copyright under the criteria established by the law, then your permission was never needed to do it in the first place, so the conditions under which you would or would not be willing to offer that permission are irrelevant.
If someone spends years using your software and they have learned a mental model of how your software works, they can build an exact replica and there is nothing you can do about that since there is no copy you can sue over. Said user is also allowed to use AI tools to aid in the process.
What you want is an EULA, which is a contract users explicitly have to agree to. A license file only grants access or the right to copy; it doesn't govern usage of your software.
"AI slop is rapidly destroying the WWW; most content is becoming lower and lower quality, and it's difficult to tell whether it's true or hallucinated. Pre-AI web content is now more like the gold standard for correctness, and browsing the Internet Archive is much better. This will only push content behind paywalls, and a lot of open-source projects will go closed source, not only because of the increased work maintainers must do to review and audit patches for potential AI hallucinations, but also because their work is being used to train LLMs and relicensed as proprietary."
Replace AI with "open source and Linux", and "open source" with "Windows" in the statements. That's what Microsoft's PR team would have said about open source and Linux about 20 years back in the 2000s.
After the unsuccessful FUD era, Microsoft has now embraced Linux, running it alongside Windows via WSL to counter macOS's Unix-like appeal and to respond to the dominance of Linux and open source in the cloud.
Even worse, Microsoft's FUD was mostly right. The joke about open source being communism played out straight: FOSS pretty much destroyed the ability to make money on software products, accelerating the transition to SaaS models where you can carefully seek rent from the shelter of your secure company servers (later, the cloud). That is in large part responsible for the modern surveillance economy; as it turns out, some SaaS segments decayed to "free with ads", where - much like with OSS and locally-run software - you cannot compete on price with free.
Given how many developers here use LLMs daily, how do you think about defensibility? Tools like this seem relatively easy to reverse-engineer and replicate with enough time and LLM assistance. Did that influence your decision to charge a subscription or the change to a personal license?
That's the reason I added a subscription in the first place: you pay a dirt-cheap price for a "boring" product with the added insurance that someone will be there to support it.
People will replicate it, sure, but supporting it regularly is another thing. I guess the majority wanted a perpetual license - so it's a win for the masses.
Defensibility nowadays is app support and development: the more work you pour into it, the more defensible it will be.
I personally would gladly pay to have an app constantly polished and improved. What I would not use is some vibe-coded alternative that was slopped together with AI in a day, pushed to GitHub with a tweet saying "i made a free X alternative", and then abandoned.
Honestly, I have tried to really cut down on my usage of 3rd-party dependencies when possible. In a way, it's kind of freeing. Whatever I still need, I write myself. If I cannot write it, then I try to find something FOSS. If I find nothing, then I consider purchasing something.
For example, I am rolling my own window manager (that needs some much needed TLC). I ditched Alfred for Spotlight. Though Alfred is better, I will survive just fine. And the list goes on.
I am not trying to take a dig at the OP. I am sure he or she put effort into this application. But I am genuinely curious -- does anybody actually need this software? Cmd+Tab, a decent window manager, and Spotlight would solve the same problems for free.
I fresh install to give myself a different perspective when I feel like I have too many 3rd party solutions to problems that no longer exist. Spotlight is better and I only casually use my macbook nowadays, so I don't need the power of Alfred. I don't need dock extensions because Stage Manager is mediocre but works well enough for the browser, chat / music apps, and whatever document I'm working with at the time.
How much is there to improve and polish in a taskbar? At most it will be keeping up with macOS throwing breaking changes at you, and maybe the odd weird bug.
Personally, I dare not replace the Dock with Windows-style task bar for fear that my OLED display might have burn-in on it.
Yet, when I need an alternative, I would rather build an app of my own.
The notation is just an array of move tuples; each tuple contains one move for white and one move for black, where each move is written as <first letter of piece name><destination square>.
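The scheme above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the piece-letter table (including "P" for pawns, and "N" for knights to avoid clashing with "K") and the example game are my own, not from the original comment.

```python
# Piece initials assumed for this sketch; the original comment only says
# "<1st letter of piece name>", which would make king/knight ambiguous,
# so "N" is used for knight as in standard algebraic notation.
PIECES = {"K": "king", "Q": "queen", "R": "rook",
          "B": "bishop", "N": "knight", "P": "pawn"}

FILES, RANKS = "abcdefgh", "12345678"

def parse_move(move):
    """Split a move like 'Nf3' into (piece_name, destination_square)."""
    piece, square = move[0], move[1:]
    if piece not in PIECES:
        raise ValueError(f"unknown piece letter: {piece!r}")
    if len(square) != 2 or square[0] not in FILES or square[1] not in RANKS:
        raise ValueError(f"bad destination square: {square!r}")
    return PIECES[piece], square

def parse_game(moves):
    """Parse an array of (white_move, black_move) tuples."""
    return [(parse_move(white), parse_move(black)) for white, black in moves]

# Hypothetical example game in this notation.
game = [("Pe4", "Pe5"), ("Nf3", "Nc6")]
print(parse_game(game))
# [(('pawn', 'e4'), ('pawn', 'e5')), (('knight', 'f3'), ('knight', 'c6'))]
```

Note that this notation, as described, is ambiguous: when two pieces of the same type can reach the same square, there is no way to tell which one moved. Standard algebraic notation resolves this with extra disambiguation characters (e.g. "Nbd2").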
> In this Friday’s magic demonstration, I’m going to show how what you see in Privacy & Security settings can be misleading, when it tells you that an app doesn’t have access to a protected folder, but it really does.