I keep seeing similar comments across different articles and technologies, and it makes me wonder how much AI is going to hold back the adoption of new technologies going forward.
I find it a bit ridiculous that people or whole teams don't want to use jj (or any tool, for that matter) because, in essence, they hired a "support intern" (read: AI assistant) who doesn't know it or doesn't want to use it.
Even breaking changes in a library can cause massive pain. AI constantly wants to use the old patterns it was trained on. Up until a few months ago it would always prefer the 2021 edition of Rust rather than 2024, for instance.

Overall, though, I don't think this is insurmountable, but it definitely adds an impediment that will motivate people to do things suboptimally for the foreseeable future. Luckily, not everyone is bound by AI today. I just write my own commit messages instead of letting the AI do it. And jj is popular enough that I'm sure models will learn how to use it in the next year (especially Gemini, because Google is using jj internally and would probably train on it as a result).
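One small guardrail for the edition problem (my sketch, not from the comment above; the package name is hypothetical) is to pin the edition explicitly in Cargo.toml, so code the assistant generates is compiled against 2024 rules rather than whatever edition it happens to assume:

```toml
[package]
name = "example-crate"  # hypothetical
version = "0.1.0"
edition = "2024"        # pin the edition so edition-sensitive 2021-era idioms surface as compiler feedback
```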
I noticed it getting a lot better with the 4.5 models specifically. Before then, I would have to tell Claude "do a web search for the docs for jj, then do XYZ." Now I just tell it to "do XYZ using jj."
I agree it will hold back new technologies, but, at the same time, I'm not sure what the value add of new technologies will be going forward. Often, as is the case with git vs. jj, the value add of a new technology is mostly ergonomic. As AI becomes more ingrained in the development flow, engineers won't engage with the underlying tech directly, and so ergonomic benefits will be diminished. New technologies that emerge will need to provide benefits to AI-agents, not to engineers. Should such a technology emerge, agent developers will likely adopt it.
For this reason, programming languages, at least how we understand them today, have reached a terminal state. I could easily make a new language now, especially with the help of Claude Code et al, but there would never be any reason for any other engineer to use it.
I'm quite surprised to hear that "programming languages have reached a terminal state": there are (imo) at least four in-progress movements in the industry right now:
Even if LLMs become capable of writing all code, I think there's a good chance that we'd want those LLMs writing code in a memory-safe language that is amenable to some sort of verification.
I didn’t quite mean that programming languages have reached their terminal state, although I understand how my comment could be interpreted that way. I meant that programming languages, as we know them today, have reached a terminal state.
Using Rust as an example: Rust aims to provide memory safety at the expense of developer ergonomics. Personally, I shy away from Rust because I don’t like fighting the borrow checker.
However, with AI agents, Rust could make a lot more sense. Strict errors at compile time are helpful to an agent, which is more than happy to smash its head against the wall until it reaches a working solution.
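As a concrete illustration (my sketch, not from the comment): this snippet is rejected at compile time, and that error text is exactly the kind of deterministic feedback an agent can loop on until the code compiles:

```rust
fn main() {
    let mut names = vec![String::from("alice"), String::from("bob")];

    // Hold an immutable borrow of the first element...
    let first = &names[0];

    // ...then try to mutate the vector while that borrow is live.
    // In C or C++ this pattern can dangle silently at runtime; in Rust
    // it fails to compile (error[E0502]: cannot borrow `names` as
    // mutable because it is also borrowed as immutable).
    names.push(String::from("carol"));

    println!("{first}");
}
```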
Following this logic, we could see languages develop that are extremely impractical for humans to use yet provide benefits like memory safety or correctness. But these languages might not look anything like the languages we’re currently used to.
Even if you're not authoring changes as much, change management is likely to remain a very useful activity for a long while. Also note that not everyone is using AI today, and many who do only use it as glorified autocomplete. It will take many more years for its adoption to put us in a situation like you're describing, so why halt progress in the meantime?

My personal productivity increased greatly by switching to jj, perhaps more than adding Gemini CLI to my workflow did. I can more confidently work on several changes in parallel while waiting on things like code review. This was possible before, but rebasing and dealing with merge conflicts tended to limit me to a handful of commits. Now I can have 20+ outstanding commits (largely with no interdependencies) without feeling like I'm paying much management overhead, and I can get them reviewed in parallel more easily.
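For anyone who hasn't tried it, a minimal sketch of that parallel workflow (real jj commands; the change descriptions are made up):

```sh
# Start one change on top of main and describe it.
jj new main
jj describe -m "refactor config loading"

# Start a second, independent change, also on top of main.
jj new main
jj describe -m "add retry logic to the client"

# Inspect the parallel change heads.
jj log

# When main moves, rebase the current stack onto it; jj records
# conflicts inside the commits instead of stopping the rebase.
jj rebase -d main
```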
> For this reason, programming languages, at least how we understand them today, have reached a terminal state. I could easily make a new language now, especially with the help of Claude Code et al, but there would never be any reason for any other engineer to use it.
This is an interesting opinion.
I feel we are nowhere near the terminal state for programming languages. Just as we didn't stop inventing math after arithmetic, we will always need to invent higher abstractions to help us reason beyond what is concrete (for example, imaginary numbers). A lot of electrical engineering wouldn't be possible to reason about without imaginary numbers. So new higher abstractions -- always necessary, in my opinion.
That said, I feel your finer point resonates -- about how new languages might not need to be constrained by the limitations of human ergonomics. In fact, this opens up a new space of languages that can transcend human intuition because they are not written for humans to comprehend (yet are provably "correct").
As engineers, we care about human intuition because we are concerned about failure modes. But what if we could evolve along a more theoretical direction, similar to the one Haskell took? Haskell is basically "executable category theory" with some allowances for humans. Even with those tradeoffs, Haskell remains hard for most humans to write, but what if we could create a better Haskell?
Then farther along, what if we created a Lean-adjacent language, not for mathematical proofs, but for writing programs? We could throw in formal-methods (TLA+) style thinking. Today, formal methods give you a correctness proof, but it's disconnected from the implementation (AWS uses TLA+ for modeling distributed systems, but the final code is not generated from the TLA+, so there's a disconnect). What if one day we could write a spec, have it generate a TLA+ proof, and then use that proof to generate code?
In this world, the code generator is simply a compiler -- from mathematically rigorous spec to machine code.
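As a toy illustration of "spec as theorem" (my sketch in Lean, not something from the comment above): the implementation is only trusted once the proof type-checks, so a verification failure behaves like a compile error.

```lean
-- The implementation under scrutiny.
def add1 (n : Nat) : Nat := n + 1

-- The "spec": add1 strictly increases its argument.
-- If this proof fails to type-check, the build fails,
-- just as a type error would.
theorem add1_strictly_increases (n : Nat) : n < add1 n :=
  Nat.lt_succ_self n
```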
(that said, I have to check myself. I wonder what this would look like in a world full of exceptions, corner cases, and ambiguities that cannot be modeled well? Tukey's warning comes to mind: "Far better an approximate answer to the right question, than an exact answer to the wrong question")
Ah yes, the inevitable future where the only way we'll know to interact with the machine is through persuading a capricious LLM. We'll spend our days reciting litanies to the machine spirits like in 40k.
Praise and glory be to the Agentic gods. Accept this markdown file and bless this wretched body of flesh and bone with the light of working code. Long live the OpenssAIah