That's a nice anecdote, and I agree with the sentiment - skill development comes from practice. It's tempting to see using AI as a free lunch, but it comes with a cost in the form of skill atrophy. I reckon this is even the case when using it as an interactive encyclopedia, where you may lose some skill in searching and aggregating information, but for many people the overall trade-off in terms of time and energy savings is worth it, giving them room to do more or other things.
If the computer was the bicycle for the mind, then perhaps AI is the electric scooter for the mind? Gets you there, but doesn't necessarily help build the best healthy habits.
Trade-offs around "room to do more or other things" are an interesting and recurring theme in these conversations. Like two ends of a spectrum: on one end, the ideal of the process-oriented artisan taking the long way to mastery; on the other, the trailblazer moving fast and discovering entirely new things.
Comparing to the encyclopedia example: I'm already seeing that my own skillset in researching online has atrophied and become less relevant, both because search isn't as helpful as it used to be and because my muscle memory is shifting toward reaching for the chat window.
It's a servant, in the Claude Code mode of operation.
If you outsource a skill consistently, you will be engaging less with that skill. Depending on the skill, this may be acceptable, or a desirable tradeoff.
For example, using a very fast LLM to interactively make small edits to a program (a few lines at a time), outsources the work of typing, remembering stdlib names and parameter order, etc.
This way of working is more akin to power armor, where you are still continuously directing it, just with each of your intentions manifesting more rapidly (and perhaps with less precision, though it seems perfectly manageable if you keep the edit size small enough).
Whereas "just go build me this thing" and then you make a coffee is qualitatively very different, at that point you're more like a manager than a programmer.
> then perhaps AI is the electric scooter for the mind
I have a whole half-written blog post about how LLMs are the cars of the mind. Massive externalities, has to be forced on people, leads to cognitive/health issues instead of improving cognition and health.
I’ve also noticed that I’m less effective at research, but I think it’s our tools becoming less effective over time. Boolean operators don’t really work anymore, and I’ve noticed that really niche things don’t surface in the search results (on Bing) even when I know the website exists. Just as LLMs sometimes seem lazy, search occasionally feels lazy too.
This is the typical arrogance of developers not seeing the value in anything but the coding. I've been hands-on for 45 years, but have also spent 25 of those dealing with architecture and larger systems design. The actual programming is by far the simplest part of designing a large system. Outsourcing it is only dumbing you down if you don't spend the time it frees up moving up the value chain.
Talk about arrogance, Mr 45 years of experience. Ever thought that there might be people under the skyscraper that is your ego? I’m pretty sure the majority of tech workers aren’t even 45 years old. Where are they supposed to learn good design when slop takes over? You’ve spent at least 20 years JUST programming, assuming you never touched large-scale design before the last 25 years. Simplest part my ass.
> Ever thought that there might be people under skyscraper that is your ego?
I do, which is exactly why I found the presumption that not spending your time doing the coding is equivalent to a disability both gross and arrogant.
> Where are they supposed to learn good design when slop takes over?
You don't learn good architecture and systems design from code. You learn good architecture and systems design from doing architecture and systems design. It's a very different discipline.
While knowing how to code can be helpful, and can even be important in narrow niches, it is a very minor part of understanding good architecture.
And, yes, I stand by the claim that the coding is by far the simplest part, on the basis of having done both for longer than most developers have been doing either.
> And, yes, I stand by the claim that the coding is by far the simplest part, on the basis of having done both for longer than most developers have been doing either.
"I reckon this is even the case when using it as an interactive encyclopedia".
Yes, that is my experience. I have done some projects recently in C#, a language I am not familiar with. I used the interactive encyclopedia method and "wrote" a decent amount of code myself, but several thousand lines of production code later, I don't think I know C# any better than when I started.
OTOH, it seems that LLMs are very good at compiling pseudocode into C#. And I have always been good at reading code, even in unfamiliar languages, so it all works pretty well.
I think I have always worked in pseudocode inside my head. So with LLMs, I don't need to know any programming languages!
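For a flavour of what that looks like, a made-up example (the Order record and BigSpenders name are invented for illustration, not from my actual project): the comment is the kind of pseudocode I hand over, and the method is roughly the sort of C# that comes back.

    using System.Collections.Generic;
    using System.Linq;

    // Pseudocode handed to the model:
    //   "group the orders by customer, sum each customer's totals,
    //    keep the customers over a threshold, sorted by spend descending"
    public record Order(string Customer, decimal Total);

    public static class Report
    {
        public static List<(string Customer, decimal Spend)> BigSpenders(
            IEnumerable<Order> orders, decimal threshold) =>
            orders.GroupBy(o => o.Customer)
                  .Select(g => (Customer: g.Key, Spend: g.Sum(o => o.Total)))
                  .Where(t => t.Spend > threshold)
                  .OrderByDescending(t => t.Spend)
                  .ToList();
    }

Reading and checking the result is the part I can do fine; it's the recall of syntax and library details that I've happily outsourced.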
It's even more general than that: LLMs seem to be exceedingly good at translating bodies of text from one domain to another. The frontier models also have excellent natural-language translation capabilities, far surpassing e.g. Google Translate.
In that sense, going from pseudocode to a programming language is no different from that, or from translating a piece of code from one programming language to another.