I'd say the new problem is knowing when understanding is important and where it's okay to delegate.
It's similar to other abstractions in this way, but on a larger scale due to LLMs having so many potential applications. And of course, due to the non-determinism.
My argument is that understanding is always important, even if you delegate. But perhaps you mean sometimes a lower degree of understanding may be okay, which may be true, but I’d be cautious on that front. AI coding is a very leaky abstraction.
We already see the damage of a lack of understanding when we have to work with old codebases. These behemoths can become very difficult to work in over time as the people who wrote them leave, and new people don't have the same understanding needed to make good, effective changes. This slows down progress tremendously.
Fundamentally, code changes you make without understanding them immediately become legacy code. You really don’t want too much of that to pile up.