People are worse at mental arithmetic than they were in the recent past, so it's not clear that they aren't "dumber" in the sense people meant at the time.
And did our thinking about the importance of being good at arithmetic change in response? I think so.
We also used to be much better at remembering things; since we stopped relying on oral histories, our memory skills have degraded quite a bit. And there's a quote from Socrates criticizing writing as a crutch that degrades our memory (https://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1... , the last bit). Over time, we've just moved to valuing other things more.
Well, with anything, practice is key. When I was in school, I was in a math competition where you had to do everything in your head. There was no scratch paper, you could not modify your answer once written, and erasing was obviously not allowed either. I wasn't the greatest at it, but I didn't suck at it either. That was decades ago, and I no longer do math in my head that way. What I used to do in seconds now takes a couple of seconds just to figure out what needs to be done, plus the time to actually come up with the result.
Students score lower on standardized tests in the 2020s than they did in the 1990s, so your stance feels misguided. Although I don’t think Google and calculators are the main culprits, I do think it’s due to the larger technology/internet landscape.
I once worked for a guy who typed 7 + 4 into a calculator, after freezing for 1.5 secs trying to work it out in his head. It was in a "stressful" situation (not something extreme, we just were in a hurry), and I'm sure the guy could add those numbers in his head, generally... he owns his own business, after all. It took so much out of me not to move a facial muscle.
Sounds like you haven't used it much. It starts small with you forgetting the arcane params to commonly used tools that you don't need to type anymore. Where it will stop nobody knows.
Nothing stupid about caring deeply about tools that shaped your career. GitHub wasn't just a SaaS; for a lot of us it was where we learned to build. The fact that you're emotional about it says more about how much you gave to that platform than anything else.
Ghostty will be fine wherever it lives because people follow the project and not where it's hosted. Best of luck!
So true! This quote from the blog post really hit me:
> Since then, I've opened GitHub every single day. Every day, multiple times per day, for over 18 years. Over half my life. A handful of exceptions in there (I'd love to see the data), but I can't imagine more than a week per year
How could you not feel this way about a tool you willingly use this much? Perhaps if your employer is forcing you to use it, it's different. But maintaining OSS? That's a labor of love. How could you not get emotional?
I agree with this sentiment, because the person directing the agent can still direct it in a way where it'll produce a better or worse output than another person directing it.
What gets me is the craft point. I've shipped more useful software in the last year than probably the previous five combined, and most of that is because I stopped treating code as the artifact and started treating the product as the artifact. The craft moved up a layer.
> until it is clear and elegant
New grads who spend weeks refactoring code are going to get lapped by new grads who ship something and iterate. There's just a faster feedback loop now.
There are a lot of ways to ship things and iterate without having any idea what you are building or doing technically, and without building any taste for how things work.
Those people are going to be the absolute most dangerous possible thing you can do to a company.
Maybe some day we can just totally give up the technicals to the machine, but I strongly doubt it. Every single model is both brilliant and a fool, no matter how frontier it is.
Yes, the feedback loops are faster. But you need to assess what's actually technically happening. Someone does. Maybe you offload the actual thinking up the chain, delegate taste, understanding, and judgement only to the people above you, and make them all go mad dealing with the endless slopcoding they're being hit with. But just as bad, that junior engineer is robbing themself too. Maybe they get away with not looking, but they sure aren't going to learn a lot.
I'm missing the link but there was a great submission maybe a month ago about two hypothetical grad students, I think in astronomy, where one failed and flailed and did things largely the old fashioned way, and the other used AI to get it done. The advisor couldn't really tell who was doing what. But at the end, one student had learned & gained wisdom, and the other had served as a glorified relay between the AI and the advisor and learned little. Same work output, but different human outcomes.
Junior engineers are really not that cheap. Relative to your capabilities, you are not a bargain: you take a ton of valuable time from other people. If a company is hiring you, either they are truly fools lacking basic understanding, or they are in on the bargain: they want you to be getting better and are testing to see if you can become more useful. Sure, it's great to show up and have impressive output, but you need to actually be learning and growing. You need to be participating in the feedback loop actively, or you will be lapped by people who care and think like engineers.
> Those people are going to be the absolute most dangerous possible thing you can do to a company.
I hear you, but here's the thing: the companies don't give a shit about software quality any further than it takes to keep you coming back as a customer. And it's actually been like this for a long time. They're going to hire people who can ship who-cares-how-buggy software as fast as possible. It's better for the bottom line.
And that pains my soul and pains me as a consumer (because we already had to put up with too much crap software before genAI started producing it in reams), but there's very limited money in the kind of quality you're talking about.
I hear stories from people interviewing now--the interviewers react negatively if you tell them you're working on keeping your programming skills fresh. They just want to know how many agents you can run at a time and how many lines of code you can generate per day.
Personally, I think someone skilled in software development working with genAI is going to be more productive than someone not skilled working with genAI, but I don't think that's even being selected for now.
Grim days.
The one thing that gives me hope is that every time we ask our graduates who are now in the field (and all work with AI) if we should drop classic CS education and only do AI, they all emphatically reply in the negative. Yes, we need some AI education in there, but they want the foundation, too.
It is rather backwards. I've not seen things quite as bad as interviewers wanting to know how many agents you can run, but the attitude of "launch & fix later" is always present and kind of depressing.
Then I think of the companies (not necessarily software) that have had long-term success; their products have been quite high quality, at least at some point in time. The count of genAI instances someone can keep in flight is certainly a weird metric, and I think it will hurt the companies that choose to ignore quality.
Unfortunately it's a long process as it's possible to get very far with great marketing and sales with a poor quality product too. Then cash out before customers figure out that there's something else that is better. I have no idea if this pattern will ever self-correct.
Off topic: I followed your guides for network programming years ago getting my tiny C server/client setup working. Thank you so much for writing them!
This person is an educator. You should absolutely learn how to code through deep practice. You can easily learn how to use the slop machine in, I don't know, a week or something if the job demands it.
Absolutely wild to see this take downvoted. While it's abundantly clear that Hacker News has long since become a mouthpiece for the AI investment machine, I really hadn't felt the loss of strong engineering ethos until recently.
So now we're downvoting the idea that people should have a strong understanding of how to code? We're cooked. A week does seem about right for getting to 90% of optimal AI agent use if you earnestly explore its boundaries.
The slop machine is stupidly easy to use. Recently switched jobs and got to use Claude Code for the first time. Literally just talk to it. There's nothing to learn.
> New grads who spend weeks refactoring code are going to get lapped by new grads who ship something and iterate. There's just a faster feedback loop now.
With AI becoming so prevalent, in the long run I wouldn't be so sure. True experts will become more and more rare.
I guess my response to this would be that not everyone is the same. You might like to ship more useful software. I got into writing code and made a long career out of it because I like writing code. Not "making products." Not "shipping things." Yes, making products and shipping things were necessarily a small part of my career, but that's not what got me out of bed in the morning.
It's like telling a writer "your job is now to bind up books and place them on the store shelves." OK, but it's a totally different job and not exactly one the writer is going to like.
We have to ask ourselves what the purpose of refactoring is. People use that word like some magic incantation, as if the value of some particular instance of "refactoring" were self-evident. "What are you doing?" "Oh, I'm refactoring X." "-hushed tones- Ohhhh, yes, carry on, then..."
Refactoring improves code organization. It makes the code more maintainable and, arguably, more reusable. And, from an academic POV, it makes code more satisfying conceptually by aligning it with the model of a domain more clearly and conspicuously. Good stuff.
Great. Now, in industry, what matters is the result. Nobody cares if the result was produced by a witch casting magic spells or a grunt hitting a rock with another rock. Industry is practical. It cares about "craft" as far as it enables commercial success (and yes, short-term thinking can be bad, but guess what: you need to eat in the short-term!). Maintainability is a nice thing to have, because it does allow us to more quickly develop code. But how maintainable something needs to be, especially in relation to other competing concerns, has no fixed answer. It really depends on the situation.
Practical wisdom, known as prudence in the classical literature, is the foundation of all moral behavior. The right decision, the right concern, really does depend on the circumstances. You cannot derive from principles, from the armchair, what the right course of action is for everything. The general principles may be immutable and absolute and fixed, but the way in which they are applied in particular circumstances will vary.
Academia can insulate people from certain kinds of practical concerns, which is supposed to aid theoretical work, but this demands that the academic recognize his limits. He is not in a position to pass judgement on prudential matters, which is to say matters that are not strictly matters of principle, if he is not prepared to engage competently with the concrete reality of the situation.
Yeah, but everyone can use the coding agents; not everyone can choose well. The rarer the skill, the more you get paid for it. Also, the better you code, the better the results you get from the agent.
The irony runs deeper than the free analysis offer. The whole Mercor contractor relationship was this exact pattern: hand over studio-quality voice recordings and ID scans to get paid for data labeling work that didn't require either. "Explicit consent" was buried in the terms, and people clicked through because they needed the paycheck.
Now 40k people have learned that biometrics aren't passwords. You can't rotate your voice.
Vocal lessons are both a lot of fun and a lot of work. I haven't been using any voiceprint systems but I know most humans are unable to tell that my trained voice is the same physical person as my old voice. Would be curious to find out if an AI voiceprint system can discern whether it's the same or not.
You’ll really like this then; it’s a clip of Phil Hendrie, who I recently discovered. He does tons of voices and sound effects; his studio has multiple microphones and he switches between them for different speakers.
Here is a clip of him when someone called his studio thinking they were the local Pizza Hut. Phil does all the other voices, including the phone system.
Are you talking about singing lessons, or actual speaking training? Singing lessons helped me sing but didn't change the way I talked at all, though I was only able to afford them for a summer, so maybe it takes more time than that.
When I was in NYC a while back, I met a woman at a friend's dinner party. She sounded totally American, but was in fact Brazilian. She worked as a lawyer, and said that she'd had to get extensive voice training in order to sound American so that people would take her more seriously professionally. I have no idea if the professional part worked, but the accent, mannerisms, etc. were amazing - I would never have guessed.
I'm referring to speaking, not singing. After a _lot_ of work, I can speak passably as a woman or man and switch freely between the two. Depending on context I generally choose just one for the entire conversation, as switching tends to cause whiplash in the listener (^_^).
The ability to switch mid-sentence is mostly just something I discovered I can do, and it's fun. But the ability to pass as my real gender is something that helps me feel safe. And being able to occasionally pass as my prior gender when needed (e.g., when calling my bank until I can change my name/gender legally) is also quite useful.
More or less everything changes. For trans men who are on HRT the voice's lowest pitch will get lower, as it would for someone AMAB going through puberty (since a second puberty is literally what's happening on HRT). Trans women do not get any voice changes from HRT though, so they train to raise their larynx when speaking to get up into the "perceived female range."
But pitch is far from the only thing that gets someone gendered one way or another in western culture (and presumably elsewhere). Resonance, weight, breathing patterns, word choice, and prosody all matter too. That's way too much to go into in a post here on HN, but the easiest one to understand is resonance or "size." Female-perceived speakers have higher resonance / smaller size. This means that some of the higher harmonics are amplified more than the lower harmonics, and it's called a "small size" because the actual resonating area from the larynx to the tongue is made smaller (mostly through tongue placement). Male-perceived speakers do the opposite, creating a larger space for resonance and resulting in a lowered resonance.
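To make the resonance/"size" idea concrete, here's a minimal numpy sketch (the harmonic weightings are purely illustrative, not a vocal-tract model): two signals share the same 150 Hz fundamental, but tilting energy toward the higher harmonics raises the spectral centroid, which roughly corresponds to the "brighter, smaller" percept.

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Amplitude-weighted mean frequency: a rough proxy for perceived 'size'."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

sr = 16000
t = np.arange(sr) / sr
f0 = 150  # same fundamental pitch for both "voices"
harmonics = np.arange(1, 20)

# Same pitch, different resonance: weight harmonics toward low vs high partials.
large = sum(np.sin(2 * np.pi * f0 * h * t) / h**1.5 for h in harmonics)  # "large"/darker
small = sum(np.sin(2 * np.pi * f0 * h * t) / h**0.5 for h in harmonics)  # "small"/brighter

assert spectral_centroid(small, sr) > spectral_centroid(large, sr)
```

Same fundamental, very different centroid: that gap is (a crude stand-in for) the size difference being described.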
I know quite a few cis people who are also going through some of this training to help with their voice acting, or even just for fun.
There are a lot of good (and unfortunately some bad) resources online for trans voice training in both directions. My personal favorite (and where I started my lessons) is Seattle Voice Labs, but Online Vocal Coach / Vox Nova is also a great resource.
Thanks - that matches my little observations and clears up a few questions I had. I did notice that spectrogram views are almost the same regardless of perceived gender except that the strongest bands and their distributions change, and also that perceived pitch isn't as dynamic as actual frequency shifts of harmonics, but didn't realize that the center frequency moving up had to do with both physical and figurative size. It makes sense.
One thing that's been bugging me about this domain is that a lot of existing resources are shallow, and there aren't many gamified options even though something like a rhythm game feels like a perfect fit. I think a non-negligible number of people, especially young and inexperienced ones, struggle with voices that sound aggressive or dissatisfied against their intent. Even just a laptop app that feeds back the error between their intended voice and the recognized voice, letting them minimize that error, feels like it would be useful.
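A feedback app like that could start with something as simple as per-frame pitch error. A toy numpy sketch (the target value and the synthesized "mic frame" are made up for illustration) that estimates pitch by autocorrelation and reports the distance from the intended target:

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=60, fmax=400):
    """Crude autocorrelation pitch estimate over a typical speaking range."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag range for fmax..fmin Hz
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag

sr = 16000
t = np.arange(2048) / sr                 # one ~128 ms analysis frame
target_hz = 180.0                        # pitch the user is aiming for (illustrative)
frame = np.sin(2 * np.pi * 220.0 * t)    # stand-in for a mic frame near 220 Hz

error_hz = estimate_pitch(frame, sr) - target_hz  # what the app would display
```

A real app would need the same kind of feedback for resonance and weight, which is much harder, but pitch alone already gives a usable "minimize the error" game loop.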
I've seen some people attempt to make a voice gender recognizer through machine learning, but nothing that has actually worked well. I'm not an ML expert, but I expect they're overfitting on just a small subset of accents and specific formants. The one I played with (can't find the link) was just trying to show weight vs resonance, and it got very confused by my voice. I am never misgendered from my voice but I (intentionally) have a slightly deeper femme voice than what it had been trained on.
IMO, the best way to learn to control your voice is to learn to hear variations in size, weight, pitch, open quotient, prosody, etc. From there you can become your own coach so you're not focused on an app while having a real world conversation with someone. Would a gamified version help? Possibly.
(sorry for the delay, I have no idea if you'll see this. I wrote it last week but apparently never pressed the submit button??)
Do you need to calibrate it to be able to repeat it, and does that calibration change if you are at a different altitude and in different conditions, such as humidity?
Does merely changing altitude (or ambient pressure) change voice enough to be considered different by a recognition or synthesizing system?
Do you have a source for that? I can tell with pretty good accuracy whether my students smoke from their voices (adult language learners, we take smoke breaks together and they have no reason to conceal it), and would be very surprised if I’m just that lucky and there’s nothing a person can pick up on acoustically.
There's this myth (that comes from pop culture) that you end up sounding like Tom Waits.
In reality, some phlegm aside, their voice is still the same in any way that matters.
If you knew people who didn't smoke and started (not uncommon in the 80s and 90s, quite a few people I know started smoking in university, or after the stress of a first job, some even later), and also the inverse, you can trivially hear it for yourself.
My voice is exactly the same as before I started smoking heavily, and I have never had any of the associated problems that most people seem to have (lung capacity, stamina, infections, phlegm etc) - pot luck I guess, like most things
Also: it’s not just the first-order effects of smoking; respiratory issues, increased chance of illness, and chronic coughing can damage your voice’s presentation.
I have been telling people for years that biometrics (face, fingerprint, voice) are your username, not your password. But people are easily swayed by convenience.
For all intents and purposes it is, especially Face ID. Also, courts in most countries can compel you to provide biometrics, but many cannot compel you to reveal passwords.
This is an important point with biometrics that most people don't realize. When I say that biometrics aren't good security, most people are perplexed because they have seen movies and such that are high-tech where iris scans or fingerprints are the pinnacle of security.
I like to tell them this story that I read somewhere a decade or so ago. It might not be a true story (I never checked) but it's a helpful way of thinking about it.
Bob landed a great job and decided to celebrate by buying a new luxury car (a BMW in my recollection, but could be wrong) that had a thumbprint authentication for unlocking and for starting it, so you never have to carry external keys. One day a thief decided to steal Bob's car. They broke in to his house and tied him up. When they demanded the keys and he said there weren't any, they decided to cut off his thumb and use it as the key. Now Bob has no thumb and his car still got stolen.
The story I remember is French police units launching a focused investigation into the sudden explosion of crypto people and their family members getting kidnapped and having a finger or more chopped off.
I did find your story from 2005 about a man having his finger chopped off once the thieves realized they would need his appendage every time in order to start the car.
> Now 40k people have learned that biometrics aren't passwords. You can't rotate your voice.
Voices aren't strong.
There just aren't that many unique characteristic parameters behind a voice; it's largely dictated by an evolutionarily shared larynx and vocal tract. They aren't fingerprints.
The fact that human voice impersonation is not only widely possible but popular should give you an indication of this. Prosody, intonation, range, etc. - it's all flexible and can be learned and duplicated.
The signals are simple too, because we have to encode and decode them quickly. You may or may not be able to picture and rotate an apple tree in your head, but you can easily read this sentence in the voice of David Attenborough.
Moreover, you can easily fine-tune a voice model to fit any other speaker. You can store the unique speaker embeddings in a very thin layer. Zero- and few-shot sampling of unseen speakers can even come close to full reproduction. You can measure all of this quantitatively.
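For illustration of how thin that layer is: speaker verification on such embeddings typically reduces to a cosine-similarity threshold. A toy numpy sketch, with random vectors standing in for real d-vector/x-vector embeddings (the dimension and noise scale are made up):

```python
import numpy as np

def cosine_similarity(a, b):
    """Standard cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
dim = 256  # in the ballpark of common speaker-embedding sizes

enrolled = rng.normal(size=dim)                            # stored "voiceprint"
same_speaker = enrolled + rng.normal(scale=0.1, size=dim)  # new utterance, small drift
other_speaker = rng.normal(size=dim)                       # unrelated voice

# Verification is just a threshold on similarity to the enrolled embedding,
# which is exactly why a cloned embedding defeats it.
assert cosine_similarity(enrolled, same_speaker) > cosine_similarity(enrolled, other_speaker)
```

Anything that can reproduce the embedding (a fine-tuned model, a few-shot clone) passes the same threshold the real speaker does.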
Voices are not, and never have been, fingerprints. They're just not that unique.
You can rotate your voice with substantial effort. Just speak differently: higher or lower pitch, a different accent. Your friends may look at you funny for the first few years.
That's a pretty good swap if you're Microsoft. Exclusivity was already unenforceable in practice, and they were going to have to either sue their biggest AI partner or let it slide. Instead they got the AGI escape hatch closed and a revenue cap that at least makes the payments predictable.
The Crunchy Data part is what people should pay more attention to here. He had corporate sponsorship and it was working. Company got acquired, new owners didn't prioritize the same things, and now 3.8k-star critical infrastructure goes dark. Your backup tool's funding depended on someone else's M&A strategy and you had no idea.
I've been gradually moving my own stuff to SQLite and git-tracked files partly because of this. Every managed Postgres setup has a dependency tree of tools maintained by people whose funding situation you know nothing about.
SQLite doesn’t depend on donations. They have a consortium, sell licenses (it is open source but some companies like the explicit CYA), sell support contracts, sell an aviation-grade test harness, and sell extensions.
Of course there is always the risk it goes out of business like any other company, but it’s not funded like your typical small open source project and doesn’t even allow open contributions (not necessarily a bad thing IMO but it’s just a totally different type of project).
Is there a reason why more OSS projects don't follow this model? It sounds like you are saying that there are clear advantages here that other OSS projects lack.
SQLite is arguably the most widely deployed database in the world. It also has its roots in government/defense contracting so it was built with navigating that kind of red tape in mind.
Most OSS projects simply don’t have that kind of weight or discipline to follow SQLite’s footsteps.
I suspect the government contract roots are what led to it being placed in the public domain.
It did not have to be; they could (and some would argue probably should) have gone the normal copyright-with-a-public-use-license route. But US government code is in the public domain by default (the US government has means other than copyright to protect its IP), and this code was originally written for a cancelled US government project, so I suspect that was their default mindset when they wanted to release it.
Note that I am using a sort of editorial they here, I think it was largely the effort of one person.
It is probably telling that with fossil, a supporting project to sqlite, they went the more normal route and released it under copyright with a BSD type license.
I like the idea of the public domain (some things belong to us collectively), but it does raise an interesting question of whether a private individual can place something in the public domain. Are you allowed to give up your rights?
There are business models that work for the extraordinarily popular open source projects (Linux, SQLite, etc.) that don't work for the "well-used piece of infrastructure" projects, even though that category is very important in aggregate
pgbackrest was also part of an organization, from what I understood from the post. The organization got acquired. I don't see how SQLite is shielded (or any project, really). They could get acquired. They could not have enough customers. They could go in the wrong direction and lose customers. They might have a few high-profile bugs so that customers lose faith in them.
PGBackRest was sponsored by some specific organizations, but not owned by them, and PGBackRest was not their product.
SQLite is in a whole different league when it comes to funding, corporate support, etc. There are commercial contracts directly tied to its ongoing support and development. As far as I understand SQLite is Hwaci’s bread and butter.
They have more sponsors/clients so a single company changing direction wouldn't kill them. They also sell directly if you want to buy from them. But ultimately the risk still exists.
I interpreted it as the problem being that the technology may end up unsupported. I mean you can also keep using pgbackrest now. It's not like the code is gone.
My favourite model I've seen is: the main branch is free, licensed MIT or whatever, but if you want release artifacts that are tested, then you pay for them. You can always compile your own.
Hard disagree. I feel like I'm thinking a lot more now because I have so many parallel projects going on at the same time. AI has allowed me to really, truly create in a way that I've never done before. Yes, my coding skills probably aren't as sharp as they used to be, but my system design skills are at an all time high. Don't blame the tool.
If 1% of people using the tool end up like you, and 99% end up drooling invalids, I think it would be insane to not blame the tool. If a tool that's incompatible with humans isn't to blame for that incompatibility, what is to blame for the harm done? Human nature? The point of a tool is to be used by humans.
What part do you disagree with? It sounds like you don’t disagree with either the title of the article or its contents.
> In talking to engineering management across tech industry heavy-weights, it's apparent that software engineering is starting to split people into two nebulous groups:
> The first group will use A.I. to remove drudgery, move faster, and spend more time on the parts of the job that actually matter i.e. framing problems, making tradeoffs, spotting risks, creating clarity, and producing original insight.
I work with others who have made this same claim. When I observed those people's work during demo days, the unmentioned thing was that they were going to the AI for system design questions as well. This was framed as "just using it as a sounding board," but what was actually happening was not merely sounding-board use; they were asking for solutions. Anchoring bias being what it is, these felt like good ideas and they kept them.
It's the feeling of having done a lot of thinking for themselves without having actually done so.
I actually have gone to the AI repeatedly for system design solutions.
Daily.
I think only twice have I agreed with it.
Like the way it will always give you code if you ask, even if the code is crap, it will always give you a design if you ask. Won't be a good design, though.
So you'll have a beautifully designed system with rotting bones? A system constrained to the same patterns seen in training data. Not terrible, good enough.
I don't know, I don't doubt you're more productive. Broadly so. But the depth and rigor I think may be missing, as the article suggests.
As an aside, I suppose it's a good time for those nearing the end of their careers, those who no longer need to learn, to cash out and go all in on AI.
For how many different parallel projects can you really keep a proper mental model in your head at one time? Or put in enough effort to seriously consider all aspects? I think the number varies between simple and more complex projects. But still, could that number be lower than many think it is?
It really depends on who you consider the "many" to be. I've seen people who claim they can meaningfully iterate on 10 projects simultaneously, and I'm skeptical of that. My personal experience is that my decisions are noticeably degraded at 3-4 parallel workstreams, and with even the simplest projects I'm non-functional past 6.
But I can juggle 2 workstreams in a day easily, and I can trivially swap projects in and out of the "hot path" as demanded by prioritization or blockers; before LLM coding both of those were a lot harder.
The real question is whether you'd be able to continue doing your work if someone took your toys away and said "here's a nickel, kid, go buy yourself a real computer". I'm not referring to whether you'd be able to keep up your productivity, since it is clear you couldn't, just like a carpenter with a nail gun works faster than one with a hammer and a bucket o' nails. Could you do the work, starting with the design, followed by boilerplate, and finishing with a working system? The carpenter could, albeit slower, since his tools only speed up the mechanics of his work. Coding agents do much more than that; they take away part of the mental modelling which goes into creating a working system. The fancier the tool, the more work it takes out of your hands.

Say that the aforementioned toy thief comes by in a year or two, after the operating systems (etc.) you're targeting have undergone a few releases with breaking changes. A number of APIs have been removed, others have been deprecated, and new ones have been added. You were used to telling the agent to 'make it work on ${older_versions} as well as ${newest version}', but now you're sitting there with a keyboard at your fingertips and that stupid cursor merrily blinking away on the screen. How long would it take you to become productive again? What if the toy thief waits 5 years before making his heist? What if the models end up rebelling or sink into depression and the government calls upon you to save your economic sector?
When cars first appeared it took quite some knowledge and experience to even get the things started, let alone to keep them running. Modern cars are far better in all respects, and as a result modern drivers often don't have a clue what to do when the 'Check Engine' light appears. More recent cars actively resist attempts by their owners to fix problems since this is considered 'too dangerous' - which can be true in the case of electric cars. That's the cost of progress; it is often worth it, but it does make sense to realise what it would take to go back in time to the days when we coded our software outside in the rain, uphill both ways, with only a cup of water to quench our thirst. In the dark. With wolves howling in the woods. OK, you get my drift.
Will there be something like 'software preppers' who prepare for the 'AIpocalypse' by keeping their laptops in shielded containers while studiously chugging along without any artificial assistance? Probably. As a hobby, at least, just like there are 'survivalist preppers' who make surviving some physical apocalypse their goal in some way or other.
I've hit my Claude quota and felt a scary helplessness - but honestly, what if someone took away the toy that is the internet? Or the toy that is npm, or the toy that is AWS, or the toy that is C#, to name a few other toys? Plenty of developers can spend their entire careers focused on a single toy and be helpless without said toy.
The internet is handy but not essential; there are other ways to communicate. Back to the BBS, flash drives tied to pigeon legs, packet radio, mesh radio networking, etc. Taking away npm and its ilk would, after a relatively short adaptation period, probably lead to increased code quality, reliability and safety. Those who wanted to keep on importing silly dependencies could still do so, they'd just have to do it manually using whatever alternative communication method replaced the aforementioned once-upon-an-internet. Taking away AWS (etc.) would not be much of a problem, you'd have to go back to self-hosting in co-lo facilities or just 'in the basement of a few branch offices'. Take away C# and there's a whole alphabet of languages to replace it.
A better comparison would be to suggest taking away contractors and consultants. Suddenly that supermarket owner would have to write his own software or hire someone to do it while before he could just tell some external agent (...) what he wanted to do - and change the requirements weekly, and forget to mention that one important task without which the system is useless, complain to the developers about it being missing upon delivery, eventually grudgingly agree that he did not mention it, pay more, wait another month or 2 for an updated revised version, etc.
Design and coding skills are like perishable goods: use 'em or lose 'em. Once they've been lost they need to be reacquired at substantial cost in time and effort. They also need to be kept up to date or they'll lose relevance in <voice="marketroid"> today's fast-paced world </voice>.
But is the debate about "fleshing out a system spec," or about "the ability to come up with, plan, and explore various ideas to solve problems elegantly on a budget"? I think these two sides always get conflated into one when discussing LLM impact on users.
> Yes, my coding skills probably aren't as sharp as they used to be
If not the tool, then who's to blame? It's very clear that people who rely on LLMs for coding lose their skills. Just because you have a lot of parallel tasks going at once doesn't mean you're producing quality work. Who's reviewing it? Are you just blindly trusting it?
This is similar to what we see in software architecture. A team picks a framework or pattern early, then builds everything on top of it, and by the time evidence shows up that the foundation was wrong, switching costs are so high that it's cheaper to keep building on the broken foundation than to start over. The amyloid hypothesis reminds me of technical debt. The "cabal" wasn't conspiring; they were just rationally protecting their sunk costs, same as any engineering org that can't migrate off a bad database choice.