Hacker News | neals's comments




I had to code something on a plane today. It used to be that you couldn't get your packages or check Stack Overflow. But now, I'm useless. My mind has turned to pudding. I cannot remember basic boilerplate stuff. Crazy how fast that goes.

All skills degrade with disuse. For example, here in Canada we have observed a literacy and numeracy skills curve that peaks with post-secondary education and declines with retirement.[0]

Use it or lose it, as it were.

0: https://www150.statcan.gc.ca/n1/daily-quotidien/241210/dq241...


That is one factor, but it’s not the whole thing. The other key element is “cognitive offloading” where your brain stops doing stuff when it thinks it is redundant.

This is similar to the photo-taking impairment effect where people will remember an event more poorly if they took photos at the event. Their brain basically subconsciously decides it doesn’t need to remember the event because the camera will remember the event instead.


The more the role of the tool, the less the role of the craftsman.

If the tool is reliable, it's a win. Saved brain power doesn't disappear, it can be applied elsewhere.

If the tool is powerful enough to do a better job than our brains would, it's a big win. In fact, we built the entire technological civilization on one such fundamental tool: writing.

Or from another perspective: our brains excel at adapting to the environment we find ourselves in. The tools we build, the technology we create, are parts of our environment.


This argument has held up in the past, but in the current period, where LLMs are not perfect (and in many cases far from perfect), there's no certainty that they will ever become perfect, so it isn't obviously fine to let one's existing human capital depreciate.

i thought the point was to depreciate the human capital

make stock number go up and up and up and people get in the way


It’s our job not to fall for the trap if that is what is being said behind closed doors ;)

Edit - lol @ the bozo who downvoted my post. Is that you scam Altman?


I don’t think this holds for all tools in all situations. Sometimes the tools can do too much, especially when they start to do creative things.

In my 7th year of professionally programming Node, I have not once memorized the Express or HTML boilerplate, nor the router definitions or middleware. Yet I can code normally provided there's internet access. It's simply not worth remembering; logic and architecture are worth more IMO

Einstein famously refused to learn people's phone numbers, stating that he could look them up in the phonebook whenever he needed to.

I don't think there is that much value in memorizing rarely used, easily looked up information.


The problem is that lazy people use the supposed Einstein quote as a convenient excuse to not know and internalize knowledge about their own profession. You can bet that Einstein memorized the relevant mathematics for his work thoroughly and completely.

Oh, didn't you know - legend has it he was also bad at maths in elementary school, so your friends' kid who's failing math this year may be a genius too :)

> I don't think there is that much value in memorizing rarely used, easily looked up information.

easily looked up - we don't have that any more since Google decided to enshittify search. What you get now is not looked-up information, as that would look the same each time you "looked it up" - instead of a quick 3-4 word search pattern you now write an elaborate, verbose "query", and get a chewed-up re-interpretation by the shitty LLMs. And then, since sometimes it's not quite what you asked for, you have to ask again or redirect it, and just like that you've wasted 5 minutes of your time arguing with a goddamn neural network!


And this is why I pay for Kagi. It's in the top 5 of services I'd cancel last if money is tight.

This is something I've been curious about for a long time now. I would happily pay for their top tier, but testing their free tier did not seem to produce much different results compared to using google? Or was I not using it right? Just typing in the same stuff and literally getting the same list of results. If you want to share more, I'd be happy to know.

"testing their free tier did not seem to produce much different results compared to using google"

So Kagi gave you ads, sponsored links, AI generated answers etc. as top results? =)

It shouldn't, or you've registered for a scam site. That's the difference. With Kagi it feels like I'm using Google from 10 years ago. On Kagi I can go "searchword -notthisone +thishastobethere inurl:forum" and it actually works.

You can also manually downrank sites in the settings so that they never appear in your results (as I've done for pinterest etc that are 99% crap, but have excellent SEO). Or boost sites with reliably good results.


> So Kagi gave you ads, sponsored links, AI generated answers etc. as top results? =)

No, I did not register for kaggi.search if that's what you are implying.

This was about ~12 months ago, so no, AI-generated summaries were not a thing you would expect to see at that time, outside of occasional A/B experiments. Now maybe my expectation is different from yours. I'd expect I don't have to do much tweaking or downranking at all. My pre-2019 Google experience had been roughly "I type in 3-4 words" and one of the first three links is exactly what I searched for. Kagi did not deliver that, much as I would love it to. But with Kagi relying mainly on Google's search index, it seems to me there is only so much they can do anyway, apart from users ranking and downranking stuff... which I am not keen to do...


Actually, Kagi uses Bing and Yandex as the backend, not Google.

It also tends to surface niche sites more than Google for me.

It also takes maybe 30 seconds to downrank a site and you'll never see it again, unlike Google that will keep giving you shitty "review" sites, Pinterest etc as results.


I know about Yandex ... but I am fairly sure I read that the bulk of its search is based on Google's index. I know downranking is not much work, but I just dislike the idea of having to work for it.

Agreed, it interests me how much some people emphasise knowing facts - like dates in history or dictionary definitions of words.

Facts alone are like pebbles on a beach, far better (IMO) to have a few stones mortared with understanding to make a building of knowledge. A fanciful metaphor but you know ...


Knowing facts matters quite a lot imo, even if it doesn't 'seem' like it.

To use another metaphor, you can't REALLY see the forest amongst the trees, if you don't consider the trees themselves.

One of the reasons I like history so much is because, with enough facts accumulated, you can see how one piece of information flows into another - e.g. dates matter, because knowing the precise order in which important events occur helps you determine how those events may or may not have affected each other in the course of their unfolding.

Sure memorizing dates is boring on its own, but putting them in contexts is exciting - you still need to comb the beaches to find the right stones!


I accept the ordering of dates is important, yes. History can be in the details, but as you say you need to comb the beach for the right stones.

I guess an interesting counterpoint to what I said is something like https://en.wikipedia.org/wiki/Phantom_time_conspiracy_theory (and similar) where a grandiose framework tries to fit inconvenient facts into a shape that is entirely invented.


This is an entirely false dichotomy though, is it not? One can both know facts and understand logic behind them, it's not like you're creating an RPG character and need to make a choice with limited character points.

(Can't say time is the limiting factor either -- we're both in HN comments, valuing our own time at zero.)


I'm not an expert, but what I believe is that the brain has limited capacity, and old memories keep getting deleted when unused for a long time. It is impossible to remember everything unless you have a photographic memory. That makes remembering facts like syntax challenging and most of the time useless; keeping the logic is better in the long run.

Take for example HTML boilerplate, where you don't remember the syntax. What you remember is the components and why they are needed; then you add them one by one as you recall them. Doctype, html tag, head, body, etc. It works because HTML is simple and common.
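That reconstruct-from-components approach can be sketched as a tiny script. This is purely illustrative (the `htmlSkeleton` helper is hypothetical, not any real library): the pieces you remember are the components, and the string just falls out of assembling them.

```javascript
// Hypothetical sketch: rebuild the HTML skeleton from its components
// (doctype, html, head, body) instead of memorizing the text verbatim.
function htmlSkeleton(title) {
  const head = `<head><meta charset="utf-8"><title>${title}</title></head>`;
  const body = `<body></body>`;
  return `<!DOCTYPE html>\n<html>${head}${body}</html>`;
}

console.log(htmlSkeleton("Demo"));
```

Each line corresponds to one of the components you recall, which is why the skeleton is easy to regenerate on demand.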

Then Express is harder, because you need to recall both JavaScript syntax and Express syntax, and most of the time you don't touch Express outside req and res. You recall that Express needs a body parser, registered routers, and finally listen, whether you create an http server first or use Express directly. Then you compose it piece by piece, looking at the docs or the web for the forgotten parts, but you don't lose the understanding / logic of Express; you just forget the syntax.

As for streams, which I keep forgetting, I just need to remember that a stream needs a source and event handlers such as on data, error, and finish / end. Pipe if needed. However, I never remember whether to use writable, readable, streamable, etc., because I seldom work with them and can look up references anytime.


Yes I was not clear, it seems. Facts are necessary but not sufficient.

There is limited time, of course - no one can learn everything, but you can pay attention to the important facts, and the connections between them.

In some ideal world you would learn every fact there is, and the connections would fall out on their own, but in the real world we have to construct theories and frameworks to organise facts.


And it ignores the fact that, if you refuse to remember any facts because they can be looked up, you'll be unable to form any new ideas because you'll know nothing, and you won't know what is out there to be looked up.

And of course, what if your phone dies?


I remember the "HTML boilerplate", because I don't see it as boilerplate. I don't memorize it, I reconstruct it from the base concepts.

> Yet I can code normally provided there's internet accessible.

I'm the opposite. Yes I need my computer to test things fully, but I'm able to code on paper. I want my computer to be a complete sufficient node, so I mostly install documentation and my computer is mostly not connected to the internet, unless I actively enable it to do a specific thing.


I thought this comment was going the opposite way - previously no internet/googling but now you can run a local model and figure things out without the need for internet at all

Mine as well. 2 years ago my mind was blown that I could code in a language I didn't know (Scala) while on a long train ride with no internet (Amtrak) using a local model on a laptop. Couldn't believe it.

The staggeringly effective compression of LLMs is still under appreciated, I think.

2 years ago you had downloaded onto your laptop an effective and useful summary of all of the information on the Internet, that could be used to generate computer programs in an arbitrarily selected programming language.


Yes! Continuing on thoughts of LLM compression, I'm now convinced and amazed that economics will dictate that all devices contain a copy of all information on the Internet.

I wrote a post about it: Your toaster will know mesopotamian history because it’s more expensive not too.

https://wanderingstan.com/2026-03-01/your-toaster-will-know-...


Fairly certain the least expensive option will always be a dumb toaster that just plugs into the wall

I chose a toaster specifically because it's about the simplest electrical device out there, and thus pushes the thesis to the extreme. But smart toasters are pretty common: https://revcook.com/products/r180-connect-plus-smart-toaster...

And as other commenter pointed out, a smart toaster with ads or data collection can be subsidized and thus be more profitable. (Oh what a world we're headed for!)

In any case, I think the LLM-everywhere thesis holds even stronger for moderate-complexity devices like power plugs, microwaves, and mobile phones.


But in that case, it won't be subsidized by the manufacturer!

I'm sure people would get a cheaper toaster in exchange for an ad being burned into their bread.


Shhhhhhhhh! Don't give them any more bad ideas, sheesh!

Not if it requires the toaster company to maintain a different SKU without the LLM chip and sells very few units.

> Your toaster will know mesopotamian history because it’s more expensive not too.

But will it know the difference between too and to?


I got excited about that, until I actually tried to download a model and run it locally and ask it questions. A current gen local LLM which is small enough to live on disk and fit in my laptop's RAM is very prone to hallucination of facts. Which makes it kind of useless.

Ask your local model a verifiable question - for example a list of tallest buildings in Europe. I did it with Gemma on my laptop, and after the top 3 they were all fake. I just tried that again with Gemma-4 on my iphone, and it did even worse - the 3 tallest buildings in Europe are apparently the Burj Khalifa, the Torre Glories and the Shanghai Tower.

I wouldn't call that effective compression of information.


Yea, it's not an encyclopedia of facts. Language models store the FEELS of the data in vectors (or angles in Gemma4's case, it's a cool thing) not the exact string.

But what you can do with local models is give them actual data and tools to search it. Download a copy of Wikipedia locally, give the agent a way to search it and BOOM accurate information without an internet connection.

Also "small enough to live on disk" is a bit vague, especially when models get super stupid super fast when you get to the smaller size. At that point they're just basically 40k servitors that can use tools and nothing much.


I don't think any LLMs are good at accurately regurgitating arbitrary facts, unless they happen to be very common in their training, and certainly not good at making novel comparisons between them.

Others have addressed other aspects of this, but I want to address this:

> I cannot remember basic boilerplate stuff.

I don't know exactly what you mean by boilerplate stuff, but honestly, that's stuff we should have automated away prior to AI. We should not be writing boilerplate.

I'd highly encourage you to take the time to automate this stuff away. Not even with AI, but with scripts you can run to automate boilerplate generation. (Assuming you can't move it to a library/framework).


So many use cases for LLMs I've read leave me asking "did none of you have a working text editor?"

Jeez, I never remembered boilerplate stuff anyway. Losing grasp of your commonly used, slightly more involved code idioms in your key languages would probably be where I’d draw the ‘be concerned’ line. Like if I get into a car after years of only using public transit, I wouldn’t be too worried if I couldn’t immediately use a standard transmission smoothly. If I no longer could intuitively interact with urban traffic or merge onto a highway, I’d be a lot more concerned.

Lisp macros pretty much solved the boilerplate problem decades ago.

I read the "boilerplate" in that comment as "basic" meaning "I don't know how to center a div" or "I do not know how to remove duplicates from a collection"

Does anyone know how to centre a div?

Last time I looked there were at least seven ways to do it.


I think "How to center a DIV" was at the top of HN way more times than it should have been :)

margin: auto, or flex align-items/justify-content are my go-tos

Flexbox, mate

Well, both of those are easily retrieved from a web search; it's not a problem if you forget one or two. I'll probably need a refresher if I want to implement bubble sort again.
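For what it's worth, the refresher is short. A textbook bubble sort in JavaScript looks something like this (a standard sketch of the algorithm, nothing project-specific):

```javascript
// Classic bubble sort: repeatedly swap adjacent out-of-order pairs;
// stop early once a full pass makes no swaps.
function bubbleSort(arr) {
  const a = [...arr]; // copy so the input isn't mutated
  for (let end = a.length - 1; end > 0; end--) {
    let swapped = false;
    for (let i = 0; i < end; i++) {
      if (a[i] > a[i + 1]) {
        [a[i], a[i + 1]] = [a[i + 1], a[i]];
        swapped = true;
      }
    }
    if (!swapped) break; // already sorted
  }
  return a;
}

console.log(bubbleSort([5, 1, 4, 2, 8])); // [1, 2, 4, 5, 8]
```

The early-exit flag is the only detail people tend to forget; the nested-loop-and-swap core usually comes back on its own.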

The topic refers to being on an airplane without internet....

For my money, while surely it must have been jarring, that experience would seem to say that on-device LLMs are more important programming tools than package repositories.

As another commenter said, the affordability of LLM subscriptions (or, as others are predicting, the lack thereof) is the primary concern, not the technology itself stealing away your skills.

I am far from the definitive voice in the does-AI-use-corrupt-your-thinking conversation, and I don't want to be. I don't want LLMs to replace my thinking as much as the next person, but I also don't want to shun anything useful that can be gained from these tools.

All that said, I do feel that perhaps "dumber" LLMs that work on-device first will allow us to get further and be better, more reliable tools overall.


This conversation keeps missing me because I don't think I've typed out boilerplate in like 20 years.

Were people actually physically typing every character of the software they were writing before a couple of years ago?


Interesting. I don’t think my brother has written boilerplate in 20 years either. He’s a chef.

I on the other hand am a software engineer, so writing code is part of the job title.


Right but, were you really not using snippets and templates before LLMs?

Company? Helm? Whatever vi uses that's like company and helm? Haven't IDEs written function calls for you for like decades now?


I very rarely use autocomplete on Emacs, except hippie-expand. I have yasnippet installed but I have never used it. I just checked, it's not even key bound to anything.

I work on greenfield projects so I see my fair share of boilerplate, but honestly, it's just a minute part of the work, and it's almost meditative to write a little bit of trivial code (i.e. a function signature) in between sessions of hard thinking. Writing boilerplate is very far down the list of things I seek to optimize.

Also I don't use LLM.


Yes, I write a lot of things in notepad++ actually, every single character. It really feels like a craft and gives me the same joy as others experience with woodworking or gardening.

A couple of years ago I was (as a human being, not my career span) 20. Save for the usual StackOverflow / blog snippets, that was my experience, and I suppose that of most of those just starting out. I think it's very recent to have fresh grads that barely type code themselves.

Yes. Until the RSI sets in. Then you learn copy-paste, templating, abstracting and aliases real fast!

Will you do anything differently knowing this? Does the risk of LLMs being unaffordable to you in the near future make you wary about losing the skills?

Open models are currently within reach for most of the kind of writing I do. I still decide what it generates and why; I just don't do it manually.

I'm not super worried, either I still do the last leg of the work, or I go back an abstraction level with my prompts and work there


The person I replied to couldn't code on a plane, so presumably either doesn't use or doesn't like open models

I don't think it would take very long to regain those skills either.

Yes, they do come back faster than learning from scratch. However, what's possibly worrying is that our brains atrophy some faculties if we decide to skip the learning part altogether.

> But now, I'm useless. My mind has turned to pudding.

I do use AI daily to help me enhance code, but then... I also very regularly turn off, physically, the link between a sub-LAN at home and the Internet, and I can still work. It's incredibly relaxing to work on code without being connected 24/7. Other machines (like the kid's Nintendo Switch) can still access the Internet, but my development machines are cut off. And as I've got a little infra at home (Proxmox / VMs), I have quite a few services without needing to be connected to the entire world: for example I've got a pastebin, a Git server, and a backup procedure, all 100% functional without a net connection (well, 99% for the backup procedure, as the encrypted backup files won't be synced with remote servers until the connection is operational).

Sure it's not a "laptop on a plane", but it's also not "24/7 dependent on Sam Altman or Anthropic".

I'll probably enhance my setup at some point with a local Gemma model too.

And all this is not mutually exclusive with my Anthropic subscription: at the flick of a switch (which is right in front of me), my sub-LAN can access the Internet again.


I'm old enough to have programmed C on an IBM PC with the book Turbo C/C++ [1] at my desk side as a reference, around 1993.

I remember that at that time, my "mentor" suggested memorizing all the "keywords" of C (which were few). But given my bad memory I had to constantly look at the book.

Aaah how times have changed.

[1] https://openlibrary.org/books/OL1601323M/Turbo_C_C


I thought the same, but I tried to create a small Django project with APIs and a small React frontend from scratch, no LLM, no autocomplete, just a text editor. I was surprised it was all still there after a couple of hours. Not sure it's a skill that's useful today; it feels like remembering your multiplication table.

For years before AI, I wasn't able to code without reading a sample of code. Maybe it's just what happens when you're a polyglot, but I remember needing to see even stupid things, like how to declare a class in whatever language I was in. But once I saw a sample of code I'd get back into it. Then there's stuff I never committed to memory, like the nonsensical dance of reading from a file in Go, or whatever.

So I don't think this is all AI tbh.


It was a long time ago, but I attended a session by IBM at an OO conference. The speaker's claim was that the half-life of programming language knowledge was 6 months, i.e. if not reinforced, that's how fast it goes.

I learned the Q array language five years ago and then didn't touch it for six months. I was surprised how little I remembered when I tried to resume.


Maybe it's my memory issues, but I personally could never remember basic boilerplate. 30 years ago I would spend half of my time in Borland's help menu coupled with grepping through man pages. These days I use LLMs, including ollama when on a plane. I don't feel worse off.

I keep seeing this repeated but isn’t it a good thing you don’t remember boilerplate? This is not information that deserves to be memorised.

The fact that this is being called out is strange.


I haven't written complex code for so long I forgot how I used to type && on my keyboard. Wild times.

anecdata that is apropos of the actual submission is somehow at the top of the thread with 100 (now +1) replies

Honestly, you shouldn't be working on a plane. This thing where people are plugged in all the time is just insane.

Yes, you lost some abilities. Install local model so you have someone to talk to while you are on the plane ;)


If this was me you couldn't waterboard this info out of me.

Why? Is this is because of shame or fear of losing your job?

Because the info is no longer in their brain.

Because it's incredibly embarrassing to admit you can no longer do very basic programming tasks as a "professional" in that field.

I think the matter is that what "very basic programming tasks" actually means keeps sliding over the years. Surely in the beginning, writing Assembly counted as "very basic programming tasks", but as Algol and Fortran took over, those instead became the "very basic programming tasks".

Repeat this for decades, and "very basic programming tasks" might be creating a cross-platform browser by using LLMs via voice dictation.


Skill atrophy is intrinsically embarrassing, no matter what those skills are. I am embarrassed to admit that I have forgotten a lot of how to hand-optimize C code with inline assembly, even though few people do that anymore.

Soon everyone will run local models for simple stuff like that.

Good news for us Luddites. Keep it up.

boilerplate stuff will be like assembly code in the past; you won't need to use it.

Really? How long have you been a developer? I've been almost exclusively doing "agent coding" for the last year plus some months, and been a professional developer for a decade or something. Tried just now to write some random JavaScript, C#, Java, Rust and Clojure "manually" and it seems my muscle memory works just as well as two years ago.

I'm wondering if this is something that hits new developers faster than more experienced ones?


Probably depends on the individual. Senior developer here and I've always offloaded boilerplate and other "easy to google" things to search engines and now AI. Just how my brain and memory work. Anything I haven't used recently isn't worth keeping (in my subconscious mind's opinion anyway).

Yeah, having to look up the "basic boilerplate" stuff is not worse for me after starting to use AI than it was beforehand.

Experience isn't the problem. I have 20+ years of C++ development, built commercial software in Java, Rust, Python, played with assembly, Erlang, Prolog, Basic.

Played with these coding agents for the last couple weeks and instantly noticed the brainrot when I was staring at an empty vim screen trying to type a skeleton helloworld in C.

Luckily the right idioms came back after a couple of hours, but the experience gave me a big scare.


> Played with these coding agents for the last couple weeks and instantly noticed the brainrot

Very interesting; I wonder what makes our experiences so different? For you, "playing for a couple of weeks" had a stronger effect than using them almost exclusively for more than a year has had for me, and I don't think I'm an especially great programmer or anything, typical for my experience I think.


Same for me. Been fully agentic for half a year or so, still remember the myriad of programming languages and things just as well if there's no AI present at all. Hard to shake 15 years of experience that quick, unless maybe that experience never fully cemented?

Maybe the difference between actually knowing stuff vs surface level? I know a lot of devs just know how to glue stuff together, not really how to make anything, so I'd imagine those devs lose their skills much faster.


It's a side effect of using AI.

People using AI for tasks (essay writing in the MIT study linked below) showed lower ownership, brain connectivity, and ability to quote their work accurately.

> https://arxiv.org/abs/2506.08872

There was a MSFT and Carnegie Mellon study that saw a link between AI use, confidence in one's skills, confidence in AI, and critical thinking. The takeaway for me is that people are getting into "AI take the wheel" scenarios when using GenAI and not thinking about the task. This affects novices more than experts.

If you managed to do critical thinking, and had relegated sufficient code to muscle memory, perhaps you aren’t as impacted.


It's probably too much inside baseball to merit a study, but I'm curious if the results would change for part-time coders. When I'm not coding, I'm writing patents, doing technical competitive analysis, team building, etc.

My theory is that if you're not full-time coding, it's harder to remember the boilerplate and obligatory code entailed by different SDKs for different modules. That's where the documentation-reading time goes, and what slows down debugging. That's where agent-assisted coding helps me the most.


SDKs and Binary format descriptors are where I see agents failing the most, they are typically acceptable for the happy path but fail at the edge cases.

As an example I have been fighting with agents re-writing or removing guard clauses and structs when dealing with Mach-o fat archives this week, I finally had to break the parsing out into an external module and completely remove the ability for them to see anything inside that code.

I get the convenience for prototyping and throwaway code, but the problem is when you don’t have enough experience with the quirks to know something is wrong.

It will be code debt if one doesn’t understand the core domain. That is the problem with the confidence and surface level competence of these models that we need to develop methods for controlling.

Writing code is rarely the problem with programming in general, correctness and domain needs are the hard parts.

I hope we find a balance between gaining value from these tools while not just producing a pile of fragile abandonware


i think your environment plays a big role. with AI you can kind of code first, understand second. without AI, if you dont fully understand something then you havent finished coding it, and the task is not complete. if the deadline is too aggressive you push back and ask for more time. with AI, that becomes harder to do. you move on to the next thing before you are able to take the time to understand what it has done.

i dont think it is entirely a case of voluntary outsourcing of critical thinking. I think it's a problem of 1) total time devoted to the task decreasing, and 2) it's like trying to teach yourself puzzle solving skills when the puzzles are all solved for you quickly. You can stare at the answer and try to think about how you would have arrived at it, and maybe you convince yourself of it, but it should be relatively common sense that the learning value of a puzzle becomes obsolete if you are given the answer.


> [...] and ability to quote their work accurately.

I guess that's an advantage? People shouldn't have to burden their memory with boilerplate and CRUD code.


The task was essay writing, and the three groups were no tools, search, and ChatGPT.

The people who used ChatGPT had the most difficulty quoting their own work. So not boilerplate or CRUD - but yes, the advantage is clear for those types of tasks.

There were definite time and cognitive-effort savings: I think they measured ~60% time saved and a ~32% reduction in cognitive effort.

So it's pretty clear people are going to use this all over the place.


I recently had to write a coding interview question for candidates at my current job. I wrote the outline and had Claude generate synthetic data for it. I then wrote the solution without assistance to make sure it was viable (and because I wouldn't ask somebody to do something I wouldn't myself). I had no trouble getting it done in 20 minutes in a language that I haven't used actively in around a year and a half.

I do still write stuff manually frequently - I often spend 5 minutes writing structs or function signatures to make sure that the LLM won't misunderstand or make something up. Maybe that's why I haven't lost it.


That’s exactly why you haven’t lost it.

> I'm wondering if this is something that hits new developers faster than more experienced ones?

Almost certainly, at least according to Ebbinghaus' forgetting curve.


I can tell you that I can still code Python and Haskell just fine (I did those in vim without bothering to set up any language assistance), but Rust I only ever did with AI and IDE and compiler assistance.

> random JavaScript, C#, Java, Rust and Clojure "manually"

Right, sounds very credible to me. What did you write, an addition function in each of those?


Lol, thanks (I guess?), but it really isn't that hard. I don't think I know a single experienced developer who doesn't know at least 3-4 languages. I probably could add another couple of languages in there, but those are the ones I currently know best. Besides, once you've picked up a few languages, most of them look and work more similarly than differently. Through my Lisp-flavored lenses, C# and Java are basically the same language for most intents and purposes.

I wrote a little toy calculator in each; each ended up being ~250 LOC. Not exactly the biggest test, but large enough to see if my muscle memory still works, which I was happy to discover it does.


Well - that's the thing - toy calculators are easy on the "muscle memory". The operations and data types will be mostly similar in syntax across all these languages. If however you wanted to do something more akin to a real-world example, using this or that framework... it would probably look different. I wasn't disputing the knowledge of multiple languages btw - some of us had experience in languages from times way before C# and Java. The point is that for real-world work you won't be doing toy calculators, and the people pushing for "AI writing all the code" are not worried about you retaining the "muscle memory" to write addition and subtraction functions....

That's not how I understood others' experience to be; they're describing something that won't let them even write toy calculators. Selected quotes:

> But now, I'm useless. My mind has turned to pudding. I cannot remember basic boilerplate stuff

> Played with these coding agents for the last couple weeks and instantly noticed the brainrot when I was staring at an empty vim screen trying to type a skeleton helloworld in C.

This is very different from what I'm (not) experiencing. My test was for if I can remember the basic syntax of the language itself, I was never a big framework user, so of course using a framework is about the least interesting test I could do of myself.

Instead, I did the bare minimum to see if my "mind has turned to pudding" or "instantly noticed the brainrot", which would have been visible even for a toy calculator, obviously.

> the people pushing for "AI writing all the code" are not worried about you retaining the "muscle memory" to write addition and subtraction functions

What are they worried about then? From your perspective, sounds like they're worried about "using these and those frameworks" but that's far from "real world work" in my experience, and really the least interesting thing you could remember as a developer.


> I was never a big framework user,

So you only ever wrote code in an academic setting? Not being sarcastic, but no realistic software development in a commercial setting happens without frameworks.

> That's not how I understood other's experience to be, they're describing something that won't let them even write toy calculators. Selected quotes:

Well, that is literally not what they are describing. The man said "boilerplate code". Again, unless most of the code you wrote was for a purely academic setting, whether research or teaching, we all understand boilerplate code to mean something like "code to open a file descriptor / iterate through database rows / poll a web server for results", etc. - the typical stuff you implement while relying on abstractions and concepts someone else already defined for you.
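To make "boilerplate" concrete: the kind of thing meant here is a few lines of glue over someone else's abstractions, e.g. iterating database rows. A self-contained Python sketch using the stdlib sqlite3 module (the table and data are invented for illustration):

```python
# The sort of "boilerplate" being discussed: open a store, iterate its rows.
# An in-memory SQLite DB keeps the example self-contained.
import sqlite3

def total_quantity():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("widget", 2), ("gadget", 5)])
    # The boilerplate in question: iterate through database rows.
    total = sum(qty for _item, qty in conn.execute("SELECT item, qty FROM orders"))
    conn.close()
    return total
```

None of this is hard, but it is exactly the memorized-glue layer that people report losing first.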

> What are they worried about then?

If I had to guess - they are worried about the growing LLM backlash working against their hoped-for industry-wide adoption, and about their infinite money at some point running out, because at the end of the day, NVIDIA lending OpenAI 100B to invest in MS Azure, and Azure using those 100B to purchase NVIDIA chips, is starting to look a lot like circular financing.

> sounds like they're worried about "using these and those frameworks" but that's far from "real world work" in my experience

Again, if your area of work is academic or teaching, then this probably matches your own real-world experience. The problem for the LLM crowd is that there are a lot more people here whose "real world experience" is not toy calculators and isolated algorithm implementations, but actually, yes, using "this and that framework", as otherwise the implementation would take unrealistically long. I may know how TLS works, but would I implement its handshake routine on my own? Only as a personal exercise, not in a commercial scenario. Same for UI, same for DB ORMs, etc. You name it.
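The TLS point is easy to illustrate: in practice you lean on the platform's implementation rather than rolling the handshake yourself. A minimal sketch using Python's stdlib `ssl` module (the minimum-version choice here is just a plausible policy, not anything from the thread):

```python
# Relying on an existing abstraction instead of implementing TLS by hand.
# create_default_context() gives sane defaults: certificate checking,
# hostname verification, and a current protocol baseline.
import ssl

def make_tls_context():
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older protocols
    return ctx

ctx = make_tls_context()
```

Three lines of framework use stand in for thousands of lines of handshake, record-layer, and X.509 logic that nobody reimplements commercially.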


probably a junior/semi sr developer?

I guess writing code is now like creating punch-cards for old computers. Or even more recently, as writing ASM instead of using a higher level language like C. Now we simply write our "code" in a higher language, natural language, and the LLM is the compiler.

> Now we simply write our "code" in a higher language, natural language, and the LLM is the compiler.

No, we don't, and we never should - compilers need to be deterministic.


It needs to be something stronger than just deterministic.

With the right settings, an LLM is deterministic. But even then, small variations in input can cause very unforeseen changes in output - sometimes drastic, sometimes minor. Knowing that I'm likely misusing the vocabulary, I'd say this counts as the output being chaotic, so we need compilers to be non-chaotic (and deterministic - though I think you could have something that is non-deterministic yet non-chaotic). I'm not sure a non-chaotic LLM could ever exist.

(Thinking on it a bit more, there are some esoteric languages that might be chaotic, so this might be more difficult to pin down than I thought.)
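"Deterministic but chaotic" is a real and easy-to-demonstrate combination; the logistic map is the textbook example. The same input always produces the same output, yet a tiny perturbation of the input is amplified until the two runs bear no resemblance - which is roughly the property being attributed to LLMs here:

```python
# Deterministic but chaotic: the logistic map x -> r*x*(1-x) at r = 4.
def logistic_orbit(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Track how far two nearly identical inputs drift apart over the orbit.
def max_divergence(x0, eps=1e-9, steps=50, r=4.0):
    x, y, worst = x0, x0 + eps, 0.0
    for _ in range(steps):
        x, y = r * x * (1.0 - x), r * y * (1.0 - y)
        worst = max(worst, abs(x - y))
    return worst
```

`logistic_orbit(0.2, 50)` is perfectly reproducible run after run (deterministic), while `max_divergence(0.2)` shows a 1e-9 input difference blowing up to macroscopic scale (chaotic). A compiler must avoid the second property, not just the first.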


Why?

Also, give the same programming task to 2 devs and you end up with 2 different solutions. Heck, have the same dev do the same thing twice and you will have 2 different ones.

Determinism seems like this big gotcha, but in itself, is it really?


> Heck, have the same dev do the same thing twice and you will have 2 different ones

"Do the same thing" I need to be pedantic here because if they do the same thing, the exact same solution will be produced.

The compiler needs to guarantee that across multiple systems. How would QA know they're testing the version that is staged to be pushed to prod if you can't guarantee it's the same?
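This is exactly what reproducible builds buy you: if compilation is deterministic, QA can verify byte-for-byte that the artifact they tested is the one staged for prod, just by comparing digests. A minimal sketch (the artifact bytes here are placeholders):

```python
# "Same version" check in the reproducible-build sense:
# two artifacts are identical iff their byte-level digests match.
import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

def same_artifact(tested: bytes, staged: bytes) -> bool:
    return digest(tested) == digest(staged)
```

With a nondeterministic "compiler", the staged artifact can differ from the tested one even for identical input, and this whole guarantee evaporates.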


This is not what a compiler is in any sense.

I cringe every time I read this "punch card" narrative. We are not at that stage at all. You are comparing deterministic tooling with LLMs, which are not deterministic and may or may not give you what you want. In fact, I personally barely use autonomous agents in my brownfield codebase because they generate so much unmaintainable slop.

Except that this compiler is a non-deterministic pull of a slot-machine handle. No thanks, I'll keep my programming skills; COBOL programmers command a huge salary in 2026, and soon all competent programmers will.

Would anybody be so kind to enlighten me with some context?


Proto-websites - technically called hypermedia - basically a locally stored website, and pioneers were trying things out like putting books, information, and functionality in them. In this case, it's the Sprawl trilogy by William Gibson in HyperCard format, so this is also a retro computing discovery.


"The Expanded Books Project was a project by The Voyager Company during 1991, that investigated how a book could be presented on a computer screen in a way that would be both familiar and useful to regular book readers. The project focused on perfecting font choice, font size, line spacing, margin notes, book marks, and other publishing details to work in digital format."


What exactly are you unclear on?

Edit: not sure why I was downvoted, I’m literally attempting to provide whatever information you need about this, just not sure what aspect is confusing.


Your question was probably misinterpreted as sarcasm. :-(


Gives me MongoDB vibes. This whole AI coding thing too. On one side, a religious loud following; on the other side, the naysayers. We'll probably end up in the middle.


Fwiw, Claude and Codex are very, very good at SQL and have actually taught me some new tricks. No reason to use MongoDB or Firebase in 2026: https://postgresisenough.dev


Did an AI write this? Completely missed the point ^^'


I agree. I, for one, welcome LLMs for sensible uses: trivia, code boilerplate, content synthesis. I wouldn't trust them with anything else, and I have found that any gains from speedups in code writing are offset by the extra cognitive load of digesting code I didn't write.


Flashing a bunch of qr codes should do it


But shouldn't everybody have equal access to these markets?


I think the point is that software wasn't ready and now it might be.


How does an eFuse even work?


They should make a version of that which runs as an app on a phone


They have one, it's called React Native Web Native


What are we all using as assistants? I tend to copy-paste my code into Gemini. I tried some VS Code assistants, but I can't get them to do the things I want (like look at selected text, or only do small things)...

