I have mentioned this in a few comments: for my CS classes I have gone from a historical 60-80% projects / 40-20% quizzes grade split to a 50/50 split, and have moved my quizzes from being online to being in-person, pen-on-paper, with one sheet of hand-written notes.
Rather than banning AI, I'm showing students how to use it effectively as a personalized TA. I'm giving them this AGENTS.md file:
https://gist.github.com/1cg/a6c6f2276a1fe5ee172282580a44a7ac
And showing them how to use AI to summarize the slides into a quiz review sheet, generate example questions with answer walkthroughs, etc.
Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves: the projects are designed to draw them into the art of programming and give them decent, real-world coding experience that they will need, even if they end up working at a higher level in the future.
AI can be a very effective tool for education if used properly. I have used it to create a ton of extremely useful visualizations (e.g. how two's complement works) that I wouldn't have otherwise. But it is obviously extremely dangerous as well.
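(For anyone who hasn't seen two's complement before, here is a minimal sketch of the idea in Python; this is illustrative only, not the visualization described above.)

    # Two's complement in an 8-bit word: a negative number is stored as
    # its value modulo 2**8, so -5 becomes 251 (0b11111011).
    WIDTH = 8

    def to_twos_complement(n, width=WIDTH):
        """Return the two's-complement bit pattern of n as a string."""
        return format(n & ((1 << width) - 1), "0{}b".format(width))

    def from_twos_complement(bits):
        """Interpret a bit string as a signed two's-complement integer."""
        value = int(bits, 2)
        if bits[0] == "1":          # sign bit set, so the value is negative
            value -= 1 << len(bits)
        return value

    for n in (5, -5, -1, -128, 127):
        bits = to_twos_complement(n)
        print(n, "->", bits, "->", from_twos_complement(bits))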
"It is impossible to design a system so perfect that no one needs to be good."
I had planned to move towards projects counting towards the majority of my CS class grades until ChatGPT was released; now I've stuck with a 50/50 split. This year I said they were free to use AI all they liked (as if I can do anything about it anyway), then ran interviews with the students about their project work, asking them to explain how it works etc. Took a lot of time with a class of 60 students, but worked pretty well, plus they got some experience developing the important skill of communicating technical ideas.
Would like to give them some guidance on how to get AI to help prepare them for their interviews next year, will definitely take a look at your AGENTS.md approach.
> Then ran interviews with the students about their project work, asking them to explain how it works etc. Took a lot of time with a class of 60 students, but worked pretty well, plus they got some experience developing the important skill of communicating technical ideas.
This is amazing and I wish professors had done this back when I did CS in the late 1990s.
I would absolutely love to do individual interviews, but I have three classes of 50-80 students each and, at 10 minutes per interview, that would be ~35 hours' worth of interviewing, and there just isn't time to do that given the schedules of the students, etc.
My feedback has been pretty good on the in-person quizzes; we just had our first set.
I'm not sure I agree with the example interactions.
If a lecturer prepared slides with, basically, an x86 assembly example to show how to loop, what is so bad about an AI regurgitating that and possibly even annotating it with the inner workings?
> AI can be a very effective tool for education if used properly. I have used it to create a ton of extremely useful visualizations
I feel like this is still underappreciated. Awesome, meaningful diagrams with animations that would take me days to make even in a basic form can now be generated in under an hour with all the styling bells and whistles. It's amazing in practice because those things can deliver lots of value, but still weren't worth the effort before. Now you just tell the LLM to use anime.js and it will do a decent job.
You seem like a great professor(/“junior baby mini instructor who no one should respect”, knowing American academic titles…). Though as someone who's been on the other end of the podium a bit more recently, I will point out the maybe-obvious:
> Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves
This is the right thing to say, but even the ones who want to listen can get into bad habits in response to intense schedules. When push comes to shove and Multivariate Calculus exam prep needs to happen but you’re stuck debugging frustrating pointer issues for your Data Structures project late into the night… well, I certainly would’ve caved far too much for my own good.
IMO the natural fix is to expand your trusting, “this is for you” approach to the broader undergrad experience, but I can’t imagine how frustrating it is to be trying to adapt while admin & senior professors refuse to reconsider the race for a “””prestigious””” place in a meta-rat race…
For now, I guess I’d just recommend you try to think of ways to relax things and separate project completion from diligence/time management — in terms of vibes if not a 100% mark. Some unsolicited advice from a rando who thinks you’re doing great already :)
> When push comes to shove and Multivariate Calculus exam prep needs to happen but you’re stuck debugging frustrating pointer issues for your Data Structures project late into the night…
Millions of students prior to the last few years figured out how to manage conflicting class requirements.
> Millions of students prior to the last few years figured out how to manage conflicting class requirements.
Sure, and they also didn't have an omniscient entity capable of doing all of their work for them in a minute. The point of the GP comment, in my reading, is that the temptation is too great.
Ok well hopefully the new kids just grit their teeth and ignore the changing conditions, then? Surely once we inform them that they should be better people they'll snap out of having personal failings!
Yes, I expect that pressure will be there, and project grades will be near 100% going forward, whether the student did the work or not.
This is why I'm moving to in-person written quizzes to differentiate between the students who know the material and those who are just using AI to get through it.
I do seven quizzes during the semester so each one is on relatively recent material and they aren't weighted too heavily. I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz. I hated the high-pressure midterms/finals of my undergrad, so I'm trying to remove that for them.
The quizzes are still somewhat difficult (and fairly frequent) so you have to still get your stuff done (and more consistently than the cramming encouraged by a big midterm/final)
I do spaced repetition in lectures, my homeworks are typically programming problems and, as I said in OP, rely on the student committing to doing them w/o AI. So spaced repetition of the most important topics on quizzes seems reasonable. (It's an experiment this semester)
Do you find advocating for AI literacy to be controversial amongst peers?
I find, as a parent, when I talk about it at the high school level I get very negative reactions from other parents. Specifically, I want high schoolers to be skilled in the use of AI, and in particular to have critical thinking skills around the tools, while simultaneously having skills that assume no AI. I don’t want the school to be blindly “anti AI” as I’m aware it will be a part of the economy our kids are brought into.
There are some head-in-the-sand, very emotional attitudes about this stuff. (And obviously idiotically uncritical pro-AI stances, but I doubt educators risk having those stances.)
AI is extremely dangerous for students and needs to be used intentionally, so I don't blame people for just going to "ban it" when it comes to their kids.
Our university is slowly stumbling towards "AI Literacy" being a skill we teach, but, frankly, most faculty here don't have the expertise and students often understand the tools better than teachers.
I think there will be a painful adjustment period, I am trying to make it as painless as possible for my students (and sharing my approach and experience with my department) but I am just a lowly instructor.
People need to learn to do research with LLMs, code with LLMs, and how to evaluate artifacts created by AI. They need to learn how agents work at a high level, the limitations of context, and that models hallucinate and become sycophantic. How they need guardrails and strict feedback mechanisms if let loose, AI safety when connecting to external systems, etc.
You're right that few high school educators would have any sense of all that.
Before I was taught to use a graphing calculator, we learned by using graph paper and a pencil to plot linear equations. Once we had that mastered, we were taught how to use the calculators so we could be more efficient with our time and move on to more complex topics.
The sycophancy is an artifact of how they RLHF train the popular chat models to appeal to normies, not fundamental to the tool. I can't remember encountering it at all since I've started using codex, and in fact it regularly fills in gaps in my knowledge/corrects areas that I misunderstand. The professional tool has a vastly more professional demeanor. None of the "that's the key insight!" crap.
If a student uses AI to simply code-gen without understanding the code (e.g. in my compilers class if they just generate the recursive-descent parser w/Claude, fixing all the tests) then they are robbing themselves of the opportunity to learn how to code.
In OP I showed an AGENTS.md file I give my students. I think this is using AI in a manner productive for intellectual development.
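(For readers outside a compilers course: a recursive-descent parser is small enough that hand-writing one is most of the point. Below is a toy Python sketch for arithmetic expressions, purely illustrative and not the course assignment.)

    # Toy recursive-descent parser/evaluator for expressions like "1 + 2 * (3 - 4)".
    # Grammar:  expr   -> term (('+'|'-') term)*
    #           term   -> factor (('*'|'/') factor)*
    #           factor -> NUMBER | '(' expr ')'
    import re

    def tokenize(src):
        return re.findall(r"\d+|[-+*/()]", src)

    class Parser:
        def __init__(self, tokens):
            self.tokens = tokens
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def eat(self, tok=None):
            cur = self.peek()
            if tok is not None and cur != tok:
                raise SyntaxError("expected {!r}, got {!r}".format(tok, cur))
            self.pos += 1
            return cur

        def expr(self):
            value = self.term()
            while self.peek() in ("+", "-"):
                op = self.eat()
                rhs = self.term()
                value = value + rhs if op == "+" else value - rhs
            return value

        def term(self):
            value = self.factor()
            while self.peek() in ("*", "/"):
                op = self.eat()
                rhs = self.factor()
                value = value * rhs if op == "*" else value / rhs
            return value

        def factor(self):
            if self.peek() == "(":
                self.eat("(")
                value = self.expr()
                self.eat(")")
                return value
            return int(self.eat())

    print(Parser(tokenize("1 + 2 * (3 - 4)")).expr())   # prints -1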
Not OP, but I would imagine (or hope) that this attitude is far less common amongst peer CS educators. It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, both in industry and academia. The best-positioned students will be the ones who can operate these tools effectively but with a critical mindset, while also being able to do without AI as needed (which of course makes them better at directing AI when they do engage it).
That said I agree with all your points too: some version of this argument will apply to most white collar jobs now. I just think this is less clear to the general population and it’s much more of a touchy emotional subject, in certain circles. Although I suppose there may be a point to be made about being slightly more cautious about introducing AI at the high school level, versus college.
> It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now,
That's true, but you can't use AI in coding effectively if you don't know how to code. The risk is that students will complete an undergraduate CS degree, become very proficient in using AI, but won't know how to write a for loop on their own. Which means they'll be helpless to interpret the AI's output or to jump in when the AI produces suboptimal results.
My take: learning to use AI is not hard. They can do that on their own. Learning programming is hard, and relying on AI will only make it harder.
> My take: learning to use AI is not hard. They can do that on their own. Learning programming is hard, and relying on AI will only make it harder
Depends on what your definition of "hard" is - I routinely come across engineers who are frustrated that "AI" hallucinates. Humans can detect hallucinations, and I have specific processes to detect and address them. I wouldn't call those processes easy - I would say it's as hard as learning how to do integration by summing.
> but you can't use AI in coding effectively if you don't know how to code
Depends on the LLM. I have a fine-tuned version of Qwen3-Coder where, if you ask it to show you how to compare two strings in C/C++, it will, but then it will also suggest you look at a version that takes Unicode into account.
I have stumbled across very few software engineers who even know what Unicode code points are and why legacy ASCII string comparison fails.
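(A minimal Python illustration of the pitfall, since the same issue shows up in any language that compares raw bytes or code points: "café" can be written with a precomposed é, U+00E9, or as "e" plus a combining accent, U+0301, and a naive comparison treats those as different strings.)

    import unicodedata

    a = "caf\u00e9"      # 'café' with precomposed é (U+00E9)
    b = "cafe\u0301"     # 'café' as 'e' + combining acute accent (U+0301)

    print(a == b)               # False: the code point sequences differ
    print(a.encode("utf-8"))    # b'caf\xc3\xa9'
    print(b.encode("utf-8"))    # b'cafe\xcc\x81'

    # Normalizing both strings (NFC here) makes the comparison behave.
    print(unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b))   # True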
> but won't know how to write a for loop on their own. Which means they'll be helpless to interpret the AI's output or to jump in when the AI produces suboptimal results
That's a very large logical jump. If we went back 20 years, you might come across professors and practising engineers who were losing sleep that languages like C/C++ were abstracting the hardware so much that you could just write for loops and be helpless to understand how those for loops were causing needless CPU wait cycles by blocking the cache line.
> Depends on what your definition of "hard" is - I routinely come across engineers who are frustrated that "AI" hallucinates. Humans can detect hallucinations, and I have specific processes to detect and address them. I wouldn't call those processes easy - I would say it's as hard as learning how to do integration by summing.
My students don't seem to have a problem using AI: it's quite adequate to the task of completing their homework for them. I therefore don't feel a need to complete my buzzword bingo by promoting an "AI-first classroom." The concern is what they'll do when they find problems more challenging than their homework.
> I have stumbled across very few software engineers who even know what Unicode code points are and why legacy ASCII string comparison fails.
You are proving my point. If the programmer doesn't know what Unicode is, then the AI's helpful suggestion is likely to be ignored. You need to know enough to be able to make sense of the AI beyond a superficial measure.
> That's a very large logical jump. If we went back 20 years, you might come across professors and practising engineers who were losing sleep that languages like C/C++ were abstracting the hardware so much that you could just write for loops and be helpless to understand how those for loops were causing needless CPU wait cycles by blocking the cache line.
We still teach that stuff. Being an engineer requires understanding the whole machine. I'm not talking about mid-level marketroids who are excited that Claude can turn their Excel sheets into PowerPoints. I'm talking about actual engineers who take responsibility for their code. For every helpful suggestion that AI makes, it botches something else. When the AI gives up, where do you turn?
> It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, both in industry and academia.
No, it's not.
Nothing around AI past the next few months to a year is clear right now.
It's very, very possible that within the next year or two, the bottom falls out of the market for mainstream/commercial LLM services, and then all the Copilot and Claude Code and similar services are going to dry up and blow away. Naturally, that doesn't mean that no one will be using LLMs for coding, given the number of people who have reported their productivity increasing—but it means there won't be a guarantee that, for instance, VS Code will have a first-party integrated solution for it, and that's a must-have for many larger coding shops.
None of that is certain, of course! That's the whole point: we don't know what's coming.
I get a slow-but-usable ~10 tok/s on a 2-bit-ish quant of Kimi 2.5 on a high-end gaming / low-end workstation desktop (RTX 4090, 256 GB RAM, Ryzen 7950). Right now the price of RAM is silly, but when I built it it was similar in price to a high-end MacBook - which is to say it isn’t cheap, but it’s available to just about everybody in western countries. The quality is of course worse than what the bleeding-edge labs offer, especially since heavy quants are particularly bad for coding, but it is good enough for many tasks: an intelligent duck that helps with planning, generating bog-standard boilerplate, Google-less interactive search/Stack Overflow (“I ran flamegraph and X is an issue, what are my options here?” etc).
My point is, I can get somewhat-useful ai model running at slow-but-usable speed on a random desktop I had lying around since 2024. Barring nuclear war there’s just no way that AI won’t be at least _somewhat_ beneficial to the average dev. All the AI companies could vanish tomorrow and you’d still have a bunch of inference-as-a-service shops appearing in places where electricity is borderline free, like Straya when the sun is out.
Yes, you, a hobbyist, can make that work, and keep being useful for the foreseeable future. I don't doubt that.
But either a majority or large plurality of programmers work in some kind of large institution where they don't have full control over the tools they use. Some percentage of those will never even be allowed to use LLM coding tools, because they're not working in tech and their bosses are in the portion of the non-tech public that thinks "AI" is scary, rather than the portion that thinks it's magic. (Or, their bosses have actually done some research, and don't want to risk handing their internal code over to LLMs to train on—whether they're actually doing that now or not, the chances that they won't in future approach nil.)
And even those who might not be outright forbidden to use such tools for specific reasons like the above will never be able to get authorization to use them on their company workstations, because they're not approved tools, because they require a subscription the company won't pay for, because etc etc.
So saying that clearly coding with LLM assistance is the future and it would be irresponsible not to teach current CS students how to code like that is patently false. It is a possible future, but the volatility in the AI space right now is much, much too high to be able to predict just what the future will bring.
I never understand anyone's push to throw around AI slop coding everywhere. Do they think in the back of their heads that this means coding jobs are going to come back on-shore? Because AI is going to make up for the savings? No, what it means is tech bro CEOs are going to replace you even more and replace at least a portion of the off-shore folks that they're paying.
The promise of AI is a capitalist's dream, which is why it's being pushed so much. Do more with less investment. But the reality of AI coding is significantly more nuanced, and particularly more nuanced in spaces outside of the SRE/devops space. I highly doubt you could realistically use AI to code the majority of significant software products (like, say, an entire operating system). You might be able to use AI to add additional functionality you otherwise couldn't have, but that's not really what the capitalists desire.
Not to mention, the models have to be continually trained, otherwise the knowledge is going to be dead. Is AI as useful for Rust as it is for Python? Doubtful. What about the programming languages created 10-15 years from now? What about when everyone starts hoarding their information away from the prying eyes of AI scraper bots to keep competitive knowledge in-house? Both from a user perspective and a business perspective?
Lots of variability here that literally nobody has any idea how any of it's going to go.
That is not remotely clear. Every shred of evidence I've seen is that AI at best is a net zero on productivity. More often it drains productivity (if people are checking up on it as they should) or makes the software shitty (if they don't). I genuinely don't understand how people are willing to take this broken-ass tool and go "oh yeah this has transformed the industry".
Yes, the genie is out of the bottle but could get back right in when it starts costing more, a whole lot more. I'm sure there's an amount of money for a monthly subscription at which you'd either scale back your use or consider other alternatives. LLMs as a technology are indeed out of the bottle and here to stay, but the current business around them is not quite clear.
I've pondered that point, using my monthly car payment and usage as a barometer. I currently spend 5% on AI compared to my car, and I get far more value out of AI.
> Yes, the genie is out of the bottle but could get back right in when it starts costing more, a whole lot more.
Local models are already good enough to handle some meaningful programming work, and they run very well on an expensive-but-not-unattainable PC. You could cheat your way through an undergrad CS curriculum with Qwen 80b, certainly, including most liberal-arts requirements.
The genie is not going back in the bottle no matter what happens, short of a nuclear war. There is no point even treating the possibility hypothetically.
Yeah, like Windows in 2026 is better than Windows in 2010, Gmail in 2026 is better than Gmail in 2010, the average website in 2026 is better than in 2015, Uber is better in 2026 than in 2015, etc.
Plenty of tech becomes exploitative (or more exploitative).
I don't know if you noticed but 80% of LLM improvements are actually procedural now: it's the software around them improving, not the core LLMs.
Plus LLMs have huge potential for being exploitative. 10x what Google Search could do for ads.
You're crossing products with technology, and also doing some cherry-picking of personal perspectives.
I personally think GSuite is much better today than it was a decade ago, but that is separate
The underlying hardware has improved, as have the network, the security, and the provenance.
Specific to LLMs:
1. We have seen rapid improvements, and there are a ton more you can see in the research that will be impacting the next round of the model train/release cycle. Both algorithms and hardware are improving.
2. Open-weight models are within spitting distance of the frontier. Within 2 years, smaller and open models will be capable of what the frontier is doing today. This has huge democratization potential.
I'd rather see AI as an opportunity to break the oligarchy and the corporate hold over the people. I'm working hard to make it a reality (I'm also working on atproto).
Jumping from someone using a word to assigning a pejorative label to them is by definition a form of bigotry.
Democratization, the way I'm using it without all the bias, is simply most people having access to build with a tool or a technology. Would you also argue everyone having access to the printing press is a bad thing? The internet? Right to repair? Right to compute?
It's at most self-loathing. I'm on HN, so I'm a tech bro. Or I'm bigoted against my own people, which I'm perfectly fine with.
> Why should we consider Ai access differently?
Because people with money, let's call them "capitalists", are using the internet and AI to consolidate power while putting nothing in the place of the things they're replacing. They need a lot of fellow travelers, which are abundant, anyway.
What's different about the internet? Mobile devices, AI? They're everywhere and being used as part of a surveillance network that has absolutely nothing in common with the printing press and literacy.
I hope you're right and we all end up with our smart, super capable, private AI. I don't see it happening. Everyone uses Gmail, WhatsApp, Discord, Uber, etc. I don't f** care about the 1% that don't, they have no power, no influence. At large, they don't exist.
Show me actual studies that clearly demonstrate not only that using an LLM code assistant helps produce code faster in the short term, but that it doesn't waste all that extra benefit by being that much harder to maintain in the long term.
No such studies can exist since AI coding has not been around for a long term.
Clearly AI is much faster and good enough to create new one-off bits of code.
Like I tend to create small helper scripts for all kinds of things both at work and home all the time. Typically these would take me 2-4 hours and aside from a few tweaks early on, they receive no maintenance as they just do some one simple thing.
Now with AI coding these take me just a few minutes, done.
But I believe this is the optimal productivity sweet spot for AI coding, as no maintenance is needed.
I've also been running a couple experiments vibe-coding larger apps over the span of months and while initial ramp-up is very fast, productivity starts to drop off after a few weeks as the code becomes more complex and ever more full of special case exceptions that a human wouldn't have done that way. So I spend more and more time correcting behavior and writing test cases to root out insanity in the code.
How will this go for code bases which need to continuously evolve and mature over many years and decades? I guess we'll see.
> it doesn't waste all that extra benefit by being that much harder to maintain in the long term.
If AI just generates piles of unmaintainable code, this isn't going to be any worse than most of the professionally-written (by humans) code I've had to work with over my career. In my experience, readable and maintainable code is unfortunately rather uncommon.
Yeah. At this point, at the start of 2026, people that are taking these sorts of positions with this sort of tone tend to have their identity wrapped up in wanting AI to fail or go away. That’s not conducive to a reasoned discussion.
There are a whole range of interesting questions here that it’s possible to have a nuanced discussion about, without falling into AI hype and while maintaining a skeptical attitude. But you have to do it from a place of curiosity rather than starting with hatred of the technology and wishing for it to be somehow proved useless and fade away. Because that’s not going to happen now, even if the current investment bubble pops.
If anything, I see this moment as one where we can unshackle ourselves from the oligarchs and corporate overlords. The two technologies are AI and ATProto, I work on both now to give sovereignty back to we the people
> I see this moment as one where we can unshackle ourselves from the oligarchs and corporate overlords.
For me, modern AI appears to be controlled entirely by oligarchs and corporate overlords already. Some of them are the same who already shackled us. This time will not be different, in my opinion.
I agree with you that everything is changing and that we don’t know what’s coming, but I think you really have to stretch things to imagine that it’s a likely scenario that AI-assisted coding will “dry up and blow away.” You’ll need to elaborate on that, because I don’t think it’s likely even if the AI investment bubble pops. Remember that inference is not really that expensive. Or do you think that things shift on the demand side somehow?
I think that even if inference is "not really that expensive", it's not free.
I think that Microsoft will not be willing to operate Copilot for free in perpetuity.
I think that there has not yet been any meaningful large-scale study showing that it improves performance overall, and there have been some studies showing that it does the opposite, despite individuals' feeling that it helps them.
I think that a lot of the hype around AI is that it is going to get better, and if it becomes prohibitively expensive for it to do that (ie, training), and there's no proof that it's helping, and keeping the subscriptions going is a constant money drain, and there's no more drumbeat of "everything must become AI immediately and forever", more and more institutions are going to start dropping it.
I think that if the only programmers who are using LLMs to aid their coding are hobbyists, independent contractors, or in small shops where they get to fully dictate their own setups, that's a small enough segment of the programming market that we can say it won't help students to learn that way, because they won't be allowed to code that way in a "real job".
I think the "genie" that is out of the bottle is that there is no broad, deeply technical class who can resist the allure of the AI agent. A technical focus does not seem to provide immunity.
In spite of obvious contradictory signals about quality, we embrace the magical thinking that these tools operate in a realm of ontology and logic. We disregard the null hypothesis, in which they are more mad-libbing plagiarism machines which we've deployed against our own minds. Put more tritely: We have met the Genie, and the Genie is Us. The LLM is just another wish fulfilled with calamitous second-order effects.
Though enjoyable as fiction, I can't really picture a Butlerian Jihad where humanity attempts some religious purge of AI methods. It's easier for me to imagine the opposite, where the majority purges the heretics who would question their saints of reduced effort.
So, I don't see LLMs going away unless you believe we're in some kind of Peak Compute transition, which is pretty catastrophic thinking. I.e. some kind of techno/industrial/societal collapse where the state of the art stops moving forward and instead retreats. I suppose someone could believe in that outcome, if they lean hard into the idea that the continued use of LLMs will incapacitate us?
Even if LLM/AI concepts plateau, I tend to think we'll somehow continue with hardware scaling. That means they will become commoditized and able to run locally on consumer-level equipment. In the long run, it won't require a financial bubble or dedicated powerplants to run, nor be limited to priests in high towers. It will be pervasive like wireless ear buds or microwave ovens, rather than an embodiment of capital investment.
The pragmatic way I see LLMs _not_ sticking around is where AI researchers figure out some better approach. Then, LLMs would simply be left behind as historical curiosities.
The first half of your post, I broadly agree with.
The last part...I'm not sure. The idea that we will be able to compute-scale our way out of practically anything is so much taken for granted these days that many people seem to have lost sight of the fact that we have genuinely hit diminishing returns—first in the general-purpose computing scaling (end of Moore's Law, etc), and more recently in the ability to scale LLMs. There is no longer a guarantee that we can improve the performance of training, at the very least, for the larger models by more than a few percent, no matter how much new tech we throw at it. At least until we hit another major breakthrough (either hardware or software), and by their very nature those cannot be counted on.
Even if we can squeeze out a few more percent—or a few more tens of percent—of optimizations on training and inference, to the best of my understanding, that's going to be orders of magnitude too little yet to allow for running the full-size major models on consumer-level equipment.
Compare models from one year ago (GPT-4o?) to models from this year (Opus 4.5?). There are literally hundreds of benchmarks and metrics you can find. What reality do you live in?
There is so much confusion on this topic. Please don't spread more of it; the answers are just a quick google away. To spell it out:
1) AI companies make money on the tokens they sell through their APIs. At my company we run Claude Code by buying Claude Sonnet and Opus tokens from AWS Bedrock. AWS and Anthropic make money on those tokens. The unit economics are very good here; estimates are that Anthropic and OpenAI have a gross margin of 40% on selling tokens.
2) Claude Code subscriptions are probably subsidized somewhat on a per token basis, for strategic reasons (Anthropic wants to capture the market). Although even this is complicated, as the usage distribution is such that Anthropic is making money on some subscribers and then subsidizing the ultra-heavy-usage vibe coders who max out their subscriptions. If they lowered the cap, most people with subscriptions would still not max out and they could start making money, but they'd probably upset a lot of the loudest ultra-heavy-usage influencer-types.
3) The biggest cost AI companies have is training new models. That is the reason AI companies are not net profitable. But that's a completely separate set of questions from what inference costs, which is what matters here.
Without training new models, existing models will become more and more out of date, until they are no longer useful - regardless of how cheap inference is. Training new models is part of the cost basis and can't be hand-waved away.
Only if you’re relying on the models to recall facts from their training set - intuitively, at sufficient complexity, a model's ability to reason is what is critical, and its answers can be kept up to date with RAG.
Unless you mean out of date == no longer SOTA reasoning models?
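(For context, a minimal sketch of the RAG idea in Python: retrieve documents relevant to the question and prepend them to the prompt, so the model can answer from material newer than its training data. The retrieval below is a toy keyword overlap, the example documents are invented, and call_llm would be whatever model client you actually use.)

    # Toy retrieval-augmented generation (RAG) sketch. Real systems use
    # embedding similarity and a vector store; keyword overlap keeps this
    # self-contained and runnable.
    docs = [
        "Framework 3.0, released after the model's training cutoff, renamed init to bootstrap.",
        "The standard library gained a streaming CSV reader in version 3.0.",
    ]

    def retrieve(question, documents, k=1):
        """Rank documents by how many words they share with the question."""
        q_words = set(question.lower().split())
        scored = sorted(documents,
                        key=lambda d: len(q_words & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

    def build_prompt(question):
        context = "\n".join(retrieve(question, docs))
        return "Use the context to answer.\nContext:\n" + context + "\n\nQuestion: " + question

    # call_llm(build_prompt(q)) would go here; the prompt itself shows the idea.
    print(build_prompt("What replaced init in Framework 3.0?"))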
'ability to reason' implies that LLMs are building a semantic model from their training data, whereas the simplest explanation for their behavior is that they are building a syntactic model (see Plato's Cave). Thus without new training they cannot 'learn', RAG or no RAG.
If you're using the models to assist with coding—y'know, what this thread is about?—then they'll need to know about the language being used.
If you're using them for particular frameworks or libraries in that language, they'll need to know about those, too.
If training becomes uneconomical, new advances in any of these will no longer make it into the models, and their "help" will get worse and worse over time, especially in cutting-edge languages and technologies.
LLMs will stop being trained, as that enormous upfront investment will have been found to not produce the required return. People will continue to use the existing models for inference, not least as the (now bankrupt) LLM labs attempt to squeeze the last juice out of their remaining assets (trained LLMs). However these models will become more and more outdated, less and less useful, until they are not worth the electricity to do the inference anymore. Thus it will end.
> It's very, very possible that within the next year or two, the bottom falls out of the market for mainstream/commercial LLM services, and then all the Copilot and Claude Code and similar services are going to dry up and blow away
That's not going to happen. It's already too late to consider that a realistic possibility.
> I find, as a parent, when I talk about it at the high school level I get very negative reactions from other parents. Specifically I want high schoolers to be skilled in the use of AI, and particular critical thinking skills around the tools, while simultaneously having skills assuming no AI. I don’t want the school to be blindly “anti AI” as I’m aware it will be a part of the economy our kids are brought into.
This is my exact experience as well and I find it frustrating.
If current technology is creating an issue for teachers, it's the teachers that need to pivot, not block current technology so they can continue with what they are comfortable with.
Society typically cares about work getting done and not much about how it got done. For some reason, teachers are so deep in the weeds of the "how" that they seem to forget: if the way to mend roads since 1926 has been to learn how to measure out, mix, and lay asphalt patches by hand, then in 2026, when there are robots that do that perfectly every time, they should be teaching humans to complement those robots or to do something else entirely.
It's possible that in the past, learning how to use an abacus was a critical lesson, but once calculators were invented, do we continue with two semesters of abacus? Do we allow calculators into the abacus course? Should the abacus course be scrapped? Will it be a net positive on society to replace the abacus course with something else?
"AI" is changing society fundamentally forever and education needs to change fundamentally with it. I am personally betting that humans in the future, outside extreme niches, are generalists and are augmented by specialist agents.
I'm also in favor of education for AI awareness. A big part of teaching kids about AI should also be how unreliable these tools can be.
I had a discussion with a recruiter on Friday, and I said I guess the issue with AI vs human is, if you give a human developer who is new to your company tasks, the first few times you'll check their work carefully to make sure the quality is good. After a while you can trust they'll do a good job and be more relaxed. With AI, you can never be sure at any time. Of course a human can also misunderstand the task and hallucinate, but perhaps discussing the issue and the fix before they start coding can alleviate that. You can discuss with an AI as much as you want, but to me, not checking the output would be an insane move...
To return to the point, yeah, people will use AI anyway, so why not teach them about the risks. Also, LLMs feel like Concorde: it'll get you where you want to go very quickly, but at tremendous environmental cost (it's also very costly to the wallet, although the companies are now partially subsidizing your use with the hope of getting you addicted).
Only if you naively throw AI carelessly at it. It sounds like you haven't mastered the basics like fine-tuning, semantic vector routing, agentic skills/tooling generation… dozens of other solutions that robustly solve for your claim.
1. Everything you learn about math is completely obsoleted by AI five years from now.
2. Everything you learn about working using chatbots is completely obsoleted by AI five years from now.
Both are possible, but 2 is pretty much guaranteed if we get 1, so learning about chatting with Opus is pretty much always less useful than learning derivatives by hand, unless you're starting job applications in less than a few months.
I think that's a great approach. I've thought about how to handle these issues and wonder how you handle several issues that come to mind:
Competing with students who use LLMs, 'honest' students would seem strongly incentivized to use LLMs themselves. Even if you don't grade on a curve, honest students will get worse grades, which will look worse to graduate schools, grant and scholarship committees, etc., in addition to the strong emotional component that everyone feels seeing an A or a C. You could give deserving 'honest' work an A, but then all LLM users will get A's with ease. It seems like you need two scales, and how do you know who to put on which scale?
And how do students collaborate on group projects? Again, it seems you have two different tracks of education, and they can't really work together. Edit: How do class discussions play out with these two tracks?
Also, manually doing things that machines do much better has value but also takes valuable time from learning more advanced skills that machines can't handle, and from learning how to use the machines as tools. I can see learning manual statistics calculations, to understand them fundamentally, but at a certain point it's much better to learn R and use a stats package. Are the 'honest' students being shortchanged?
As someone who has taught CS before, I just wanted to say thanks for doing all this for your students. They don't understand how much work you are putting in, but I'd like you to know that at least one other person does.
Thanks for taking the time for your students. Your students will thank you, too, but that will be years from now.
We just had our first set of in person quizzes and I gave them one question per page, with lots of space for answers.
I'm concerned about handwriting, which is a lost skill, and how hard that will be on the TAs who are grading the exams. I have stressed to students that they should write larger, slower and more carefully than normal. I have also given them examples of good answers: terse and to the point, using bulleted lists effectively, what good pseudo-code looks like, etc.
It is an experiment in progress: I have rediscovered the joys of printing & the logistics of moving large amounts of paper again. The printer decided halfway through one run to start folding papers slightly at the corner, which screwed up stapling.
> I have also given them examples of good answers: terse and to the point
Oh man, this reminds me of one test I had in uni, back in the days when all our tests were in class, pen & paper (what's old is new again?). We had this weird class that taught something like security programming in unix. Or something. Anyway, all I remember is the first two questions being about security/firewall stuff, and the third question was "what is a socket". So I really liked the first two questions, and over-answered for about a page each. Enough text to both run out of paper and out of time. So my answer to the 3rd question was "a file descriptor". I don't know if they laughed at my terseness or just figured since I overanswered on the previous questions I knew what that was, but whoever graded my paper gave me full points.
Reasonable accommodations have been made for students with disabilities for decades now. While there might be some cases where AI might be helpful for accommodating students, it is not, nor should it be, a universal application because different disabilities (and different students) require different treatment and support. There's tons of research on disability accommodations and tons of specialists who work on this. Most universities have an entire office dedicated to supporting students with disabilities, and primary and secondary schools usually have at least one person who takes on that role.
So how do you handle kids who can't write well? The same way we've been handling them all along — have them get an assessment and determine exactly where they need support and what kind of support will be most helpful to that particular kid. AI might or might not be a part of that, but it's a huge mistake to assume that it has to be a part of that. People who assume that AI can just be thrown at disability support betray how little they actually know about disability support.
We have a testing center at Montana State for situations like this. I deliver my tests in the form of a PDF and the testing center administers it in a manner appropriate for the student.
It's a question that's too vague to be usefully answered, especially on a forum like this.
There's no such thing as "disabled people who can't write well"; there are individuals with specific problems and needs.
Maybe there's Jessica, who lost her right hand and is learning to write with the left, who gets extra time. Maybe there's Joe, who has some form of nerve issue and uses a specialized pen that helps cancel out tremors. Maybe Sarah is blind and has an aide who writes it, or is allowed to use a keyboard, or or or...
In the context of the immediate problems of AI in education, it's not a relevant thing to bring up. Finding ways for students with disabilities to succeed in higher education has been something that institutions have been handling for many decades now. The one I attended had well defined policies for faculty and specialist full time staff plus facilities whose sole purpose was to provide appropriate accommodations to such students and that was long, long ago. There will undoubtedly be some kind of role in the future for AI as well but current students with disabilities are not being left high and dry without it.
Because it’s another nonsensical “think of the children” argument for why nothing should ever change. Your comment really deserves nothing more than an eye roll emoji, but HN doesn’t support them.
Reasonable accommodations absolutely should be made for children that need them.
But also just because you’re a bad parent and think the rules don’t apply to you doesn’t mean your crappy kid gets to cheat.