
Same thing happens with Apple Intelligence. You might join a waitlist for dinner reservations and get a text that says you'll be notified when your table is available. And then the summary will say something like "Your table is available"!

I'm the kind of person who posts on HN about AI - I know this stuff isn't perfect and take AI summaries with the appropriate grain of salt. But I have to imagine it's insanely confusing/frustrating for a pretty sizable fraction of people.



Self-inflicted stigma: the people hyping incorrect LLMs as viable daily drivers are no better than the people sending Nigerian Prince scam emails. People don’t want wonky, incorrect products that look “futuristic”. They want wonky but correct products that make them seem like futurists.

Coding does not require 100% accuracy because the output gets reviewed, ideally. Many other jobs, however, rely on getting things 99.99999999% right without the safety net of a review (not ideally).


> incorrect LLM models

What is a “correct” LLM model? They all make stuff up, no exceptions, and they always will; that’s the nature of the current tech.

> Coding does not require 100% accuracy as it gets reviewed, ideally.

We have already reached the point where too many people take every LLM code suggestion as gospel and even reject human review. Ask the curl maintainer, who is inundated with junk PRs and bogus AI-generated security reports.


I do wonder how these misunderstandings will play out when they inevitably use the "on-device AI" to spy on you and rat you out.

Microsoft will ask its Windows 11 AI Hypervisor CoPilot+ to look back through your keystroke history and screenshots to decide whether you have been involved with some sort of vilified movement. The AI will hallucinate half-truths, then image your hard drive and send it over to Microsoft.

Local AI, exciting stuff.



