
Wow, so what value is there in LLM slop extracted from already dubious self-medication advice?




They're saying that it successfully filtered out the bit where the author told people to overdose by 40000x. I guess that's the value.

There would be value if it pointed out the mistake instead of hallucinating a correction.

IU was used correctly everywhere else in the article except that one place with the mistake, so the LLM didn't hallucinate a correction; it correctly summarized what the bulk of the article actually said.

GPT5.2 does catch it and warns not to trust anything else in the post, saying no competent person would confuse these units.

I wonder whether even the simplest LLM would make this particular mistake.



