
LLMs already produce a confidence score for each next token they generate. When that confidence drops, it can indicate that your session has strayed outside the training data.
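As a rough illustration of what "confidence" means here (my own sketch, not from the comment above): the probability the model assigns to its chosen next token can be read straight off the logits. The model name and the 0.5 threshold below are arbitrary assumptions, just to show the idea with Hugging Face transformers.

```python
# Minimal sketch: read the next-token probability as a crude confidence signal.
# Assumes the transformers and torch packages are installed; "gpt2" and the
# 0.5 threshold are arbitrary illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Softmax over the vocabulary at the final position gives the model's
# probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_prob, top_id = next_token_probs.max(dim=-1)

print(tokenizer.decode(top_id), float(top_prob))
if top_prob < 0.5:
    print("low confidence -- possibly outside familiar territory")
```

In practice people often track this per-token probability (or the sequence-level perplexity it implies) across a whole generation rather than thresholding a single token.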

Re: contradictory things: as LLMs digest increasingly large corpora, they presumably distill some kind of consensus truth out of the word soup. A few falsehoods aren't going to lead one astray, unless they happen to pertain to a subject that is otherwise poorly represented in the training data.



I hope they can distill this consensus truth, but I think it is a tricky task; I mean, even human historians still have controversies.



