LLMs already produce a confidence score when printing the next token (the probability they assign to it). When that confidence drops, it can indicate that your session has strayed outside the training data.
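As a minimal sketch of what that looks like in practice (my own illustration, assuming the Hugging Face transformers library and the small public "gpt2" checkpoint, not anything from the comment above), you can read the per-token log-probabilities directly from a causal LM:

```python
# Hypothetical sketch: inspect per-token "confidence" of a causal LM.
# Assumes the Hugging Face transformers library and the public "gpt2" model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The capital of France is Paris."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Log-probability the model assigned to each token that actually came next.
log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
next_tokens = inputs["input_ids"][:, 1:]
token_log_probs = log_probs.gather(-1, next_tokens.unsqueeze(-1)).squeeze(-1)

for tok_id, lp in zip(next_tokens[0], token_log_probs[0]):
    print(f"{tokenizer.decode(tok_id)!r:>12}  logprob={lp.item():.2f}")

# Unusually low log-probs (or high perplexity over a span) are one rough
# signal that the text is far from what the model saw during training.
```

This only shows the raw signal; turning it into a reliable out-of-distribution detector is a harder, separate problem.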
Re: contradictory things: as LLMs digest increasingly large corpora, they presumably distill some kind of consensus truth out of the word soup. A few falsehoods aren't going to lead one astray, unless they happen to pertain to a subject that is otherwise poorly represented in the training data.