
Why are they non-deterministic? Is randomness intentionally injected, or is it intrinsic to the approach?


Actually, LLMs are essentially deterministic. Their raw output is a probability distribution over possible next words (tokens). If you always choose the highest-probability token at each step, they will generate the same sequence for the same prompt every time. In practice, though, most implementations sample from that distribution instead, and a parameter called "temperature" controls how much randomness this introduces: the higher the temperature, the more likely it is that a token further down the list is selected rather than the top one.
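A minimal sketch of what temperature does to the sampling step. All names here are illustrative, not any particular library's API; `logits` stands in for the model's raw per-token scores:

    import numpy as np

    def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
        if temperature == 0.0:
            # Greedy decoding: always pick the highest-scoring token,
            # which makes generation fully deterministic.
            return int(np.argmax(logits))
        # Dividing the logits by the temperature flattens (T > 1) or
        # sharpens (T < 1) the distribution before sampling from it.
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return int(np.random.choice(len(probs), p=probs))

    # Example with a toy three-token vocabulary:
    logits = np.array([2.0, 1.0, 0.1])
    print(sample_next_token(logits, temperature=0.0))  # always token 0
    print(sample_next_token(logits, temperature=1.5))  # sometimes token 1 or 2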


Check out the temperature docs in the API reference manual: https://platform.openai.com/docs/api-reference/completions/c...
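For example, with the OpenAI Python SDK you can set temperature to 0 for (near-)greedy, most-deterministic output. A sketch, assuming the current `openai` package; the model name is an assumption, substitute any completions-capable model you have access to:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # assumed example model
        prompt="Once upon a time",
        max_tokens=20,
        temperature=0,  # 0 = pick the top token; most deterministic output
    )
    print(response.choices[0].text)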



