Actually, LLMs are completely deterministic. Their output is a list of possible words, ordered by probability. If you always pick the highest-ranked word at each step (greedy decoding), they will generate the same sequence for the same prompt every time.
In practice, most implementations instead sample from that list, with a parameter called "temperature" controlling how much randomness is involved. The higher the temperature, the more likely it is that a word further down the list gets selected rather than the top one; at temperature zero you are back to deterministic greedy decoding.
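A minimal sketch of the idea (the logits and the four-word vocabulary here are made up purely for illustration, not taken from any real model):

```python
import numpy as np

# Toy next-token scores (logits) for four candidate words.
logits = np.array([4.0, 3.5, 2.0, 0.5])
vocab = ["sat", "slept", "jumped", "flew"]

def sample_next(logits, temperature):
    """Pick the index of the next word. temperature == 0 means greedy decoding."""
    if temperature == 0:
        return int(np.argmax(logits))      # deterministic: always the top-ranked word
    scaled = logits / temperature          # higher temperature flattens the distribution
    probs = np.exp(scaled - scaled.max())  # softmax (shifted for numerical stability)
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

print(vocab[sample_next(logits, temperature=0)])    # always "sat"
print(vocab[sample_next(logits, temperature=1.5)])  # sometimes a lower-ranked word
```

With temperature 0 the output never changes; crank the temperature up and the lower-probability words start getting picked more often.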