Is anyone else having problems with factual accuracy? I had several 4o and o3 conversations going, and those models were factually correct across a range of subjects.
Asking GPT-5 about the same things results in wrong answers even though its training data is newer. And it won't look things up to correct itself unless I manually switch to the thinking variant.
This is worse. I cancelled my subscription.