
Is anyone else having problems with factual correctness? I had several 4o and o3 conversations going, and those models were factually correct across a range of subjects.

Asking GPT-5 about the same things results in wrong answers even though its training data is newer. And it won't look things up to correct itself unless I manually switch to the thinking variant.
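For what it's worth, one workaround is to call the API directly and pin the variant yourself instead of relying on the web UI's automatic routing. A minimal sketch below, using the OpenAI Python SDK; the model identifiers are assumptions, so substitute whatever names your account actually exposes:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(question: str, model: str) -> str:
        # Pin the model explicitly rather than letting the UI pick a variant.
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

    # "gpt-5" is an assumed identifier for illustration only.
    print(ask("When was the James Webb telescope launched?", model="gpt-5"))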

This is worse. I cancelled my subscription.



Synthetic data. Get used to it, it’s in vogue.


Example?



