Hacker News

I believe it is extremely important to disclose that the 'leaked responses' you obtained did not originate from the LLMs themselves, but rather from other insecure systems, i.e. in a more conventional manner.

Just to avoid yet another case of hallucinated outputs getting misinterpreted.



Right, thank you for the suggestion. Just added a paragraph to the original blog post.


Your added paragraph appears to suggest the opposite: that this was an LLM response. Was the "leaked data" a response from an LLM directly?


Yes, apparently, which makes this report pretty flimsy.


Upthread, OpenAI's security team confirms it's a false report; it's a variant of the empty-prompt hallucination.


Incredible that so many people still don't understand what an LLM is. Especially ones that you would expect to grasp it.



