
> the tools to detect will be exactly as good

[citation needed]



I was curious about this recently, so I built a very rudimentary neural net trained on GPT-generated text messages and human-generated text messages. I got surprisingly good detection accuracy with just under 1k lines from each sample set. I'm not sure it's as apocalyptic as people think.
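
For anyone curious, here's a minimal sketch of what that kind of classifier could look like. The file names, features, and model choice are placeholders for illustration, not my actual setup:

    # Minimal sketch of a binary "GPT vs. human" text classifier.
    # Assumes two plain-text files, one message per line (hypothetical names).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    def load_lines(path):
        with open(path, encoding="utf-8") as f:
            return [line.strip() for line in f if line.strip()]

    gpt_texts = load_lines("gpt_samples.txt")      # ~1k GPT-generated messages
    human_texts = load_lines("human_samples.txt")  # ~1k human-written messages

    texts = gpt_texts + human_texts
    labels = [1] * len(gpt_texts) + [0] * len(human_texts)  # 1 = GPT, 0 = human

    # Character n-grams pick up stylistic tics without any tokenizer fuss.
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), max_features=20000)
    X = vectorizer.fit_transform(texts)

    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0
    )

    # A small feed-forward net; anything from logistic regression up works here.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X_train, y_train)

    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))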


I guess time will really tell.

We may well hit the problem Google Translate hit, where the training set started to contain more and more data generated by GT itself. A similar thing may happen with your NN: at some point there may be so much AI-generated content (from different AIs) that it becomes difficult to compose a trustworthy training set.


I suppose a solution to this would be something like pre-war iron: we would have to rely on archived sources, like Wikipedia's past edits, that come from before ChatGPT existed.
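
A rough sketch of that kind of cutoff filter (the JSON-lines input, field names, and timestamp format are assumptions for illustration, not a real dump schema):

    # Keep only corpus entries whose timestamps predate ChatGPT's public
    # release (Nov 30, 2022). Field names and input format are hypothetical.
    from datetime import datetime, timezone
    import json

    CUTOFF = datetime(2022, 11, 30, tzinfo=timezone.utc)

    def pre_chatgpt_only(records):
        """Yield the text of records created before the cutoff, dropping newer ones."""
        for rec in records:
            # Assumes ISO-8601 timestamps; normalize a trailing "Z" for fromisoformat.
            ts = datetime.fromisoformat(rec["timestamp"].replace("Z", "+00:00"))
            if ts < CUTOFF:
                yield rec["text"]

    # Example usage with a JSON-lines archive dump (hypothetical file):
    # with open("archive.jsonl", encoding="utf-8") as f:
    #     clean = list(pre_chatgpt_only(json.loads(line) for line in f))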



