
The difference is that humans know when they are seeing noise; they don't claim 99% confidence in their interpretation, but rather report very low confidence.


It's probably very naive, but that makes me wonder whether these neural nets are trained to recognize noise or meaningless images as such. If we train a system to tell us what an image represents, the system will do its best to classify it into one of the existing categories. But having low confidence in what an image represents is not the same as having high confidence that it doesn't represent anything. So maybe we should train the networks to give negative answers, like "I'm totally confident that this image is just noise."
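
A minimal sketch of that idea, assuming a PyTorch setup: reserve an extra output class for "just noise" and mix randomly generated noise images into each training batch under that label, so the network can learn to be confidently negative. The model, shapes, and the NOISE_CLASS label here are all illustrative, not anyone's actual implementation:

    # Sketch: classifier with an explicit "noise" class.
    # Random-noise images are mixed into each batch and labeled
    # NOISE_CLASS, so the net can confidently say "this is noise"
    # instead of forcing every input into a real category.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    NUM_REAL_CLASSES = 10
    NOISE_CLASS = NUM_REAL_CLASSES  # extra label reserved for noise

    class SmallClassifier(nn.Module):  # illustrative architecture
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            # one extra output unit for the explicit noise class
            self.fc = nn.Linear(32, NUM_REAL_CLASSES + 1)

        def forward(self, x):
            return self.fc(self.conv(x).flatten(1))

    def training_step(model, optimizer, images, labels):
        # Append a batch of uniform-noise images labeled NOISE_CLASS.
        noise = torch.rand_like(images)
        noise_labels = torch.full((images.size(0),), NOISE_CLASS,
                                  dtype=torch.long)
        x = torch.cat([images, noise])
        y = torch.cat([labels, noise_labels])
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    # usage with dummy stand-in data
    model = SmallClassifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    images = torch.rand(8, 1, 28, 28)
    labels = torch.randint(0, NUM_REAL_CLASSES, (8,))
    print(training_step(model, opt, images, labels))

Uniform noise is only one kind of "meaningless" input, of course; a real version of this would need a broader set of negative examples, or an out-of-distribution detection method, to generalize beyond the noise it was shown.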



