It's probably very naive, but that makes me wonder whether these neural nets are trained to recognize noise or meaningless images as such. If we train a system to tell us what an image represents, it will do its best to classify the image into one of the existing categories. But having low confidence in what an image represents is not the same as having high confidence that it doesn't represent anything. So maybe we should train the networks to give negative answers, like "I'm totally confident that this image is just noise".
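To make the point concrete, here's a minimal sketch (hypothetical logits and class names, just for illustration): a softmax classifier always spreads its probability mass over the known classes, so noise can only produce "low confidence", never "this is nothing". One way to express the negative answer is to train with an explicit "noise" class:

```python
import numpy as np

def softmax(logits):
    # Softmax always yields a distribution over the known classes,
    # so even pure noise gets assigned to *some* category.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits a 3-class classifier might emit for a noise image:
# roughly uniform, i.e. "low confidence" -- but never "it's nothing".
noise_logits = np.array([0.1, -0.2, 0.05])
print(softmax(noise_logits))  # ~[0.37, 0.28, 0.35]: the mass must go somewhere

# The fix suggested above: add an explicit "noise" class during training,
# so the model can place high confidence on "this is just noise".
labels = ["cat", "dog", "car", "noise"]
logits_with_reject = np.array([0.1, -0.2, 0.05, 4.0])
p = softmax(logits_with_reject)
print(labels[int(np.argmax(p))])  # "noise"
```

Thresholding the max softmax probability is the other common heuristic, but as the comment notes, a near-uniform output only says "I don't know which class", not "I'm confident it's none of them".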