Hacker News | new | past | comments | ask | show | jobs | submit | foobaruser's comments

The article reminded me of a paper that described a neural network that was able to learn from just a few examples. However, I'm not able to find that paper in my notes.

Does anyone remember which paper it is?



The paper is here: http://cims.nyu.edu/~brenden/LakeEtAl2015Science.pdf

This isn't deep learning (or a neural network at all). However, it is an extremely interesting approach.

Most of the previous "low data" deep learning approaches I've seen are broadly based on the approach in "Zero-Shot Learning Through Cross-Modal Transfer"[1].

That's not really low-data in the strict sense: it needs lots of data for the initial training, but can then learn new things from very few examples.

[1] http://papers.nips.cc/paper/5027-zero-shot-learning-through-...
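For anyone curious what the cross-modal idea looks like mechanically, here's a toy sketch (all names and numbers here are made up - the paper uses CNN image features and word2vec embeddings, not synthetic data): fit a linear map from image features into a word-embedding space using only the seen classes, then classify an unseen class by nearest class embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class embeddings (in the paper these come from word2vec).
word_vecs = {
    "cat":   np.array([1.0, 0.0]),
    "dog":   np.array([0.0, 1.0]),
    "truck": np.array([0.7, 0.7]),   # class with NO training images
}

# Synthetic stand-in for CNN image features: a fixed linear distortion
# of the class embedding plus noise.
A_true = rng.normal(size=(4, 2))

def make_features(cls, n):
    return word_vecs[cls] @ A_true.T + 0.05 * rng.normal(size=(n, 4))

# Fit the feature -> embedding map on the SEEN classes only.
X = np.vstack([make_features("cat", 50), make_features("dog", 50)])
Y = np.vstack([np.tile(word_vecs["cat"], (50, 1)),
               np.tile(word_vecs["dog"], (50, 1))])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Zero-shot classification: project into embedding space and pick the
# nearest class embedding, including classes never trained on.
def classify(feat):
    proj = feat @ W
    return min(word_vecs, key=lambda c: np.linalg.norm(proj - word_vecs[c]))

print(classify(make_features("truck", 1)[0]))
```

The trick is that "truck" lives in the span of the seen classes' embeddings, so the map learned on cats and dogs still lands truck images near the truck embedding.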


You might also be interested in things like the Anglican probabilistic programming language from Oxford. http://www.robots.ox.ac.uk/~fwood/anglican/


We do not presume to come to this Thine output trusting in our own correctness, but in Thy manifold and great Processors. Print, we beseech Thee, the content of Thy variable X, according to Thy promises made unto mankind through Thy servant Alan, in whose name we ask. Amen.


Yes indeed. Church[1] too.

The thing is - I've never seen anything from the probabilistic programming community quite as compelling as the Science paper.

[1] https://probmods.org/
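To make the framing concrete, here's a minimal sketch in plain Python of what a Church/Anglican-style program does - sample latents, condition on an observation, query. Inference here is naive rejection sampling (real systems use much smarter inference), and the dice model is my own toy example, not from either system's docs.

```python
import random

random.seed(1)

def model():
    # Prior: two fair six-sided dice.
    a = random.randint(1, 6)
    b = random.randint(1, 6)
    # Condition (observe): the sum is at least 10.
    if a + b < 10:
        return None          # reject samples inconsistent with the data
    # Query: the value of the first die.
    return a

# Naive rejection sampling; real systems use MCMC, SMC, etc.
samples = [s for s in (model() for _ in range(100_000)) if s is not None]
posterior_mean = sum(samples) / len(samples)
print(posterior_mean)        # exact answer is 32/6, about 5.33
```

The appeal is that the model is just a generative program; conditioning and querying are separate from how inference actually happens.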


Off the top of my head, and after a quick search online, I can't find what you're referring to, but a search for papers on data-augmentation methods might turn it up. It would help to know what context you mean - visual data, text, etc.

Edit: Unless you're referring to transfer learning to domains with limited training data?

