The article reminded me of a paper describing a neural network that was able to learn from just a few examples. However, I'm not able to find that paper in my notes.
This isn't deep learning (or a neural network at all). However, it is an extremely interesting approach.
Most of the previous "low data" deep learning approaches I've seen are broadly based on the approach in "Zero-Shot Learning Through Cross-Modal Transfer"[1].
That's not really low data, though, in the sense that it needs lots of data for initial training; it's then able to learn new things from very few examples.
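The "pretrain on lots of data, then learn new classes from a few examples" pattern can be sketched roughly like this. This is a hypothetical illustration, not the method from any specific paper: `embed` stands in for a frozen feature extractor pretrained on abundant data, and the few-shot step simply builds a nearest-centroid classifier from a handful of labelled embeddings.

```python
import numpy as np

def embed(x):
    # Stand-in for a frozen pretrained feature extractor: here just a
    # fixed projection from raw input space into an embedding space.
    W = np.array([[0.6, -0.2, 0.1],
                  [0.3,  0.8, -0.5]])
    return W @ x

def fit_few_shot(examples):
    # examples: {label: [raw input vectors]}, only a few per label.
    # Each class prototype is the mean embedding of its support examples.
    return {label: np.mean([embed(x) for x in xs], axis=0)
            for label, xs in examples.items()}

def predict(prototypes, x):
    # Classify by nearest class prototype in embedding space.
    z = embed(x)
    return min(prototypes, key=lambda label: np.linalg.norm(z - prototypes[label]))

support = {
    "cat": [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0])],
    "dog": [np.array([0.0, 1.0, 0.0]), np.array([0.1, 0.9, 0.1])],
}
prototypes = fit_few_shot(support)
print(predict(prototypes, np.array([0.95, 0.05, 0.0])))  # → cat
```

The expensive part (learning a good `embed`) happens once on a large dataset; adapting to a new class only requires averaging a few embeddings, which is why these methods look "low data" at deployment time.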
Off the top of my head, and after a quick search online, I can't think of what you're referring to, but a search for papers on data augmentation methods might turn it up. It would help to know what context you mean - visual data, text, etc.
Edit: Unless you're referring to transfer learning to domains with limited training data?
Does anyone remember which paper it is?