Unfortunately, AG Bethge has not released (and most likely will not release) the original code, so this is very nice to have! The results from karpathy's implementation definitely feel a little "off", but it is very good nonetheless. Also see: Kai Sheng Tai's implementation - https://github.com/kaishengtai/neuralart
It seems like the original implementation still has a few undocumented tricks up its sleeve for improving accuracy that have yet to be figured out.
Whenever I see code for neural networks in caffe/torch/theano, it bothers me a lot that it's not easy to get them up and running on Windows. I can't believe MS is missing this boat. This field is exploding right now, and the only company that seems to be aligned with it is NVIDIA. MS has sponsored development of nodejs for Windows before. I'm hoping they will do something similar for these frameworks soon.
It's not at the level of being as fast as cuDNN nor integrated in major deep learning libraries but at least we can expect some competition in the coming years.
That doesn't sound like a worthwhile approach to me.
Xeons are server CPUs. So whoever bothers buying Xeons with scientific computing in mind may as well go all the way and buy nVidia GPUs instead.
So instead of making that framework available on all Intel Haswell and newer families and trying to spare customers from having to buy nVidia GPUs, they sell themselves short.
The Xeon Phi is not a Xeon. It's a co-processor. https://www-ssl.intel.com/content/www/us/en/processors/xeon/... The main advantage is that you can run almost-normal x86 code on it. Each core gets its own cache, so it's not really the same as programming for GPUs.
Theano should work on Windows. And the community is very friendly. So, in case of install trouble, just ask them. Or better: help to make it simpler. (Assuming it is not a fundamental Windows issue, like a missing package manager or something similar.)
Of course, many Theano-based scripts you will find somewhere were probably only tested in a very specialized environment, and they might expect a Unix-like setup. But this is not something you can really solve, other than by contributing to the script and fixing it.
About the related Nvidia CUDA discussion: OpenCL support by Theano is in the works. Not sure how far it is.
I don't know - the second Picasso/Brad Pitt would make for a good Instagram filter.
I think if there was some level of semantic tagging and weighting of different aspects of an artist's technique it might do better - identify sky, water, buildings, faces, plants, etc (none of which is particularly beyond the capability of current image classifiers), then it might produce better results. I could easily imagine this turning into a 'Rembrandtize Me' selfie-filtering app.
Then it's really only a matter of time before the extension of these techniques moves to 'make my vacation photos look like they were taken by Ansel Adams', then 'show me Star Wars as if Alfred Hitchcock had directed it', or 'play me Smells Like Teen Spirit as if it had been sung by Elvis'. Neural Remixing.
Author here. The examples with the Golden Gate Bridge were generated using an earlier version of the code; later additions (L-BFGS instead of SGD, TV regularization) tend to clean up the cross-hatch pattern quite a bit.
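For anyone curious what TV (total variation) regularization looks like in practice: it just penalizes differences between neighboring pixels, which discourages the high-frequency cross-hatch noise. Here's a minimal NumPy sketch (not the author's code; the function name and `weight` value are illustrative):

```python
import numpy as np

def tv_loss(img, weight=1e-3):
    """Total variation penalty for an H x W x C image array.

    Sums squared differences between vertically and horizontally
    adjacent pixels. Added to the style/content objective, this
    term pushes the optimizer toward locally smooth images.
    """
    dh = img[1:, :, :] - img[:-1, :, :]   # vertical neighbor differences
    dw = img[:, 1:, :] - img[:, :-1, :]   # horizontal neighbor differences
    return weight * (np.sum(dh ** 2) + np.sum(dw ** 2))
```

A flat image scores zero; the noisier the image, the larger the penalty, so its gradient acts like a smoothing force during optimization.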
https://news.ycombinator.com/item?id=10141516
https://www.youtube.com/watch?v=-R9bJGNHltQ&list=PLujxSBD-JX...
It's nice to see an(other) implementation of this paper. I looked through the references of the paper and didn't find any source links.