Hacker News | alceufc's comments

There was a challenge on Stack Overflow to encode an image in a tweet (140 characters at the time):

https://stackoverflow.com/questions/891643/twitter-image-enc...

Some of the solutions took a similar approach, using geometric primitives.


Most journals require that the authors sign an agreement transferring the copyright of the paper to the publisher (e.g. [1]).

Although a few publishers allow authors to post a PDF of the paper on their personal home page, they do not allow it to be uploaded to an open-access platform.

[1] https://www.elsevier.com/about/company-information/policies/...


I also think that Anaconda is great. However, I hope that in the future we'll be able to install numpy, matplotlib, jupyter, etc. using just pip.


I did that earlier today:

    pip install jupyter
    pip install numpy
    pip install scipy
    pip install scikit-learn
    pip install matplotlib

The only problem I had was with OpenCV, which requires a manual make-based build if you want the contrib modules. The other issue was that scikit-learn requires scipy to be installed manually with pip first.
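For anyone hitting the same OpenCV issue, the manual build with the contrib modules enabled looks roughly like this (the repository URLs, directory layout, and parallelism level are assumptions on my part, not taken from any particular guide):

    # Build OpenCV from source with the extra contrib modules enabled.
    git clone https://github.com/opencv/opencv.git
    git clone https://github.com/opencv/opencv_contrib.git
    mkdir opencv/build && cd opencv/build
    # Point cmake at the contrib modules so they get compiled in.
    cmake -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules ..
    make -j4 && sudo make install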


I would always be worried with installing numpy/scipy via pip that I wouldn't be linking to BLAS/LAPACK correctly.


What do you mean? I have always installed via pip with zero errors on all three platforms.


The worry is not that it will throw an error, it's that it will silently link against lower-performance math libraries and make your code inexplicably slow.
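One quick way to check (assuming numpy is importable) is to print numpy's build configuration and look for mkl or openblas in the output rather than a reference BLAS:

    # Print which BLAS/LAPACK libraries this numpy build was compiled
    # against; a reference/unoptimized BLAS here explains slow linalg.
    python3 -c "import numpy; numpy.show_config()"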


The reason you can't is that the C libraries they rely on must be installed before installing numpy and scipy.

For example, you can't install PyYAML on Ubuntu unless python-dev is installed. I am not sure whether a wheel would fix it, but I don't think so.


Wheel is a binary distribution format, so the files would come pre-compiled and python-dev would no longer be needed.


Is wheel not the default? Or, to put it simply :-), how would I avoid this issue in the future?


I have not published anything worth sending to PyPI, so I don't know exactly, but it looks like it is up to the developers[1].

Anyway, I just noticed that PyPI only supports binary packages for Windows and Mac OS X. You can, however, still generate wheels of the packages you use with something like this:

    pip wheel -r requirements.txt
You can then install them with pip install <file>, or use the --find-links (-f) option to point pip at a directory containing wheels and then pip install the main package. It should resolve all dependencies from that directory as well.
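For reference, the full round trip might look like this (the wheelhouse directory name and the package name are placeholders):

    # Build wheels for every requirement into ./wheelhouse ...
    pip wheel -r requirements.txt -w wheelhouse
    # ... then install offline, resolving dependencies only from that directory.
    pip install --no-index --find-links=wheelhouse <package>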

[1] http://pythonwheels.com/


The issue with wheels on Linux (and this is very much a Linux problem) is that the pre-built nature of the libraries in wheels doesn't play nicely with the raw chaos of Linux's package-management + distro + kernel ecosystem. I raised the question of FreeBSD- and Solaris-based wheels at a PyCon in a face-to-face discussion with someone more knowledgeable, and the answer was: "in theory that should work like Windows & OS X; no one has done the hard work yet."

So yeah Linux is not the most friendly environment for Python Wheels.


conda supports pip installation within the context of an environment (much more gracefully, in fact, than does virtualenv). So you're not giving anything up by using conda in this regard.
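As a minimal sketch of that workflow (the environment name is an arbitrary choice):

    # Create a conda environment and pip-install into it; the pip
    # packages land in the environment's site-packages, not the
    # system Python's.
    conda create -y -n myenv python=3
    source activate myenv
    pip install requests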

I don't see the desire to have pip as the baseline. For me, the conda packaging is much more informative and placing everything you need for multiplatform support into an /info directory with a meta.yaml is a lot more effective than going through the steps of PyPI. conda also makes uploading and hosting on anaconda.org extremely easy.

Normally there is the whole "gee, I don't want to learn another package manager" -- but conda / anaconda.org is extremely worth it. It really is a major engineering step forward from the existing package deployment strategies in Python.

I even configure my travis.yml CI scripts to download Miniconda, create a conda environment from a requirements.txt, and then build and test my code via conda on the continuous integration VM itself.
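The install steps of such a CI script might look roughly like this (the Miniconda URL, paths, and environment name are assumptions, not taken from any actual repo):

    # Fetch and install Miniconda non-interactively, then build the
    # environment from requirements.txt before running the test suite.
    wget https://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh -O miniconda.sh
    bash miniconda.sh -b -p "$HOME/miniconda"
    export PATH="$HOME/miniconda/bin:$PATH"
    conda create -y -n testenv --file requirements.txt
    source activate testenv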

The only worry is how strongly tied conda and anaconda.org are to the future of Continuum. Given how much Continuum speaks of open-source work, one would hope that these projects essentially live independently (or that forks of them would) but you never know. I do admit that is a major downside.


I like Anaconda (and I recommend it as the easiest installation for data sci), but on OS X it is easy to install Python and relevant numerical packages with Homebrew and pip.


Currently, pip install of numpy (and scipy) does not pick up MKL, which you would want if you do lots of linear algebra and FFTs.


Flickr was -- and maybe still is -- very useful for the computer vision research community.


This is old news but we put together a 100M item dataset from Flickr as well, all Creative Commons licensed, including lots of metadata from users as well as some pre-computed features. It's called the YFCC100M.

http://yahoolabs.tumblr.com/post/89783581601/one-hundred-mil...


Is a Creative Commons license for research only, or can it be used for commercial purposes if you attribute the original author?


Flickr is great for finding Creative Commons-licensed images as well.


How do they use it? Does it have good programmatic access? Because UX-wise, I'm surprised they still exist. I don't know of a single photo-related web site that has a worse UI and is more annoying to use than Flickr.


This is extremely biased. A lot of people, myself included, find it easy and pleasant to use. Honestly, I'm not aware of a single alternative with a better UI/UX -- 500px maybe?

As for programmatic access -- Flickr has a good API interface.


I'm not aware of a single alternative with a better UI/UX

I don't use it anymore, but SmugMug.


SmugMug is great! It's a little bit different from Flickr, though -- SmugMug is more of a personal photo-hosting/portfolio website, while Flickr is more about the photographer community. I believe the social aspects of Flickr are much more important than its actual photo-storage capabilities!


It's funny, since in the beginning Flickr was the best of them. Google's PicasaWeb was comparatively bad.

Yahoo really let themselves get leapfrogged on that one.


My trouble with the UX is that it is slow, slow, slow. I live in Frontier country and I have DSL, but there are many photo web sites that are faster than flickr.

I hardly upload anything to flickr anymore because the interface for that is so slow.


Amazing...did not know that. Thanks.


"Mining of Massive Datasets" by Leskovec, Rajaraman and Ullman is very good.

Although the post gives a link to the Amazon page of the book, PDFs of the chapters are free to download at the official book web site[1].

[1] http://www.mmds.org/


My first impression was the same as Mithaldu's, but then I tried one of the other games and it was more responsive.

I think that the NES emulator is bad advertising for your (very nice) product.


A challenge would be that coffee trees have very deep roots [1].

[1] http://www.fao.org/docrep/006/ad219e/AD219E05.htm


So we build deeper farm levels.

Even more conventional greenhouses, however, should help somewhat. I reckon the point of the parent commenter is less about the vertical nature of such farms and more about the climate controllability of such farms.


I have the impression that Pepsi is much more popular in the US than in Brazil.


"However, yet another recent theory proposes that grasses competing for water and nutrients - limited resources in the Namib desert - create the circles, explaining why they never overlap."

Maybe this model could be used to analyze this hypothesis.


Nice, but I think it is not real, just marketing for the new Planet of the Apes movies.

