Hacker News | haon99's comments

StudyShuffle - http://www.StudyShuffle.com/

Contact: noah[at]noahlitvin[dot]com


I'm not against affirmative action on principle, but does anyone else find it a little strange that the need-based grants are only available to female programmers? They could bias the admission process in favor of female applicants (or other underrepresented demographics) without conflating financial issues, right? Or are there simply not enough female applicants without this incentive to make that possible?


We take this approach specifically because we don't do any affirmative action or bias our admissions process in favor of specific groups. These grants are our hack for how to have a more gender-balanced environment without negatively impacting men or lowering the bar for women. The grants work because they increase the pool of qualified women who are able to do Hacker School.

Also, to clarify, the grants are funded by other companies, not Hacker School (the most recent sponsors are Dropbox, Etsy, Jane Street, Tapad, and Tumblr: https://www.hackerschool.com/blog/26-dropbox-etsy-jane-stree...).


> "We take this approach specifically because we don't do any affirmative action or bias our admissions process in favor of specific groups."

You're giving grants to women specifically because they are women. That's a bias. You're favoring a specific group. What about the men who haven't applied because they can't afford it and aren't women? You should be giving grants to underprivileged people regardless of sex. But that wouldn't get you or the companies sponsoring the grants a nice press release, now would it?


To clarify:

1) We judge male and female applicants on exactly the same scale. We in no way lower the bar for women or any other group. Nor do we give women or any other group a leg up when making admissions decisions.

2) We offer grants to women who need financial assistance. By offering these grants, we do bias the pool of applicants, because we (hopefully) increase the number of women who choose to apply.

3) We have put a tremendous amount of time, energy, thought, and effort over the past few years into making Hacker School free for everyone, and we continue to work very hard to make this possible. We effectively give all our students a $10-15k scholarship by not charging any tuition.

Our great crime appears to be the fact that we have not yet found a way to additionally give money to everyone who can't afford to come to Hacker School.


I really like this approach to the gender imbalance problem: grow the pool, don't lower the bar. I think this is the way to go, and I hope that more tech organizations will think of it this way.


Thank you for the details. I appreciate the approach you are taking.


I don't have a problem with it - in the past, at least some (if not all) of the grants came directly from Etsy (http://www.etsy.com/hacker-grants). They want to attract more women into their business because they see the benefits of women being well represented in their workplace.


> They could bias the admission process in favor of female applicants (or other underrepresented demographics) without conflating financial issues, right?

They could, but would that honestly be better? Would you want them to lower standards for women or reject highly qualified (male) applicants just to make room for more female applicants in the accepted pool?

It's not an ideal solution, but considering the school is tuition-free for everybody, regardless of gender, I don't think this approach is all that bad.


I started taking semi-regular walks with friends around town without our phones. The takeaway has definitely been akin to the 'pros' list here, while dodging all the 'cons'.

Shameless plug - We actually set up a website to help facilitate these walks, hoping to get others involved: http://www.OffTheGridOnTheGrid.com/


Not by me. All comments/criticism appreciated. Original title read "Show HN: LaughMatch (MVP built in ~3 days)"


Not to be a jerk, but "[Copernicus] was forced into it, because it was the only way to make the numbers come out right" just isn't correct.

Tycho Brahe's measurements showed that Ptolemy's model was more accurate than Copernicus's. The point still stands that Copernicus transcended his time, but he was really riding on the coattails of Ptolemy's genius. Ptolemy never gets enough credit...


Really, Copernicus' accomplishment was that his model worked very well yet left out a huge heap of epicycles. Heliocentric = fewer epicycles. That was Copernicus' contribution.

This pales in comparison to Kepler's "Equal areas in equal time." That observation is a very strong hint towards Newtonian mechanics and Calculus.


Actually, strictly speaking, Copernicus's system has more epicycles than Ptolemy's (or at least no fewer, depending on how you work out the details of it). Copernicus's big win is getting rid of the equant, easily the most mathematically ugly thing in the Ptolemaic system.

The equant has a planet moving in a circle around one point, but moving uniformly from the perspective of a different point--essentially a way to avoid the restriction to uniform circular motion. It's a cheap hack, and it's the flaw that drove Copernicus away from Ptolemy in the first place. Or anyway, it looks like a cheap hack, until Kepler comes along and realizes that it's hinting at, as you said, "equal areas in equal time".

So I guess what I mean to say is: yeah, Kepler's pretty awesome.

(Sorry, don't mind me. I just spent way too much time reading Ptolemy in college, and I have no reason to believe there will ever be another opportunity to apply that bit of my education.)


This makes me think of over-fitting in machine learning algorithms. The epicycles fit the data very well, but added a lot of parameters to the model. It makes me wonder if there is a bias/variance trade-off for scientific models, but I don't know how to express the connection formally.

In machine learning algorithms, we hold out a dataset to test for over-fitting. We don't exactly have a spare universe to test our scientific models for over-fitting, but maybe if there had been a second planetary system at the time to test against, it would have been clear sooner that the "epicycles" model fit only this solar system from the vantage point of earth? Maybe you could "train" the model on some heavenly bodies, then test on others?
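For what it's worth, the analogy can be sketched in a few lines of Python. Everything here is invented for illustration (the sine "law," the noise level, the even/odd split), with polynomial degree standing in for "number of epicycles":

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observations" of one system: a smooth underlying law plus noise.
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

# Hold out every other point, as we would to detect over-fitting.
x_train, x_test = x[::2], x[1::2]
y_train, y_test = y[::2], y[1::2]

def holdout_error(degree):
    # Fit a polynomial of the given degree (more degree = "more
    # epicycles") on the training half, score on the held-out half.
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_test)
    return np.mean((pred - y_test) ** 2)

for d in (1, 3, 12):
    print(d, holdout_error(d))
```

A degree-12 fit tracks the training points (noise included) far more closely than the cubic does, but its held-out error is worse - which is exactly the behavior you can't see when all you have is one solar system's worth of data.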

I'm pretty sure I'm making a fool of myself at this point and missing something obvious, and I'm hoping one or more of you will point that obvious thing out to me.


Kepler's ellipses fit the data with an even simpler model. Newton added an underlying model which could be generalized to hypothetical bodies. In other words, you could plan something like an Apollo mission with it. I doubt you could do something like that with the Copernican, Ptolemaic, or even the Keplerian model. Newton's model gave you enough insight to hack. And not just surface hacks, but deep hacks. Everything before was merely descriptive.

Newton's model also showed convergence. The way the planets moved became connected with the way cannonballs behaved. Mechanics could also subsume models of buildings and machines. Engineering and architecture were unified by Newtonian Mechanics.
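To make the "deep hack" point concrete, here's a toy sketch (units chosen so GM = 1, initial conditions made up - this is not any historical calculation): numerically integrating Newton's inverse-square law spits out Kepler's equal-areas law as a byproduct, which is exactly the sense in which the underlying model subsumes the descriptive one.

```python
import numpy as np

# Toy two-body problem, central inverse-square force, GM = 1.
def accel(r):
    return -r / np.linalg.norm(r) ** 3

dt, steps = 1e-3, 20000
r = np.array([1.0, 0.0])
v = np.array([0.0, 0.8])   # below circular speed, so the orbit is an ellipse

areas = []
a = accel(r)
for _ in range(steps):       # velocity-Verlet integration
    dr = v * dt + 0.5 * a * dt ** 2
    # Area swept this step = half the cross product |r x dr|.
    areas.append(0.5 * abs(r[0] * dr[1] - r[1] * dr[0]))
    r = r + dr
    a_new = accel(r)
    v = v + 0.5 * (a + a_new) * dt
    a = a_new

# Kepler's second law falls out: area swept per step is constant
# along the whole orbit, fast near perihelion and slow far out.
print(max(areas) / min(areas))  # ~1.0
```

The same few lines, pointed at different initial conditions, propagate a cannonball or a spacecraft - something no amount of curve-fitting epicycles or ellipses lets you do.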

It's not just a matter of fitting. It's a matter of transcending current models. (Another reason to study different programming languages/paradigms.)


I think you've hit on a weakness of the whole Machine Learning paradigm. I might get this wrong, but I believe every machine learning algorithm necessarily introduces some bias into selecting which possibilities to consider, and without some kind of bias, learning is impossible.

But once you've chosen how you will bias your model, you are only going to search for solutions in the space defined by that bias. So, figuring out the parameters of the ellipses describing the movement of heavenly bodies, but not questioning whether ellipses are a good choice to begin with. There is also feature selection, how you decide which aspects of reality (or measurements of reality, actually) are relevant to the learning problem. (There are feature selection techniques, but that presumes you already have a finite set of candidate features and then determine which ones have the most value.)
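A toy sketch of that point (the orbit parameters and both hypothesis classes are invented for illustration): if your bias restricts the search to circles centered on the focus, no amount of data gets you to the ellipse, while a hypothesis class that happens to contain conics recovers it exactly.

```python
import numpy as np

# Noiseless points on a Kepler ellipse with the focus at the origin,
# polar form r = p / (1 + e*cos(theta)), here p = 1 and e = 0.3.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r_true = 1.0 / (1 + 0.3 * np.cos(theta))

# Hypothesis class A: circles centered on the focus (one parameter).
# The best least-squares circle is just the mean radius.
circle_resid = np.mean((r_true - r_true.mean()) ** 2)

# Hypothesis class B: conics r = p / (1 + e*cos(theta)).
# Linearize as 1/r = 1/p + (e/p)*cos(theta), a least-squares problem.
A = np.column_stack([np.ones_like(theta), np.cos(theta)])
coef, *_ = np.linalg.lstsq(A, 1 / r_true, rcond=None)
p_hat, e_hat = 1 / coef[0], coef[1] / coef[0]
ellipse_resid = np.mean((r_true - p_hat / (1 + e_hat * np.cos(theta))) ** 2)

print(circle_resid, ellipse_resid)
```

Class A's residual never goes to zero no matter how many points you feed it; class B's drops to machine precision. The interesting work - realizing conics belong in the hypothesis class at all - happened outside the fitting procedure, which is the Kepler-shaped hole in the paradigm.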

It seems that, perhaps, this kind of paradigm busting discovery is out of reach of current machine learning methods, and that the kinds of decisions about what to model and how to bias your model is where humans add value to the process.

This is all philosophical bullshit at this point, but I remain curious about the relationship between learning algorithms and scientific discovery. If anyone is still reading, are there any good books on this topic?

