Does this really contradict that 10000-hour rule? (Well, apart from the fact that 10000 hours may well be an arbitrary number, and not as magical as Gladwell made it out to be.)
From what I remember from reading Outliers, the claim about the 10000 hours was formulated more precisely than is posed here: it only said that above a certain level of innate talent (IIRC the particular level wasn't made a big deal of), differences in talent were less significant than differences in practice hours.
That's a much weaker claim, but a more credible one: if you're already blessed with, e.g., a high IQ, then it pays off to work hard on your homework, and you can overtake the super-brilliant slacker in your class. But if you didn't have a very high IQ to begin with, the rule doesn't promise you anything.
The data in the article just says there's a lot of other stuff that determines the variance in performance - probably that includes "talent". It doesn't try to find if such a "talent threshold" above which talent is less important exists.
> Does this really contradict that 10000-hour rule?
No, it contradicts the premise that there could be such a number with the meaning attached to it. It's not about the value the number has, the issue is the number itself -- the idea that the thing under discussion could be assigned a specific value.
But the deeper problem is that it assigns a quantity to something no one understands well enough to quantify. Scientists avoid attaching a number to something until there's an explanation, a theory, that makes the number both compelling and testable (i.e. falsifiable).
>Scientists avoid attaching a number to something until there's an explanation, a theory, that makes the number both compelling and testable (i.e. falsifiable). //
Hubble seemingly ignored the outliers and drew a straight line through his data to create his eponymous "constant". Making a mark to work from isn't really unscientific; it's just loose hypothesising - http://www.pnas.org/content/15/3/168.full.pdf+html.
Surely the fact that this meta-analysis has falsified the former claim [hypothesis] shows that it was scientific [at least under a Popperian formulation].
> Hubble seemingly ignored the outliers and drew a straight line through his data to create his eponymous "constant".
That's a perfectly reasonable use of statistics in reducing observations. The same method was used extensively in the recent hunt for the Higgs Boson, until the uncertainties in the process fell below 5 sigma, at which point a discovery was announced.
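"Drawing a straight line through the data" is just ordinary least squares. A minimal sketch, with made-up numbers standing in for Hubble's distance/velocity measurements (the data here are purely illustrative, not his actual observations):

```python
# Ordinary least squares by hand: fit y = m*x + b to noisy points,
# analogous to drawing a straight line through scattered observations.

def fit_line(xs, ys):
    """Return slope m and intercept b minimising sum((y - (m*x + b))**2)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x (both unnormalised;
    # the common factor 1/n cancels in the ratio).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    m = cov / var
    b = mean_y - m * mean_x
    return m, b

# Illustrative "distance vs. recession velocity" points with scatter:
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
m, b = fit_line(xs, ys)  # slope close to 2, intercept close to 0
```

The fitted slope is the "mark to work from": a provisional constant extracted from noisy data, to be refined as better observations come in.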
> Making a mark to work from isn't really unscientific it's just loose hypothesising ...
If it's not either derived from empirical observation or a reasonable extrapolation from established theory, it's ipso facto unscientific.
> Surely the fact that this meta-analysis has falsified the former claim [hypothesis] shows that it was scientific ...
No, it only shows that one unscientific claim (based on no theory and no evidence) can appear to unseat another. And it's not a "falsification", because the original claim is unfalsifiable on the ground that it's not based on a testable theory.
Group A says, "It takes 10,000 hours ..." without any basis. Group B says, "Utter nonsense." This happens in astrology all the time. Does that mean astrology is science? The difference between astrology and science is that scientists won't make a prediction without an empirical basis and a theory about why it's so.
Based on your formulation, it seems all new "scientific" [null] hypotheses are "ipso facto unscientific"?
For example, c being constant is an axiomatic part of relativity. Ergo, to you it seems, as this is not empirical nor an extrapolation but a new hypothesis that, when postulated, contradicted established science, this suggestion (and presumably the ensuing formulation) was, um, unscientific?
Now I'm happy to go with that, call it a philosophical treatise and recognise the axiomatic nature of relativity, but at this point I think your definition of science is too tight; theoretical physics, to me, is a part of science. Indeed, I'd say wild hypothesising can be (and is called) science, depending really on what you do with those hypotheses.
I thought that the original study was just showing a correlation. That is, that experts typically had 10,000 hours of practice into their field of expertise. People have claimed causation since then (i.e., 10,000 hours guarantees expertise). The weaker claim that 10,000 hours of practice is necessary (but not sufficient) in order to become an expert seems reasonable to me.
Thanks, yes, that's the other important point: you can't turn it around and expect a guarantee of success. I believe the original study didn't even claim any magical threshold of hours either; Gladwell probably just introduced that as a literary device. "10000 hours", with the added suggestion of some mystical threshold, makes for much better storytelling than "oh you know, on the order of tens of thousands of hours" :)
And then if you add the other caveat too, it becomes too underwhelming to make for inspiring bedtime reading: "on the order of tens of thousands of hours, provided that you were quite great to begin with"!
Yes, too underwhelming to catch on as much as Gladwell's interpretation has. However, it actually makes an important point: even if you're talented, it requires practice to really become an expert.