There are fields where well over 30% of published papers are contradicted by the next paper on the same subject. Getting into why this happens with empirical studies of large data sets is complicated, but it boils down to looking at enough things that signal becomes indistinguishable from noise. For a simpler example, assume this was actually done and they published their findings: http://xkcd.com/882/ Now assume that, aside from relying on p < .05, their methods were impeccable. What information have you gained?
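To see why p < .05 buys you so little in that comic's setup, here's a quick simulation of it: test 20 hypotheses (one per jelly bean color) on pure noise, each with a 5% false-positive rate, and count how often at least one "significant" result pops out. (The 20-tests/5% numbers come from the comic; everything else is just a sketch.)

```python
import random

random.seed(0)

def study(num_tests=20, alpha=0.05):
    """One 'study': test num_tests hypotheses on pure noise.

    Each test independently comes up 'significant' with probability alpha,
    since there is no real effect anywhere. Returns True if any test hits.
    """
    return any(random.random() < alpha for _ in range(num_tests))

runs = 100_000
hits = sum(study() for _ in range(runs))
print(f"P(at least one false positive) ~ {hits / runs:.2f}")
# Analytically: 1 - 0.95**20 ~ 0.64, so most such studies find "something".
```

Roughly two thirds of the time, pure noise hands you a publishable green-jelly-bean headline.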
A) The actual probability that green jelly beans are linked to acne is impossible to tell. (http://en.wikipedia.org/wiki/Bayes%27_theorem) You might shift your expectations, but if you do the math, the shift is tiny, because there is so much noise.
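To put a rough number on "the shift is tiny", here is Bayes' theorem with made-up but plausible inputs (the prior and the study's power below are assumptions for illustration, not measured values):

```python
# Assumed numbers, purely illustrative:
prior = 0.001       # P(green jelly beans really cause acne) before the study
power = 0.8         # P(p < .05 | real effect)
false_pos = 0.05    # P(p < .05 | no effect), i.e. the significance threshold

# Bayes' theorem: P(effect | significant result)
posterior = (power * prior) / (power * prior + false_pos * (1 - prior))
print(f"posterior = {posterior:.3f}")  # posterior = 0.016
```

Even after a "significant" result, the claim is still almost certainly noise; the study moved you from 0.1% to about 1.6%.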
Now fill a field with that junk and suddenly reading a paper provides very little information, which slows everything down. You can discuss such things, but it's about as meaningful as talking about who won the World Cup. http://xkcd.com/904/ Worse yet, people rarely publish negative results, which means even reading a well-done study is only meaningful if you can find some other logic to back it up. At that point it might be worth investigating, but the reason it's worth investigating is your prior expectations, which have next to nothing to do with the paper you just read. And even if you find some deep truth, the glory goes to the guy who was publishing noise.
PS: It gets worse. Because contradicting a study is worth publishing, and publishing is a numbers game, you have many people who simply reproduce research to pad their numbers and cut down on the clutter. But if your tolerances are loose enough, say p < .05, and you have enough random crap in the hopper, then out of every 400 completely random papers one can survive two rounds of this, get a lot of attention, and only then be discredited.
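The "1 in 400" is just the threshold squared: a pure-noise result has to clear p < .05 twice, once in the original study and once in the replication.

```python
alpha = 0.05
p_survives = alpha * alpha  # hits p < .05 in the original AND the replication
print(f"1 in {1 / p_survives:.0f} pure-noise papers survives both rounds")
# 1 in 400 pure-noise papers survives both rounds
```

So with enough junk in the pipeline, "independently replicated" findings that are still pure noise are a statistical certainty.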
Edit: This is also why it takes a huge body of background reading and a deep understanding of statistics before you have the context to meaningfully discuss a recent paper with a scientist.