Your quote is thought-provoking, but I do not quite follow the rest of your argument:
> A lot of research is supposed to be new, uncertain, high-risk, high-reward. So it should fail often.
Setting aside the question of whether or not science ought to be the way you describe it, what do you mean by research "failing"?
IMO, there are two levels on which research can fail. The first (and I get the impression that you are describing this type) is the failure of some new theory to be corroborated by the facts, or the failure of a new approach to fulfill its promises, or a later experiment contradicting an earlier one. But this is part and parcel of the scientific method. In fact, this leads to an advancement of science, as we then know which explanation is not correct, or which observations need more detailed scrutiny. And so, such a failure of research is actually a success of science.
However, this scientific process is based on scientists exchanging their findings freely so that they can cross-check each other. Thus, the second and much greater failing would be for research to fail to communicate itself. And that is exactly what appears to be happening. If scientists aren't reading what their colleagues publish, the research itself might be fantastic - but if nobody reads it, who is going to know? That is not a failure of research; it is a failure of science. And that is a much more serious issue altogether.
>whether or not science ought to be the way you describe it,
I'm assuming a charitable readership - but it's frequently stated, and I agree, that industry is for high-value things we know will work, while science/research/academia is geared toward less certain, more speculative projects that are difficult to commercialise yet.
> If scientists aren't reading what their colleagues publish, that is not a failure of research
Well, maybe?
Or maybe the paper's authors hoped the results would be amazing, but they were only mediocre [research issue]; they decided they might as well publish after getting the results (which is good), and some people skimmed the abstract; but it wasn't as interesting to others as the authors hoped (diverse perspectives; good); or the whole field went in a different direction; or a specific competitor found a better technique/result in the interim [research issue] and got all the love.
That's all just how research goes - it doesn't necessarily mean anyone is failing to communicate.
> This is not a failure of research, this is a failure of science.
Maybe; maybe not.
I agree there are lots of problems in science, but based on my limited experience, I'd expect, and think it's OK, for lots of stuff to go unread; lots of dead ends.
Lots of startups should fail too, and that's not a bad thing; it's like there's simply a high Bayes error.
IMO, real failure is things like people making up data, or doing research they know is bunk (maybe they hacked their p-values or left out data, or something), or continuing research they've discovered is useless but that their adviser is politically wedded to, etc.
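To make "hacked their p-values" concrete, here's a minimal Python sketch (purely illustrative; the setup and numbers are my own, not from anyone's actual study): it runs repeated tests on pure noise and reports only the best p-value, which is how unreported multiple looks manufacture a "significant" result.

    import random

    random.seed(0)

    def fake_experiment(n=30):
        # Two groups drawn from the SAME distribution: any "effect" is noise.
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        return a, b

    def p_value(a, b, permutations=2000):
        # Two-sided permutation test for a difference in means.
        observed = abs(sum(a) / len(a) - sum(b) / len(b))
        pooled = a + b
        hits = 0
        for _ in range(permutations):
            random.shuffle(pooled)
            pa, pb = pooled[:len(a)], pooled[len(a):]
            if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
                hits += 1
        return hits / permutations

    # The "hack": run 20 experiments on noise, report only the best one.
    # P(at least one p < 0.05 in 20 tries) is roughly 1 - 0.95**20, i.e. ~64%.
    best = min(p_value(*fake_experiment()) for _ in range(20))
    print(f"best p-value over 20 tries on pure noise: {best:.3f}")

Publishing only that best run looks like a discovery; publishing all 20 runs (the negative results) makes it obviously noise.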
Yes, we should totally incentivize publication of negative results / failed experiments. If anything, we need more publication of negative results, even though most of them will likely never be cited. It's a big problem that many failed experiments go unnoticed because researchers are disincentivized from publishing negative results.
This raises an interesting question: how many failed experiments are being "replicated" just because past failed attempts were not published? Said differently: how many researchers are wasting time on something that is bound to fail, because everyone who happened to try the same idea in the past also failed but didn't publish - after all, who wants to publish failure?
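As a back-of-the-envelope illustration of that question (a toy model with made-up numbers, not real data): if each dead end were published once, later labs could skip it; without negative results, every lab that independently picks a doomed idea pays the full cost again.

    import random

    random.seed(1)

    LABS = 1000    # labs, each picking one idea to pursue
    IDEAS = 200    # pool of candidate ideas
    DOOMED = 0.5   # fraction of ideas that are bound to fail

    doomed = set(random.sample(range(IDEAS), int(IDEAS * DOOMED)))
    picks = [random.randrange(IDEAS) for _ in range(LABS)]

    # Without published negatives, every pick of a doomed idea fails anew.
    failed_attempts = sum(1 for idea in picks if idea in doomed)
    # With published negatives, each doomed idea needs to fail only once.
    unavoidable = len({idea for idea in picks if idea in doomed})

    print(f"failed attempts with unpublished negatives: {failed_attempts}")
    print(f"failures needed if each dead end were published: {unavoidable}")
    print(f"wasted replications of known failures: {failed_attempts - unavoidable}")

With these (invented) parameters, roughly 500 of the 1000 labs fail, but only about 100 of those failures were unavoidable; the rest are re-runs of dead ends someone else had already hit.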
> A lot of research is supposed to be new, uncertain, high-risk, high-reward. So it should fail often.
Except that has nothing to do with papers not being read. Papers that detail failure are usually never written, and if they are, they're not published. But in any event, reports of failure should be read almost as much as reports of success.
In addition, bad research and unread papers are two completely separate problems that may not be related at all.
Papers are not read because researchers must churn out a lot of papers, even for results that are partial or not very interesting on their own, so we are flooded with what basically amounts to noise. It obviously makes sense that a lot of research turns out to be not very interesting, but that shouldn't result in papers being written and published. I'd guess that even those who publish such papers would rather have more time to bring ideas to the point where they become interesting, or to conclude that an approach has failed, but the system doesn't work that way, and that's a failure of the system.
While I agree that reports of failure can be important, in many cases experimental research is trying to develop a protocol to perform a measurement. A protocol that does not work is simply not as interesting as a protocol that does work.
For example, I am sure there are many plausible gene-editing protocols, but none are as robust as CRISPR. Of course the papers that describe CRISPR are going to be cited more, since that protocol is going to be used in many other studies.
Exactly. Take the same thing, but apply it to startup culture. It's true that 50% of startups (actually much more than that) fail spectacularly. But I don't see anyone saying we should stop doing startups.
A lot of research is supposed to be new, uncertain, high-risk, high-reward. So it should fail often.
[A lot of research is also supposed to be meticulous fact-checking, building the certainty that new breakthroughs can later be built on.]
IMO, there is a real problem with people doing bad research they know is bad, but I wouldn't say that the 50% unread figure is the thing to focus on.