JunkScience.com has been right for 17.5 years!
A key passage: "The achievements of science are amazing yet the majority of research effort is currently wasted."
Here’s the media release:
Most published medical research is false; here's how to improve
In 2005, in a landmark paper viewed well over a million times, John Ioannidis explained in PLOS Medicine why most published research findings are false. To coincide with PLOS Medicine’s 10th anniversary he responds to the challenge of this situation by suggesting how the research enterprise could be improved.
Research, including medical research, is subject to a range of biases, which means that misleading or useless work is sometimes pursued and published while work of value is ignored. The risks and rewards of academic careers, the structures and habits of peer-reviewed journals, and the way universities and research institutions are set up and governed all have profound effects on what research scientists undertake, how they choose to do it and, ultimately, how patients are treated. Perverse incentives can lead scientists to waste time producing and publishing results that are wrong or useless. Understanding these incentives and altering them offers a way to drastically reshape research and improve medical knowledge.
In a provocative and personal essay, designed to spur readers into thinking about how research careers could be redesigned to encourage better work, Ioannidis suggests practical ways in which the current situation could be improved. He describes how successful practices from some branches of science could be transferred to others that have performed poorly, and suggests ways in which academic structures could extract greater benefit from the work of researchers, administrators, publishers and the research funding that supports them all.
“The achievements of science are amazing yet the majority of research effort is currently wasted,” asserts Ioannidis. He calls for testing interventions to improve the structure of scientific research, and doing so with the rigor normally reserved for testing drugs or hypotheses.
###
Biostatistics is rarely studied by primary researchers. I recall that in my graduate biostatistics class, only 2 of the 30 or so students were physicians; most were from the allied health sciences. On study design and inappropriate use of statistics alone, a healthy percentage of published medical studies can be tossed out. When fraud is taken into account . . .
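For concreteness, here is a minimal sketch of the arithmetic behind Ioannidis's 2005 argument. The positive predictive value (PPV) of a claimed finding follows from the field's pre-study odds R, the test's power 1 − β, and the significance threshold α; the scenarios and numbers below are illustrative assumptions, not figures from the paper.

```python
# Sketch of the PPV formula from Ioannidis (2005),
# "Why Most Published Research Findings Are False".
# R: pre-study odds that a tested relationship is true;
# alpha: type I error rate; power = 1 - beta.

def ppv(R, power, alpha=0.05):
    """PPV = (1 - beta) * R / (R - beta * R + alpha)."""
    beta = 1 - power
    return power * R / (R - beta * R + alpha)

# Hypothetical scenarios, invented for illustration:
scenarios = [
    ("well-powered confirmatory trial", 1.00, 0.80),  # 1:1 pre-study odds
    ("typical underpowered study",      0.10, 0.20),  # 1:10 odds, 20% power
    ("exploratory scan of many ideas",  0.01, 0.80),  # 1:100 odds
]

for name, R, power in scenarios:
    print(f"{name}: PPV = {ppv(R, power):.2f}")
```

Even with no fraud or bias at all, a field that tests many long-shot hypotheses with underpowered studies will publish mostly false positives: the second and third scenarios above come out well under 50%.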
Statistics can corrupt, too. Just look at the state of modern physics. Many people treat it as an easy way around hard thinking rather than as an aid to thinking.
It would be more useful if students were introduced to Jaynes’s ideas of plausible reasoning before they committed themselves to the technicalities of statistics. Jaynes’s writing is easy to follow and he covers a lot of problems in statistics and science in general:
http://web.archive.org/web/20001201214000/http://bayes.wustl.edu/etj/prob.html
I think everybody should spend a few minutes reading at least the preamble and the first chapter — they are quite entertaining.
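For anyone who wants a concrete taste of what Jaynes means by plausible reasoning, here is a toy sketch in Python: Bayes' rule turning "this evidence would be likely if the hypothesis were true" into an updated degree of belief. The scenario and all numbers are invented for illustration and are not from Jaynes's text.

```python
# Toy plausible reasoning: update belief in hypothesis H given evidence E.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' rule."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# An initially implausible hypothesis, with strongly diagnostic evidence:
print(posterior(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.05))  # ~0.15

# The same evidence observed on three independent occasions:
p = 0.01
for _ in range(3):
    p = posterior(p, 0.9, 0.05)
print(p)  # belief climbs close to certainty
```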
Yes, like outfitting the comet lander with a harpoon and ice screws.
Expectation bias is not always bad, though. Many useful research activities are guided by vague clues or crazy ideas. For example, a doctor comes to me with a protein he crystallized from human urine and asks me to help find the gene that might be coding for it. I rummage through the genome, read papers, talk to people, and eventually find that it is a splice variant of another known protein.
I had a huge expectation bias coming in: I was told the protein's crystal structure, its molecular weight, its approximate amino acid composition, and where it is expressed. Those clues were not sufficient to simply fetch the expected result (the expectation being that an unassigned gene for that protein must be hiding somewhere), and it took months of searching, additional clue-gathering (growing more biased all the while), and some incredible luck to nail down its location. But without those initial clues, I wouldn't even have dared to start.
Research that is goal-oriented is bound to be tainted by expectation bias.
All of the sciences should make statistics a core requirement. A simple average or standard deviation is usually not enough, yet few graduates are competent to handle even a simple linear regression properly, let alone to apply one without expecting it to work as a forecasting tool.
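As an illustration of that particular misuse, here is a hedged sketch with synthetic data: an ordinary least-squares line that describes the observed range well, then fails badly when extrapolated as a forecast. The saturating "true" process is an assumption invented for the example.

```python
# OLS describes the data it was fit to; extrapolating it is a gamble.
import random

random.seed(0)

def truth(x):
    # Hypothetical process: linear response that saturates at y = 20.
    return 2.0 * x if x <= 10 else 20.0

xs = [x * 0.5 for x in range(21)]                 # observed range: 0..10
ys = [truth(x) + random.gauss(0, 1.0) for x in xs]

# Closed-form OLS slope and intercept.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
slope_den = sum((x - mean_x) ** 2 for x in xs)
slope = slope_num / slope_den
intercept = mean_y - slope * mean_x

# In-sample, the line is a fine description...
print(f"fit: y = {slope:.2f} x + {intercept:.2f}")
# ...but a forecast at x = 50 assumes the line continues, which nothing
# in the observed data supports: the true process levels off near 20.
print(f"naive forecast at x = 50: {slope * 50 + intercept:.1f} (true ~20)")
```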
Research that is goal-oriented is bound to be wasteful. I'd call it a success if only 85% of all research were fruitless. But if 85% of research publications are not credible, that is a problem that needs correcting.
Reblogged this on This Got My Attention and commented:
85% wasted? Unbelievable, yet it comes from top-notch researchers at Stanford. Check out the paper.