This appeared late last week.
How science goes wrong
Scientific research has changed the world. Now it needs to change itself
Oct 19th 2013 |From the print edition
A SIMPLE idea underpins science: “trust, but verify”. Results should always be subject to challenge from experiment. That simple but powerful idea has generated a vast body of knowledge. Since its birth in the 17th century, modern science has changed the world beyond recognition, and overwhelmingly for the better.
But success can breed complacency. Modern scientists are doing too much trusting and not enough verifying—to the detriment of the whole of science, and of humanity.
Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis (see article). A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic. Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
What a load of rubbish
Even when flawed research does not put people’s lives at risk—and much of it is too far from the market to do so—it squanders money and the efforts of some of the world’s best minds. The opportunity costs of stymied progress are hard to quantify, but they are likely to be vast. And they could be rising.
One reason is the competitiveness of science. In the 1950s, when modern academic research took shape after its successes in the second world war, it was still a rarefied pastime. The entire club of scientists numbered a few hundred thousand. As their ranks have swelled, to 6m-7m active researchers on the latest reckoning, scientists have lost their taste for self-policing and quality control. The obligation to “publish or perish” has come to rule over academic life. Competition for jobs is cut-throat. Full professors in America earned on average $135,000 in 2012—more than judges did. Every year six freshly minted PhDs vie for every academic post. Nowadays verification (the replication of other people’s results) does little to advance a researcher’s career. And without verification, dubious findings live on to mislead.
----- Lots Omitted
Science still commands enormous—if sometimes bemused—respect. But its privileged status is founded on the capacity to be right most of the time and to correct its mistakes when it gets things wrong. And it is not as if the universe is short of genuine mysteries to keep generations of scientists hard at work. The false trails laid down by shoddy research are an unforgivable barrier to understanding.
What does this mean for those interested in e-Health? To me, all of this implies a number of things.
First, we need e-Health assessments and publications that are conducted with a view to being replicable and transparent.
Second, we need the end points examined in these studies to be focussed on clinical outcomes and to be demonstrably achievable in the real world. The difference between finding an effect with a bespoke, hand-crafted solution in a single hospital and seeing an improvement in population-level health measures attributable to Health IT is vast.
Third, we need to make sure, as the article points out, that failures are documented so we can be confident the lessons learnt are being properly captured and understood.
Fourth, we need to be sure that whatever is measured in a study is genuinely clinically meaningful.
Last, for me, we really do need to see publication happen when it is likely to make a difference, rather than just because the publish-or-perish paradigm is active. I would much rather read 20 quality, meaningful publications a year than the zillions of abstracts that always seem to be floating around and which make it very hard to see the wood for the trees.
The scientific endeavour has made a great contribution to the world, but if we don’t focus on quality (and replicability) rather than quantity we may do ourselves enormous harm.
The pressures on Australian universities at present are pretty extreme, and it is important these pressures do not lead to poor-quality, rushed research.
David.
I think the piece is part of a move towards challenging the publish-or-perish mentality - at the same time, the NH&MRC needs more technical advice from health informaticians about the eHealth or mHealth projects they fund and their clinical significance. A change I would introduce at NH&MRC level is to recruit an informatician when assessing these types of applications. Too many eHealth manuscripts report positive results based on what I consider to be weak science or "low hanging fruit".
One of the great promises of eHealth is that it automates the evaluation of clinical performance by allowing every recorded clinical action to become a data point in a published report. In this sense, eHealth provides a path to a new way of looking at health care that skips over the need for traditional publications. On this, it is interesting to see major medical publishers seeing the writing on the wall and moving into the "order sets" business.
Have a look at this disturbing landmark paper in PLOS Medicine that must make every researcher think twice:
Why Most Published Research Findings Are False
http://tiny.cc/dt7k6w