Are Biomedical Research Practices Making Us Sicker?
Bud Wiederman, MD, MA, Evidence eMended Editor, Grand Rounds
May is a month with 5 Tuesdays, again unleashing me from my usual commentary on
AAP Grand Rounds articles. This time, I chose to do a book report on a recent, highly provocative publication:
Harris R. (2017). Rigor mortis: How sloppy science creates worthless cures, crushes hope, and wastes
billions. Philadelphia, PA: Basic Books.
Of my 400+ posts in Evidence eMended, this one has taken the most time. Worse, I even
had to put aside my nighttime leisure reading, currently a Ross Macdonald novel, so
I could finish reading and analyzing Harris's short book.
Harris is a science correspondent for NPR, and he's come out with a pretty provocative book
challenging today's climate of biomedical research. The book's title and subtitle
are a little over the top, but then so is my title for this posting! Focusing mainly
on preclinical biomedical research (in vitro and animal studies), he paints a grim view of the problems with
how such studies are conducted. Although at times I thought he glossed over some details
to make his point more emphatically, by and large, he gives us much food for thought.
Here are a few of the many take-home points I found interesting.
Perhaps the central point of the book is that flaws in preclinical study design, present
for decades, have led to a "reproducibility crisis": the field is peppered with
flawed studies, many published in prestigious journals, that are difficult to dislodge
from public view. Failure to recognize the flaws in such studies slows biomedical research
progress, because future studies are predicated on the validity of these findings.
Harris credits 2 studies appearing in the scientific literature a few years
ago with raising awareness of the problem. One, authored by John Ioannidis, is a sophisticated statistical analysis suggesting that most research findings are
false; I think I commented on this article in the early days of Evidence eMended,
but now I can't find it. (I need to do a better job of attaching labels to my postings!)
The second, by Begley and Ellis, is new to me. The authors, from a pharmaceutical company, found that only 6 out of
53 prominent preclinical studies had findings that could be replicated independently,
suggesting that the other 47 were based on flawed design or techniques.
Begley later came up with a list of six "red flags" for suspecting flawed preclinical research: lack of appropriate blinding, failure
to repeat basic experiments, failure to present all the results (cherry-picking the best
data!), lack of positive and negative controls, failure to ensure that reagents are valid, and
use of inappropriate statistical tests. Clinicians will recognize many of these as standard
criteria for good human research design and may be surprised that such problems are
widespread in preclinical research.
If you aren't becoming depressed yet, keep reading. Chavalarias, in conjunction with
our old friend Ioannidis, tabulated 235 forms of bias in biomedical research! (Speaking of bias, I should come clean here. Several years
ago, John Ioannidis's wife was a fellow in our infectious diseases fellowship program,
and I regard her very highly. By coincidence, the little bust of the Greek goddess
Hygeia next to my right ear in my photo on the Evidence eMended site was a gift from
her. It would probably be difficult for me to criticize her husband publicly.)
Nardone offered a comical (if it weren't so sad) analogy for the problem of cell line contamination in cancer research, suggesting that the Marx Brothers had taken over cell culture labs! I also learned
a new term from Harris: HARKing (Hypothesizing After the Results are Known), akin to post hoc analysis, a topic
I've mentioned on occasion in the past.
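The statistical danger of HARKing is easy to demonstrate. Below is a minimal Python sketch (the data are entirely made up, not from Harris's book): even when a treatment truly does nothing, measuring many endpoints and then reporting only the most dramatic one after seeing the results produces impressive-looking group differences by chance alone.

```python
import random

random.seed(1)

def noise_experiment(n_endpoints=20, n_subjects=30):
    """Simulate an experiment in which the treatment truly has no effect:
    every endpoint is pure noise in both groups.  Return the largest
    absolute difference in group means across all endpoints measured."""
    best_diff = 0.0
    for _ in range(n_endpoints):
        treated = [random.gauss(0, 1) for _ in range(n_subjects)]
        control = [random.gauss(0, 1) for _ in range(n_subjects)]
        diff = abs(sum(treated) / n_subjects - sum(control) / n_subjects)
        best_diff = max(best_diff, diff)
    return best_diff

# One honest, pre-specified endpoint vs. the post-hoc "best of 20":
single = noise_experiment(n_endpoints=1)
cherry_picked = noise_experiment(n_endpoints=20)
print(f"one pre-specified endpoint:    |difference| = {single:.2f}")
print(f"best of 20 post-hoc endpoints: |difference| = {cherry_picked:.2f}")
```

Run this a few times and the "best" post-hoc endpoint will, on average, look far more impressive than the single pre-specified one, which is exactly why hypotheses are supposed to come before the data.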
Part of the problem clearly is a mindset in preclinical research that is entrenched
in outdated methods, and changing a mindset doesn't happen overnight. Harris's Chapter
8, entitled "Broken Culture," points out the tremendous pressures facing biomedical
researchers today: competition for a small pot of research funding, a perverse
set of incentives for tenure and promotion at academic organizations that stress quantity
rather than quality of studies, and too much focus on journals' "impact factor." In my opinion, the journal impact factor has been hijacked into one of the worst tools
we have in academia today.
There is some light at the end of the tunnel. Instead of changing an entire culture,
Harris suggests focusing on 4 semi-easy fixes in preclinical research: randomize animals
in interventional studies, ensure masking of laboratory personnel to treatment/test
assignments of test tubes or animals, don't change a study endpoint after an experiment
begins, and use an adequate sample size. Also, we have a number of good tools to aid
everyone right now. Check out the Retraction Watch website to get an idea of how often scientific publications are retracted. And
investigators have had a guideline for animal study design available for many years:
ARRIVE, or Animal Research: Reporting of In Vivo Experiments.
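Two of Harris's four fixes, randomization and masking, are simple enough to sketch in a few lines of code. The following is a hypothetical illustration (the animal IDs and coding scheme are my own invention, not from the book): animals are shuffled into groups rather than assigned by convenience, and bench personnel handle only opaque codes while the code-to-animal key stays with someone outside the experiment.

```python
import random

random.seed(42)

# Hypothetical cohort of 12 animals (IDs invented for illustration).
animals = [f"mouse-{i:02d}" for i in range(1, 13)]

# Fix 1: randomize assignment rather than grabbing convenient cages.
random.shuffle(animals)
half = len(animals) // 2
assignment = {a: ("treatment" if i < half else "control")
              for i, a in enumerate(animals)}

# Fix 2: mask the bench scientists.  They see only opaque codes;
# random.sample guarantees the 12 code numbers are unique.
code_numbers = random.sample(range(1000, 10000), len(animals))
codes = {a: f"X{n}" for a, n in zip(animals, code_numbers)}
blinded_labels = sorted(codes.values())  # all the lab staff ever see

print(blinded_labels[:3])
```

Nothing here is sophisticated, which is rather the point: these fixes cost almost nothing compared with running a flawed experiment twice.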
Well, perhaps I've prattled on too long here, but I found this to be an enlightening
book, and actually a quick read if you're not taking as many notes as I did. While
the rest of you ponder what you've read here, and maybe consider reading the book,
I'm happy to go back to the pages of Macdonald's The Way Some People Die. No, it's not a medical novel; it's a prime example of noir fiction, with an opening
paragraph to die for! (Alas, I couldn't find the paragraph quoted anywhere online,
so maybe some of you will go find this book in addition to Harris's!)