Cartoon and blog about how poorly performed systematic reviews and meta-analyses may misrepresent the truth. Key Concepts addressed:
- 2-12 Single studies can be misleading
- 2-8 Consider all of the relevant fair comparisons
- 2-9 Reviews of fair comparisons should be systematic
Goldilocks is right: that review is FAR too complicated. The methods section alone is 652 pages long! Which wouldn’t be so bad, if it weren’t also a few years out of date. The review took so long to do, and to put through a sufficiently rigorous quality check, that it was already out of date the day it was released. That happens often enough to be rather disheartening.
When the methodology for systematic reviewing gets overly rococo, the point of diminishing returns is passed. That’s a worry, for a few reasons. For one, it’s inefficient: more reviews could be done with the same resources. Secondly, more complex methodology can be daunting, and hard for researchers to apply consistently. Thirdly, when a review gets very elaborate, reproducing or updating it isn’t going to be easy either.
You need to check the methods section of every review before you take its conclusions seriously – even when it claims to be “evidence-based” or systematic. People can take far too many shortcuts. Fortunately, it’s not often that a review gets as bad as the second one Goldilocks encountered here. The authors of that review decided to include only one trial for each drug “in order to keep the tables and figures to a manageable size.” Gulp!
Getting to a good answer also quite simply takes some time and thought. Making real sense of evidence and the complexities of health, illness and disability is often just not suited to a “fast food” approach. As the scientists behind the Slow Science Manifesto point out, science needs time for thinking and digesting.
To cover more ground, though, people are looking for reasonable ways to cut corners. There are many kinds of rapid review, including relying on previous systematic reviews when doing new ones. These can be, but aren’t always, rigorous enough for us to be confident about their conclusions.
You can see this process at work in the set of reviews discussed at Statistically Funny a few cartoons ago. Review number 3 there is in part based on review number 2 – without re-analysis. And then review number 4 is based on review number 3.
So if one review gets it wrong, other work may be built on weak foundations. Li and Dickersin suggest this might be a clue to the perpetuation of incorrect techniques in meta-analyses: reviewers who got it wrong in their review were citing other reviews that had gotten it wrong, too. (That statistical technique, by the way, has its own cartoon.)
Luckily for Goldilocks, the bears had found a third review. It had sound methodology you can trust. It had been totally transparent from the start – registered in PROSPERO, the international prospective register of systematic reviews. Goldilocks can get at the fully open review quickly via PubMed Health, and its data are in the Systematic Review Data Repository, open for others to check and re-use. Ahhh – just right!
Text reproduced from http://statistically-funny.blogspot.co.uk/. Cartoons are available for use, with credit to Hilda Bastian.