Resource Link: View the Cartoon
Cherry picking results can be misleading.
Unfortunately, little Suzy isn’t the only one falling for the temptation to dismiss or explain away inconvenient performance data. Healthcare is riddled with this, as people pick and choose studies that are easy to find or that prove their points.
In fact, most reviews of healthcare evidence
don’t go through the painstaking processes needed to systematically minimize bias and show a fair picture. You can read more about how it’s done thoroughly in this explanation of systematic reviews at PubMed Health.
A fully systematic review very specifically lays out a question and how it’s going to be answered. Then the researchers stick to that study plan, no matter how welcome or unwelcome the results. They go to great lengths to find the studies that have looked at their question, and they analyze the quality and meaning of what they find.
The researchers might do a meta-analysis – a statistical technique to combine the results of studies (explained
here at Statistically Funny). But you can have a systematic review without a meta-analysis – and you can do a meta-analysis of a group of studies without doing a systematic review.
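To make the idea of statistically combining results concrete, here is a minimal sketch (not from the original post, using made-up trial numbers) of one common meta-analysis method: fixed-effect, inverse-variance pooling, where each trial's effect estimate is weighted by the inverse of its squared standard error.

```python
# Illustrative sketch with hypothetical data: fixed-effect, inverse-variance
# meta-analysis of three made-up trials. Each trial reports an effect
# estimate (e.g. a mean difference) and its standard error (SE); trials
# with smaller SEs (more precise results) get proportionally more weight.

import math

# Hypothetical trials: (effect estimate, standard error)
trials = [(0.30, 0.10), (0.10, 0.20), (0.25, 0.15)]

# Weight each trial by 1 / SE^2
weights = [1 / se**2 for _, se in trials]

# Pooled effect: weighted average of the individual effects
pooled_effect = sum(w * eff for (eff, _), w in zip(trials, weights)) / sum(weights)

# Pooled standard error: always smaller than any single trial's SE,
# which is why combining trials narrows the uncertainty
pooled_se = math.sqrt(1 / sum(weights))

# Approximate 95% confidence interval for the pooled effect
ci_low = pooled_effect - 1.96 * pooled_se
ci_high = pooled_effect + 1.96 * pooled_se

print(f"Pooled effect: {pooled_effect:.3f} "
      f"(95% CI {ci_low:.3f} to {ci_high:.3f})")
```

Note this fixed-effect model assumes all trials estimate the same underlying effect; real meta-analyses often use a random-effects model instead when the trials differ in populations or methods.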
To help make it easier for people to sift out the fully systematic from the less thorough reviews, a group of us, led by
Elaine Beller, have just published guidelines for abstracts of systematic reviews. It’s part of the PRISMA Statement initiative to improve reporting of systematic reviews.
A quick way to find systematic reviews is the National Library of Medicine’s
PubMed Health. It’s a one-stop shop of systematic reviews, information based on systematic reviews and key resources to help you understand clinical effectiveness research. You can read more about PubMed Health here.
Do systematic reviews entirely solve the cherry-picking problem illustrated by those school grades? Unfortunately, not always.
Many trials aren’t even published at all, and no amount of searching or digging can get to them. This happens even when the trial has good news, but it happens more often with disappointing results. The “fails” can be very well-hidden. Yes, it’s as bad as it sounds: Ben Goldacre explains the problem and its consequences here.
You can help by signing up to the
All Trials campaign – please do, and encourage everyone you know to do it too. Healthcare interventions simply won’t all be able to have reliable report cards until the trials are not just done, but easy to get at.
Text reproduced from http://statistically-funny.blogspot.co.uk/ . Cartoons are available for use, with credit to Hilda Bastian.