What does a “significant difference” between treatments mean?
Well, this is a trick question, because ‘significant difference’ can have several meanings.
First, it can mean a difference that is actually important to the patient. However, when the authors of research reports state that there is a ‘significant difference’ they are often referring to ‘statistical significance’. And ‘statistically significant differences’ are not necessarily ‘significant’ in the everyday sense of the word. A difference between treatments which is very unlikely to be due to chance – ‘a statistically significant difference’ – may have little or no practical importance.
Take the example of a systematic review of randomized trials comparing tens of thousands of healthy men who took an aspirin a day with tens of thousands of other healthy men who did not. The review found a lower rate of heart attacks among the aspirin takers, and the difference was ‘statistically significant’ – that is, it was unlikely to be explained by the play of chance.
But that doesn’t mean that it is necessarily of practical importance. If a healthy man’s chance of having a heart attack is already very low, taking a drug to make it even lower may be unjustified, particularly since aspirin has side-effects, some of which – bleeding, for example – are occasionally lethal.[1] On the basis of the evidence from the systematic review, we can estimate that, if 1,000 men took an aspirin a day for ten years, five of them would avoid a heart attack during that time, but three of them would have a major haemorrhage.
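To make the distinction concrete, here is a minimal Python sketch. The trial counts in the first part are made-up numbers chosen purely to illustrate how a two-proportion z-test flags a ‘statistically significant’ difference (they are not figures from the review). The second part uses the figures quoted above – five heart attacks avoided and three major haemorrhages per 1,000 men over ten years – to express practical importance in one common way, as the ‘number needed to treat’ and ‘number needed to harm’.

```python
from math import sqrt, erf

def two_proportion_z(events_a, n_a, events_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p-value)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts (NOT from the review): heart attacks among
# 20,000 aspirin takers versus 20,000 non-takers.
z, p = two_proportion_z(events_a=170, n_a=20_000, events_b=220, n_b=20_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p: 'statistically significant'

# Practical importance, using the per-1,000 figures quoted above:
# over ten years, 5 heart attacks avoided, 3 major bleeds caused.
nnt = 1 / (5 / 1_000)  # number needed to treat: about 200 men
nnh = 1 / (3 / 1_000)  # number needed to harm: about 333 men
print(f"NNT = {nnt:.0f}, NNH = {nnh:.0f}")
```

The point of the second calculation is that a difference can pass a significance test while still requiring roughly 200 men to take aspirin for ten years to prevent one heart attack, with one major bleed caused for roughly every 333 men treated.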