Reducing biases in judging unanticipated effects of treatments
Key Concepts addressed:
Important unanticipated effects of treatments are often first suspected by people using or prescribing treatments. As with anticipated effects of treatments, steps must be taken to reduce biases and the play of chance in assessing suspected unanticipated effects.
It is only to be expected that unanticipated effects of treatments will emerge when new treatments are introduced more widely. Initial tests – for example, those required to license new drugs for marketing – cover at most a few hundred or a few thousand people treated for a few months. Only relatively frequent and short-term unanticipated effects are likely to be picked up at this stage.
Rare treatment effects, or those that take some time to develop, will not be discovered until treatment tests have lasted long enough or until there has been more widespread use of treatments. Moreover, new treatments will often be used in people who may differ in important ways from those who participated in the original tests. They may be older or younger, of a different sex, more or less ill, living in different circumstances, or suffering from other health problems in addition to the condition at which the treatment is targeted. These differences may modify treatment effects, and new, unanticipated effects may emerge (see special issue of BMJ 3 July 2004).
Unanticipated effects of treatments, whether adverse or beneficial, are usually detected and verified rather differently from the hoped-for effects of new treatments. They are sometimes first suspected by health professionals or patients (Venning 1982). Identifying which of these initial hunches reflect real effects of treatments poses a challenge.
If the unanticipated effect of a treatment is very striking and occurs quite often after the treatment has been used, it may be noticed spontaneously by health professionals or patients. For example, babies born without limbs are almost unheard of, so when a sudden increase in their numbers occurred in the 1960s it naturally raised concerns. All mothers of such babies had used a newly marketed anti-nausea drug – thalidomide – prescribed during early pregnancy, so this was likely to be the cause and little further assessment was necessary (McBride 1961). Unanticipated beneficial effects of drugs are often detected in similar ways, for example, when it was found that a drug to treat psychosis also lowered cholesterol (Goodwin 1991).
When such striking relationships are noticed, they often turn out to be confirmed as real unanticipated effects of treatment (Venning 1982). Many hunches about unanticipated effects, however, are based on far less convincing evidence. So, as with tests designed to detect hoped-for effects of treatments, planning tests to confirm or dismiss less striking suspected unanticipated effects involves avoiding biased comparisons [EE 2.0]. Studies to test whether suspected unanticipated effects of treatment are real must observe the principle of comparing ‘like with like’. Random allocation to treatments is the ideal way to accomplish this. Only rarely, however, can suspected treatment effects be investigated by further analysis or follow-up of people who were randomly allocated to treatments before the effects were suspected (Hemminki and McPherson 1997). The challenge is therefore to assemble unbiased comparison groups in other ways, often using information collected routinely during health care.
In these studies, it actually helps that the suspected effects were not anticipated at the time that treatment decisions were taken. Because the unanticipated effect is usually a different condition or disease from the one for which the treatment was prescribed, no account could have been taken of the risk of that condition when people were being selected for treatment (Vandenbroucke 2004a).
For example, when hormone replacement therapy (HRT) was introduced for treating menopausal symptoms, a woman’s risk of developing venous thrombosis was unlikely to have been taken into account because most doctors and women thought it was irrelevant. There was therefore no reason to expect that women who were prescribed HRT differed in their risk of developing venous thrombosis from those who did not receive the drug. The basis was thus established for fair tests, and these showed that HRT increases the risk of venous thrombosis.
When a suspected unanticipated effect relates to a treatment for a common health problem (such as heart attack) but does not occur very often with the new treatment (or is not completely relieved by it), large-scale surveillance of people receiving the treatment is needed to detect the unanticipated effect. For example, although some people thought that aspirin might reduce the risk of heart attack and began fair tests of this theory in patients in the late 1960s (Elwood et al. 1974), most people would have thought that the theory was highly implausible. The breakthrough came when a large study was done to detect unanticipated adverse effects of drugs: researchers noticed that people admitted to hospital with heart attacks were less likely to have recently taken aspirin than apparently similar patients (Boston Collaborative Drug Surveillance Group 1974). These findings were consistent with those of a fair test, in which people had been allocated at random to receive or not receive aspirin after heart attack. The two reports were published back-to-back in the same issue of the British Medical Journal.
The ground rules for detecting and investigating unanticipated effects of treatments were first set out clearly in the late 1970s (Jick 1977; Colombo et al. 1977). They drew on the collective experience of investigating unanticipated effects which had accumulated following the thalidomide disaster. The requirements for one important type of research, case-control studies of possible adverse effects of treatment, were laid down in a paper based on the experiences of researchers in Boston and Oxford (Jick and Vessey 1978). With many powerful treatments introduced since that time, this aspect of fair tests of treatments remains just as challenging and important today as it was then (Vandenbroucke 2004b; Vandenbroucke 2006; Papanikolaou et al. 2006).
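The arithmetic behind a case-control comparison of the kind described above can be sketched briefly. Using entirely hypothetical counts (not data from any of the cited studies), the association between an exposure, such as recent aspirin use, and an outcome, such as admission with heart attack, is summarised by an odds ratio:

```python
# A minimal sketch of the odds-ratio calculation used in case-control
# studies. All counts below are hypothetical illustrations.

def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Odds of exposure among cases divided by odds of exposure among controls."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Hypothetical 2x2 table: recent aspirin use among patients admitted
# with heart attack (cases) versus apparently similar patients
# admitted for other reasons (controls).
or_value = odds_ratio(exposed_cases=20, unexposed_cases=180,
                      exposed_controls=50, unexposed_controls=150)
print(round(or_value, 2))  # prints 0.33
```

An odds ratio below 1, as in this made-up example, would point in the same direction as the aspirin surveillance finding: cases were less likely than controls to have been exposed. Ruling out bias and chance as explanations requires the careful design and systematic review discussed in this essay.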
It is important to recognise that individual reports suggesting or dismissing suspicions about unanticipated effects of treatments can be misleading. As with all other fair tests of treatment, possible unanticipated effects of treatment must be investigated using systematic reviews of all the relevant evidence, such as those that confirmed the relationship between HRT and heart disease, stroke and breast cancer (Hemminki and McPherson 1997; Collaborative Group on Hormonal Factors in Breast Cancer 1997).
The text in these essays may be copied and used for non-commercial purposes on condition that explicit acknowledgement is made to The James Lind Library (www.jameslindlibrary.org).
Boston Collaborative Drug Surveillance Group (1974). Regular aspirin intake and acute myocardial infarction. BMJ 1:440-443.
Collaborative Group on Hormonal Factors in Breast Cancer (1997). Breast cancer and hormone replacement therapy: collaborative reanalysis of data from 51 epidemiological studies of 52,705 women with breast cancer and 108,411 women without breast cancer. Lancet 350:1047-1059.
Colombo F, Shapiro S, Slone D, Tognoni G, eds (1977). Epidemiological Evaluation of Drugs. Amsterdam: Elsevier/North Holland Biomedical Press.
Elwood PC, Cochrane AL, Burr ML, Sweetnam PM, Williams G, Welsby E, Hughes SJ, Renton R (1974). A randomised controlled trial of acetyl salicylic acid in the secondary prevention of mortality from myocardial infarction. BMJ 1:436-440.
Goodwin JS (1991). The empirical basis for the discovery of new therapies. Perspectives in Biology and Medicine 35:20-36.
Hemminki E, McPherson K (1997). Impact of postmenopausal hormone therapy on cardiovascular events and cancer: pooled data from clinical trials. BMJ 315:149-153.
Jick H (1977). The discovery of drug-induced illness. New England Journal of Medicine 296:481-485.
Jick H, Vessey M (1978). Case-control studies in the evaluation of drug-induced illness. American Journal of Epidemiology 107:1-7.
McBride WG (1961). Thalidomide and congenital abnormalities. Lancet 2:1358.
Papanikolaou PN, Christidi GD, Ioannidis JPA (2006). Comparison of evidence on harms of medical interventions in randomized and nonrandomized studies. CMAJ 174:635-641.
Vandenbroucke JP (2004a). When are observational studies as credible as randomised trials? Lancet 363:1728-1731.
Vandenbroucke JP (2004b). Benefits and harms of drug treatments. BMJ 329:2-3.
Vandenbroucke JP (2006). What is the best evidence for determining harms of medical treatment? CMAJ 174:645-646.
Venning GR (1982). Validity of anecdotal reports of suspected adverse drug reactions: the problem of false alarms. BMJ 284:249-254.