Viewpoint
Randomised comparisons of medical tests: sometimes invalid, not always efficient
Section snippets
Randomised comparisons
Randomised comparisons have several advantages over other methods of comparing medical interventions. Random assignment of patients to the strategies under study should prevent any bias in the selection of patients: any differences at baseline between the groups must be attributed to chance.4, 5 This basic principle opens the way to experimental statistical methods, such as significance testing and the calculation of confidence intervals. Randomised controlled trials are also …
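The principle that randomisation makes baseline differences attributable to chance alone can be illustrated with a small simulation. The sketch below is purely hypothetical (the cohort, the baseline variable, and the sample sizes are invented for illustration): it randomly assigns simulated patients to two arms and uses a permutation test to quantify how often chance alone produces a baseline difference as large as the one observed.

```python
import random
import statistics

random.seed(7)

# Hypothetical cohort: each patient has one baseline characteristic (age, years).
ages = [random.gauss(62, 10) for _ in range(200)]

# Random assignment to the two strategies under study.
random.shuffle(ages)
arm_a, arm_b = ages[:100], ages[100:]

# Under randomisation, any baseline difference between arms is due to chance.
diff = statistics.mean(arm_a) - statistics.mean(arm_b)

# A simple permutation test: how often does chance alone produce a difference
# at least this large? (a two-sided p-value)
n_perm = 2000
count = 0
for _ in range(n_perm):
    random.shuffle(ages)
    d = statistics.mean(ages[:100]) - statistics.mean(ages[100:])
    if abs(d) >= abs(diff):
        count += 1
p_value = count / n_perm

print(f"baseline difference: {diff:.2f} years, p = {p_value:.3f}")
```

In a real trial the same logic underlies the significance tests and confidence intervals mentioned above: the randomisation itself justifies the reference distribution against which observed differences are judged.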
Trials of a single test
Although trials are often undertaken for issues in therapy and prevention, there is no a priori reason why they should not be used to resolve difficulties in diagnosis and monitoring. Yet one should keep in mind how tests affect patient outcome.8 The most common way tests can affect patient outcome is when the information from these tests is used to guide decisions to start, withhold, modify, or stop treatment.
Consider a hypothetical situation in which current clinical management consists of …
Trials comparing two tests
Concerns of validity and efficiency also apply when two tests are compared. As an example, consider scintigraphy in patients with ischaemic heart disease, and whether or not to schedule patients for percutaneous transluminal coronary angioplasty. Scintigraphy can assess the functional impact of coronary lesions but, because of the costs, risks, and side-effects of percutaneous transluminal coronary angioplasty, only patients with a lesion that sufficiently affects perfusion are referred for …
Other threats to validity
There are additional difficulties for those who want to translate trial results to clinical practice. In figures 1 and 2, a clear and prespecified link between test results and management decisions is shown: test-positive patients received one treatment, test-negative patients another. In some trials, such a link is absent. A recent trial12 reported comparable pregnancy rates in couples randomised to a subfertility work-up with a post-coital test, or to the work-up without the post-coital test, …
References (19)
- et al. The role of before-after studies of therapeutic impact on the evaluation of diagnostic technologies. J Chron Dis (1986)
- et al. Traditional health outcomes in the evaluation of diagnostic tests. Acad Radiol (1999)
- et al. Randomised controlled trial of magnetic resonance pelvimetry in breech presentation at term. Lancet (1997)
- et al. Magnetic resonance pelvimetry in breech presentation. Lancet (1998)
- et al. A meta-analysis of the therapeutic role of oil-soluble contrast media at hysterosalpingography: a surprising result? Fertil Steril (1994)
- et al. The efficacy of diagnostic imaging. Med Dec Mak (1991)
- Evaluating and comparing imaging techniques: a review and classification of study designs. Br J Radiol (1987)
- Clinical trials: a practical approach (1983)
- et al. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA (1995)