Estimating the risk of cancer induction by low-dose radiation (e.g., diagnostic) remains one of the most contentious issues in modern science and has engendered often-strident debate (1,2), particularly with respect to the linear no-threshold (LNT) dose–response model. As John Boice, Jr., president of the National Council on Radiation Protection and Measurements (NCRP), has aptly stated (3), “LNT is not TNT, but differences in opinions sometimes appear explosive!” A critical assessment of the LNT model and a consideration of alternative dose–response models are presented in the article by Siegel, Pennington, and Sacks in this issue of The Journal of Nuclear Medicine (4). The article highlights the uncertainty associated with the LNT model and with LNT model–based risk factors.
In this invited perspective, we provide some background on the controversy regarding the validity of the LNT model and, specifically, its application in medicine. Fundamentally, the LNT model implies a uniform cancer risk per unit dose from higher to lower doses, meaning, for example, that a radiation dose of 10 mSv carries one hundredth the risk of a radiation dose of 1,000 mSv. Because the LNT model assumes there is no threshold dose for radiation-induced cancer, even a dose as low as 0.1 mSv is associated with a nonzero excess risk (i.e., one hundredth the risk of 10 mSv).
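Stated formally, the LNT assumption amounts to a simple proportionality (a schematic summary of the model as just described, not a formula quoted from any advisory-body report):

$$
\mathrm{Risk}(D) = \alpha D, \qquad \frac{\mathrm{Risk}(D_1)}{\mathrm{Risk}(D_2)} = \frac{D_1}{D_2},
$$

where \(D\) is the effective dose and \(\alpha\) is a fitted risk coefficient. Because no threshold term appears, the predicted excess risk is nonzero for every dose \(D > 0\).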
Under the LNT model, the number of fatal excess-cancer cases in an irradiated population is calculated in a deceptively simple manner: the number of persons exposed multiplied by the effective dose (mSv or rem) per person multiplied by the excess relative risk (ERR) per unit dose (/mSv or /rem). A widely cited ERR value is that recommended in NCRP Report 115 (5): 5 × 10⁻⁵ per person per mSv (or 5 × 10⁻⁴ per person per rem). Thus, if each person in a population of one million received an effective dose of 10 mSv (1 rem), the expected number of fatal excess-cancer cases in this population over their remaining lifespan would be 500 (1 × 10⁶ persons multiplied by 10 mSv multiplied by 5 × 10⁻⁵/mSv). This compares with a spontaneous, or background, lifetime incidence of about 300,000 cases (30%) otherwise occurring in such a population; the radiation-attributable cases thus represent an increase in overall incidence of only 0.17% [(500/300,000) × 100%].
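For concreteness, this arithmetic can be reproduced in a few lines of code (a minimal sketch; the quantities are those of the worked example above, the variable names are illustrative, and the 30% background lifetime incidence is the assumption stated in the text):

```python
# Worked LNT example: expected fatal excess cancers in an exposed population.
population = 1_000_000      # persons exposed
dose_msv = 10.0             # effective dose per person (mSv)
risk_per_msv = 5e-5         # NCRP Report 115 risk coefficient (per mSv)
background_rate = 0.30      # assumed background lifetime cancer incidence

excess_cases = population * dose_msv * risk_per_msv         # 500
background_cases = population * background_rate             # 300,000
percent_increase = 100.0 * excess_cases / background_cases  # ~0.17%

print(f"Excess cases: {excess_cases:.0f}")
print(f"Background cases: {background_cases:.0f}")
print(f"Increase over background: {percent_increase:.2f}%")
```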
Siegel et al. provide a detailed discussion of why the assumptions of the LNT model are counterintuitive and difficult to reconcile with the biology of DNA repair and with the well-established reduction in radiation toxicity achieved by dose fractionation in clinical radiation oncology. Nevertheless, the LNT model is currently recommended by advisory bodies such as the NCRP (6,7), the International Commission on Radiological Protection (8), and the United Nations Scientific Committee on the Effects of Atomic Radiation (9) and has been adopted by regulatory agencies such as the Nuclear Regulatory Commission (10).
The main reasons for the acceptance of the LNT model are that it is simple, that it fits data from several observational studies of radiation exposure and cancer development fairly well (7), and that no alternative model has convincingly been shown to provide a better fit to these data. However, a consistent mathematical fit to available dose–response data should not be construed as validation of the model. Siegel et al., among many others, argue that the data and associated analyses supporting the LNT model are contradicted by some epidemiologic and experimental studies, that the model overstates the risk of radiation carcinogenesis at doses on the order of 100 mSv (10 rem) or less, and that it does not account for credible evidence of a threshold for cancer induction, that is, a nonzero radiation dose below which there is no increased risk of cancer (2,11,12). The validity, applicability, and utility of the LNT model and of alternative models thus remain highly controversial (1,2).
The specific challenge in assessing the risk of cancer induction among patients undergoing diagnostic imaging studies is that there are very few, if any, reliable human data quantifying an increase in cancer incidence after exposure to diagnostic radiation doses (i.e., less than ∼100 mSv [10 rem]). The risks from low doses of radiation are therefore extrapolated by some investigators from the apparently linear relationship between cancer incidence and radiation exposure observed at markedly higher doses. The confidence intervals for these extrapolated risks are typically broad, however, and depend critically on the model used to extrapolate the data (as discussed by Siegel et al.). Because of these uncertainties, typical radiation doses from medical imaging have been interpreted as completely safe by some and as potentially dangerous by others. No prospective epidemiologic studies with appropriate nonirradiated controls have definitively demonstrated either adverse or hormetic (i.e., beneficial) effects of radiation doses under 100 mSv (10 rem) in humans, and current estimates of the risks of low-dose radiation indicate that very large epidemiologic studies with long-term follow-up would be needed to quantify any such risk or benefit; such studies may be logistically and financially prohibitive.
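To see why such studies would have to be so large, consider a back-of-the-envelope power calculation (a sketch under simplifying assumptions that are ours, not the authors': a two-arm comparison of lifetime cancer incidence, a normal approximation, two-sided 5% significance, 80% power, a 30% baseline incidence, and the LNT-predicted excess of 5 × 10⁻⁵/mSv × 10 mSv = 5 × 10⁻⁴ from the example above):

```python
from math import ceil
from statistics import NormalDist

def subjects_per_group(p0: float, excess: float,
                       alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size to detect a small absolute
    increase in a proportion (standard normal-approximation formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_b = NormalDist().inv_cdf(power)          # ~0.84
    p1 = p0 + excess
    p_bar = (p0 + p1) / 2
    n = (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / excess ** 2
    return ceil(n)

# 30% baseline lifetime incidence; LNT-predicted excess of 5e-4 at 10 mSv.
print(subjects_per_group(p0=0.30, excess=5e-4))  # roughly 13 million per group
```

On these assumptions, each arm would require on the order of 10⁷ subjects followed over a lifetime, which illustrates why definitive low-dose epidemiology has not been done.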
The most credible dose–response data for radiation carcinogenesis in humans mainly involve doses on the order of 1 Sv (100 rem) or greater, that is, 1–2 orders of magnitude greater than those encountered in diagnostic imaging studies. Such data include, most notably, the A-bomb survivor follow-up data. Pierce and Preston (13), for example, published an analysis of the Radiation Effects Research Foundation data on cancer risks among A-bomb survivors who received doses of less than 500 mSv (50 rem), comprising approximately 7,000 cancer cases among about 50,000 low-dose survivors. They concluded that linear risk estimates computed over the dose range 50–100 mSv do not overestimate cancer risks, that there is a statistically significant nonzero risk in the range 0–100 mSv (0–10 rem), and that the upper confidence limit on any possible threshold is 60 mSv (6 rem).
A handful of high-profile studies have, however, reported cancer risks from exposure to relatively low doses. The U.K. CT Study (14), a record-linkage study of the incidence of leukemia and myelodysplastic syndrome (MDS) and of brain cancer after CT scans in 178,000 pediatric patients (0–21 y old), reported ERRs of 36/Gy (0.36/rad) for leukemia and MDS and of 23/Gy (0.23/rad) for brain cancer. Even allowing for the higher cancer risk associated with irradiation in childhood, these values are high compared with the overall cancer ERR recommended by the NCRP for the general population (0.05/Gy) (5), and critical evaluations of the study have cited the absence of scan parameters, and therefore of organ doses, for individual patients. Another potentially confounding factor is reverse causation (15): because the children in this study were referred for imaging for some medical problem, their underlying condition, rather than the diagnostic irradiation itself, may have placed them at naturally increased risk for cancer.
In the International Nuclear Workers Study (16), which included a cohort of over 300,000 workers (over 8.2 million person-years) in the nuclear industry with detailed external dose data (mean dose, 21 mGy [2.1 rad]), the ERR for all cancers was 0.51/Gy (95% confidence interval [CI], 0.23–0.82/Gy), or 0.0051/rad (95% CI, 0.0023–0.0082/rad). In addition to possible uncertainty in personnel dose estimates, smoking and occupational asbestos exposure were identified as potential confounding factors; however, excluding deaths from lung cancer and pleural cancer did not affect the association between cancer risk and occupational radiation exposure. Although the ERR estimate for solid cancers in this study, 0.47/Gy (0.0047/rad), was higher than that for adults in the study of A-bomb survivors by Preston et al. (17), 0.32/Gy (0.0032/rad), the two estimates were judged to be statistically compatible.
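For readers comparing these numbers, recall that the ERR expresses the fractional increase in the cancer rate above the baseline (unexposed) rate; stated schematically (a standard epidemiologic definition, not a formula quoted from these studies):

$$
\mathrm{ERR}(D) = \frac{R(D) - R_0}{R_0}, \qquad \mathrm{ERR\ per\ unit\ dose} = \frac{\mathrm{ERR}(D)}{D},
$$

where \(R(D)\) is the cancer rate at dose \(D\) and \(R_0\) is the baseline rate. At the mean dose of 21 mGy in the International Nuclear Workers Study, an ERR of 0.51/Gy thus corresponds to a fractional rate increase of about 0.51 × 0.021 ≈ 1%.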
Importantly, even if one concedes the validity of the LNT model, it cannot be applied reliably to individuals but only to large populations (8), that is, populations sufficiently large to average out interindividual differences in radiation sensitivity related to sex, age, diet, and other lifestyle factors, as well as to intrinsic biology. Clinical care is the setting least forgiving of the large uncertainty in risk factors, regardless of the model from which those factors were derived, and applying population-derived risk factors with certitude to individual patients, or even to defined patient populations, is simply not justified. Although the debate over LNT will not be resolved anytime soon, one point should be abundantly clear, as reinforced by the article by Siegel et al.: the scale of the associated uncertainties is such that it is not appropriate to use such risk factors for clinical decision making and the management of individual patients.
Footnotes
Published online Oct. 6, 2016.
- Received for publication September 22, 2016.
- Accepted for publication September 23, 2016.
- © 2017 by the Society of Nuclear Medicine and Molecular Imaging.