
The meeting of the Society of Nuclear Medicine in Toronto highlighted the importance of PET to the medical community and to the patients we serve. PET is proving to be much more than a diagnostic tool: Data presented at the meeting demonstrated its importance as a predictive assay of treatment response and as a means of planning and monitoring therapy. Unfortunately, a side issue at the meeting identified an important emerging threat to the imaging community: the refusal of the Ontario government to publicly fund the introduction of clinical PET services because of a perceived absence of supporting data, leading to a requirement that the provincial nuclear medicine community complete a series of clinical trials to “prove” effectiveness. PET is funded in the remainder of the country.
This decision reflects the view of Cancer Care Ontario and the Ontario Ministry of Health that no literature supports the clinical effectiveness of PET. Indeed, at the time of the SNM meeting, a senior member of Cancer Care Ontario was quoted in the press as saying that the technology should not be introduced because it “could result in the misdiagnosis of patients” (1). This belief is based on a Health Technology Assessment (HTA) review performed in 2001 (2). HTA is “a multidisciplinary field that studies the medical, social, ethical, and economic implications of the development, use and diffusion of health technologies” (3). This definition is quoted in an overview by Rotstein and Laupacis (4), who argue, using PET as a case study, that HTA is a bridge between science and policy. In other words, HTA is a policy tool, often used to justify negative funding decisions, rather than a rigorous scientific methodology. HTA has become an important instrument for payers in evaluating the introduction of new interventions and is increasingly being applied to imaging technologies.
If one examines the multiple HTA reports on PET, the lack of scientific rigor is apparent. HTA reports are not peer reviewed and appear to lack commonality of process both between and within reviews. Our group has examined 12 HTA reports from 9 different jurisdictions, including the United States, Quebec, Scotland, and Sweden, many of which are referenced by the Institute for Clinical Evaluative Sciences (ICES) review (2). The number of publications reviewed in these HTA reports ranged from 150 to 1,002, with 5 reports not stating the number. The reports included publications that evaluated PET scanners, publications that evaluated coincidence units, and publications that simply did not state the equipment evaluated. Overall, PET was concluded to be an effective modality in 7 reports and ineffective in 3, although all noted a need for further research. It is difficult to have confidence in a methodology that can produce such disparate results from an evaluation of the same literature.
Unfortunately, HTA algorithms were designed to assess the efficacy of new drugs and medical interventions, although even in these areas questions remain about the effectiveness of the reviews (5,6). The methodologies may not be particularly well suited to many of these clinical issues, but they are totally unsuited to evaluating the clinical importance of imaging tests. For example, one outcome assessed has been the survival benefit conferred by an imaging test. Such a benefit is rarely demonstrable in oncology management, and this endpoint takes no account of the importance of a negative result in patient care, the role of imaging in treatment planning and assessment of response, the reduction in downstream costs (such as operations avoided because of upstaging), and the contribution of imaging to diagnostic confidence. Until methodologies are developed to incorporate these endpoints, recognizing that some have already been successfully introduced for PET, the results of HTA of imaging modalities will remain arbitrary and capricious.
The reality of clinical practice is that the results of imaging investigations are integrated with all data available to the physician, including clinical examinations, laboratory investigations, and past medical history, to guide his or her decisions. Rarely is a single imaging test interpreted in isolation, and rarely is an intervention decided on as a result of that single test. The appropriateness of applying criteria designed for evaluating drug interventions to imaging studies has been elegantly questioned by Højgaard (7), who provides the context for the poor correlations found between imaging HTAs. The failure to involve the broader imaging community in the HTA process and in the design of appropriate standards is an indictment of that process.
There is no doubt that imaging technologies should be evaluated for effectiveness. This is currently achieved by the peer review process, which is the standard of evidence required by our community. The imposition of external controls by groups unfamiliar with our technologies and unprepared to engage the community is a threat to patient care and to the appropriate dissemination of medically important tests, as evidenced by the Ontario example. This is a challenge that we must accept and use to develop our own criteria for success.