Letters to the Editor

Evidence-Based Assessment of PET in Germany

Jos Kleijnen, Marie Westwood, Robert Wolff, Penny Whiting, Heike Raatz and Regina Kunz
Journal of Nuclear Medicine July 2012, 53 (7) 1166-1167; DOI: https://doi.org/10.2967/jnumed.112.106450

TO THE EDITOR: In his Invited Perspective, Professor Wolfgang Weber calls for biostatisticians and imagers to “reflect on one’s own deficiencies” in order to make progress in the evaluation of evidence about imaging (1). Weber points out several deficiencies on the side of the biostatisticians but neglects to examine any deficiencies in his own perspective. In our view, one barrier to communication between clinicians and methodologists is misunderstanding what the other side is doing, and we would like to address some of these apparent misunderstandings.

Weber points out that the conclusions of one agency, the German Institute for Quality and Efficiency in Health Care (IQWiG), on the use of PET/CT in various malignant diseases conflict with clinical practice in the United States and Europe. He argues that use of the quality assessment of diagnostic accuracy studies (QUADAS) instrument leads to the wrong conclusion that there are insufficient data to determine the diagnostic accuracy of 18F-FDG PET. In particular, Weber argues that health technology assessment agencies, or the reviewers commissioned by them, “do not judge the [clinical] content of the reviewed publications but rather assess their quality solely by formal criteria as described by QUADAS.” This remark could be considered defamatory, and, as reviewers for IQWiG, we hope it was not meant as it comes across. Weber is well aware that both IQWiG and its external reviewers receive input from clinicians who advise them on every single project. Indeed, guidance on the use of both QUADAS and its successor, QUADAS-2, requires reviewers to consider the relevance of criteria to the clinical topic and to provide topic-specific criteria where needed.

As authors of both QUADAS and QUADAS-2 (QUADAS-2 replaced the original QUADAS last year), we are also well aware that the QUADAS instrument has limitations, as indeed do all risk-of-bias tools (2,3). Some limitations of QUADAS have been addressed in QUADAS-2. Other limitations, including most of those mentioned by Weber, are clearly described in the QUADAS publication, and advice is given to reviewers on how to handle them; for each item, examples are given of situations in which the item does not apply. Weber also makes plainly wrong statements about the development of QUADAS. The experts who participated in the Delphi procedure are not “anonymous” but are clearly named and acknowledged in the QUADAS paper (2). Nor was the development of QUADAS dependent solely on expert opinion: evaluations of existing tools and of the empiric evidence on sources of bias and variation in diagnostic accuracy research were performed before, and informed, the Delphi process (4,5).
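
To illustrate what such topic-specific tailoring looks like in practice, the following Python sketch shows, in purely schematic form, how a review team might record a QUADAS-2 assessment for one included study in a PET accuracy review: the four standard domains are retained, but the index-test signaling questions are adapted to the imaging question. The data structure, the specific questions, and the summary rule are an illustration only, not part of the QUADAS-2 tool itself.

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Domain:
    """One QUADAS-2 domain as a review team might record it for a single study."""
    name: str
    signaling_questions: Dict[str, str]           # answer: "yes" / "no" / "unclear"
    risk_of_bias: str                             # "low" / "high" / "unclear"
    applicability_concern: Optional[str] = None   # not rated for flow and timing

assessment = [
    Domain(
        "Patient selection",
        {"Was a consecutive or random sample of patients enrolled?": "yes",
         "Did the study avoid inappropriate exclusions?": "unclear"},
        risk_of_bias="unclear",
        applicability_concern="low",
    ),
    Domain(
        "Index test",
        # Hypothetical topic-specific questions a PET accuracy review might add:
        {"Were PET images interpreted without knowledge of histology?": "yes",
         "Was the PET acquisition protocol prespecified and reported?": "no"},
        risk_of_bias="high",
        applicability_concern="low",
    ),
    Domain(
        "Reference standard",
        {"Is the reference standard likely to correctly classify the target condition?": "yes"},
        risk_of_bias="low",
        applicability_concern="low",
    ),
    Domain(
        "Flow and timing",
        {"Was there an appropriate interval between index test and reference standard?": "yes"},
        risk_of_bias="low",
    ),
]

# One common, review-specific summary rule (a judgment for the review team,
# not mandated by QUADAS-2): flag the study if any domain is at high risk of bias.
overall = "at risk of bias" if any(d.risk_of_bias == "high" for d in assessment) else "low/unclear risk"
print(overall)  # -> "at risk of bias"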

As Weber correctly points out, many different study designs can be relevant to particular diagnostic questions, and IQWiG allowed a range of study types to be considered in evaluating PET. Although some QUADAS items may be applicable to other types of diagnostic studies, it is clearly a tool intended for accuracy studies. Diagnostics is not the only field in which clinical circumstances make the perfect study (in the sense of one with low risk of bias) difficult or impossible to design. Some therapeutic interventions cannot be evaluated in a double-blind fashion, or even in a randomized trial, and health technology assessment agencies are well aware of these real-life limitations. Using the “best available evidence” is a pragmatic approach in such situations. The IQWiG assessments aimed to use, where available, both randomized and controlled observational studies assessing the benefits of PET and, in addition, searched for diagnostic accuracy studies and studies addressing prognosis.

Weber argues that the lack of evidence on the clinical benefit of PET and PET/CT derives from the lack of a requirement for formal assessment at the time of introduction. Although this may be true, it does not invalidate the case for formal evaluation. Long-established diagnostic devices have been shown to be ineffective, or even harmful to the patient’s overall management, when finally tested in randomized clinical trials (6). The same holds for PET: in at least one application, a randomized trial found no patient-relevant benefit (7).

Few countries can still afford everything that is available in medicine. Under budget constraints, tough decisions must be made about what offers the best value for money. Whenever a choice is made to use one technology, something else will inevitably be displaced. Health technology assessment agencies have the unenviable job of supporting the decision-making process that leads to such choices, and good-quality evidence is the crucial element in making these decisions well informed. Clinicians (or imagers, in the case of PET) have the equally unenviable job of helping to generate that evidence. We fully agree with Weber’s goal of fruitful collaboration; Wolfgang, the door is open for your help in our next imaging project!

Footnotes

  • Published online May 18, 2012.

  • © 2012 by the Society of Nuclear Medicine, Inc.

REFERENCES

  1. Weber WA. Is there evidence for evidence-based medical imaging? J Nucl Med. 2011;52(suppl 2):74S–76S.
  2. Whiting P, Rutjes AW, Reitsma JB, Bossuyt PM, Kleijnen J. The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol. 2003;3:25.
  3. Whiting PF, Rutjes AW, Westwood ME, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155:529–536.
  4. Whiting P, Rutjes AWS, Reitsma JB, Glas AS, Bossuyt PMM, Kleijnen J. Sources of variation and bias in studies of diagnostic accuracy: a systematic review. Ann Intern Med. 2004;140:189–202.
  5. Whiting P, Rutjes AWS, Dinnes J, Reitsma JB, Bossuyt PMM, Kleijnen J. A systematic review finds that diagnostic reviews fail to incorporate quality despite available tools. J Clin Epidemiol. 2005;58:1–12.
  6. Harvey S, Harrison DA, Singer M, et al. Assessment of the clinical effectiveness of pulmonary artery catheters in management of patients in intensive care (PAC-Man): a randomised controlled trial. Lancet. 2005;366:472–477.
  7. Tsai CS, Lai CH, Chang TC, et al. A prospective randomized trial to study the impact of pretreatment FDG-PET for cervical cancer patients with MRI-detected positive pelvic but negative para-aortic lymphadenopathy. Int J Radiat Oncol Biol Phys. 2010;76:477–484.