Research Article | The State of the Art
Open Access

Artificial Intelligence in Nuclear Medicine: Opportunities, Challenges, and Responsibilities Toward a Trustworthy Ecosystem

Babak Saboury, Tyler Bradshaw, Ronald Boellaard, Irène Buvat, Joyita Dutta, Mathieu Hatt, Abhinav K. Jha, Quanzheng Li, Chi Liu, Helena McMeekin, Michael A. Morris, Peter J.H. Scott, Eliot Siegel, John J. Sunderland, Neeta Pandit-Taskar, Richard L. Wahl, Sven Zuehlsdorff and Arman Rahmim
Journal of Nuclear Medicine February 2023, 64 (2) 188-196; DOI: https://doi.org/10.2967/jnumed.121.263703
Author affiliations:

  • Babak Saboury: Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
  • Tyler Bradshaw: Department of Radiology, University of Wisconsin–Madison, Madison, Wisconsin
  • Ronald Boellaard: Department of Radiology and Nuclear Medicine, Cancer Centre Amsterdam, Amsterdam University Medical Centres, Amsterdam, The Netherlands
  • Irène Buvat: Institut Curie, Université PSL, INSERM, Université Paris–Saclay, Orsay, France
  • Joyita Dutta: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, Massachusetts
  • Mathieu Hatt: LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
  • Abhinav K. Jha: Department of Biomedical Engineering and Mallinckrodt Institute of Radiology, Washington University, St. Louis, Missouri
  • Quanzheng Li: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
  • Chi Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
  • Helena McMeekin: Department of Clinical Physics, Barts Health NHS Trust, London, United Kingdom
  • Michael A. Morris: Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
  • Peter J.H. Scott: Department of Radiology, University of Michigan Medical School, Ann Arbor, Michigan
  • Eliot Siegel: Department of Radiology and Nuclear Medicine, University of Maryland Medical Center, Baltimore, Maryland
  • John J. Sunderland: Departments of Radiology and Physics, University of Iowa, Iowa City, Iowa
  • Neeta Pandit-Taskar: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York
  • Richard L. Wahl: Mallinckrodt Institute of Radiology, Washington University, St. Louis, Missouri
  • Sven Zuehlsdorff: Siemens Medical Solutions USA, Inc., Hoffman Estates, Illinois
  • Arman Rahmim: Departments of Radiology and Physics, University of British Columbia, Vancouver, British Columbia, Canada

Abstract

Trustworthiness is a core tenet of medicine. The patient–physician relationship is evolving from a dyad to a broader ecosystem of health care. With the emergence of artificial intelligence (AI) in medicine, the elements of trust must be revisited. We envision a road map for the establishment of trustworthy AI ecosystems in nuclear medicine. In this report, AI is contextualized in the history of technologic revolutions. Opportunities for AI applications in nuclear medicine related to diagnosis, therapy, and workflow efficiency, as well as emerging challenges and critical responsibilities, are discussed. Establishing and maintaining leadership in AI require a concerted effort to promote the rational and safe deployment of this innovative technology by engaging patients, nuclear medicine physicians, scientists, technologists, and referring providers, among other stakeholders, while protecting our patients and society. This strategic plan was prepared by the AI task force of the Society of Nuclear Medicine and Molecular Imaging.

  • artificial intelligence
  • trustworthy
  • nuclear medicine
  • ecosystem

Medicine uses science, practical wisdom, and the best available tools in the art of compassionate care. The necessity of dealing with maladies has motivated physicians to incorporate inventions into medical practice to decrease or eliminate patient suffering. During the past two centuries, along with technologic revolutions, new medical devices have become the standard of care, from the stethoscope and electrocardiogram to cross-sectional imaging (Fig. 1). The stethoscope, which arose out of the first industrial revolution, is so pervasive that it has become the symbol of health-care professionals today. Compared with other medical equipment, it has the highest positive impact on the perceived trustworthiness of the practitioner seen with it (1).

FIGURE 1.

New technologies in medicine have coincided with each phase of the industrial revolution. The first industrial revolution brought mechanization, with the mechanical loom invented in 1784; the stethoscope was invented by René Laennec in 1816 and improved by Arthur Leared (1851) and George Philip Cammann (1852). The second industrial revolution was driven by the advent of electricity, bringing the commercial light bulb (patented by Thomas Edison in 1879), the telegraph, and the modern factory production line; the electrocardiogram was invented by Augustus Waller in 1887 by projecting the heartbeat captured by a Lippmann capillary electrometer onto a photographic plate, allowing the heartbeat to be recorded in real time, and Willem Einthoven (1895) assigned the letters P, Q, R, S, and T to the theoretic waveform. The third industrial revolution, known as the digital revolution, brought computing technology and refined it into the personal computer; in the 1960s, Kuhl and Edwards developed cross-sectional emission tomography and implemented it in the SPECT scanner, an approach later applied to x-ray CT by Sir Godfrey Hounsfield and Allan Cormack in 1972. The fourth industrial revolution is that of the modern day, with big data, hyperconnectivity, and neural networks, enabling self-driving cars and the development of AI in nuclear medicine. CNN = convolutional neural network; IoT = Internet of things.

Nuclear medicine has always embraced technologic progress. With the emergence of artificial intelligence (AI), the field is again poised to experience a modern renaissance, similar to the one that followed David Kuhl's and Roy Edwards' groundbreaking work in the 1960s. By applying the concepts of the Radon transform through newly available computing technology, they introduced volumetric cross-sectional medical imaging with SPECT, which was subsequently followed by the development of x-ray–based CT and PET (2).

The past decades have seen tremendous advances in information technology and its integration into the practice of medicine. The application of AI to medicine marks the start of a new era. Such transformative technologies affect all facets of society, yielding advances in space exploration, defense, energy, industrial processes, and finance, and even in cartography, transportation, and food service, among others.

The introduction of AI into clinical nuclear medicine practice poses both opportunities and challenges. The full benefits of this new technology will continue to unfold, and the nuclear medicine community must be actively involved to ensure safe and effective implementation. Establishing and maintaining AI leadership in nuclear medicine requires a comprehensive strategy: promoting the application of innovative technology while protecting our patients and society, executing our professional and ethical obligations, and upholding our values. One potential advantage of deploying AI techniques is that nuclear medicine methodologies may become more widely available, increasing patients' access to high-quality nuclear medicine procedures.

Nuclear medicine professional societies such as the Society of Nuclear Medicine and Molecular Imaging (SNMMI) and others provide leadership to ensure that we recognize the benefits of technologic advances in a manner consistent with our core values, medical ethics, and society’s best interests. In July 2020, the SNMMI formed an AI task force by bringing together experts in nuclear medicine and AI, including physicists, computational imaging scientists, physicians, statisticians, and representatives from industry and regulatory agencies. This article serves as both a strategic plan and a summary of the deliberations of the SNMMI AI task force over the past year in conjunction with other focused topics, including best practices for development (3) and evaluation (4) (Table 1).

NOTEWORTHY

  • An appropriate AI ecosystem can contribute to enhancing the trustworthiness of AI tools throughout their life cycle through close collaboration among stakeholders.

  • A trustworthy medical AI system depends on the trustworthiness of the AI system itself, as well as the trustworthiness of all people and processes that are part of the system’s life cycle.

  • By encouraging the establishment of trustworthy AI in nuclear medicine, SNMMI aims to decrease health disparity, increase health system efficiency, and contribute to the improved overall health of society using AI applications in the practice of nuclear medicine.

TABLE 1.

Opportunities and Challenges Ahead for Nuclear Medicine Toward Achieving Trustworthy AI

OPPORTUNITIES

Quantitative Imaging and Process Improvement

Nuclear medicine is evolving toward even better image quality and more accurate and precise quantification in the precision medicine era, most recently in the paradigm of theranostics.

Diagnostic Imaging

AI techniques in the patient-to-image subdomain improve acquisition, and models in the image-to-patient subdomain improve decision making for interventions on patients (Fig. 2) (3).

FIGURE 2.

From patient to image creation and back to physician, there are opportunities for AI systems to act at nearly any step in medical imaging pipeline to improve our ability to care for patients and understand disease (3).

Image generation considerations are elaborated in the supplemental section “Opportunities,” part A (supplemental materials are available at http://jnm.snmjournals.org (5–40)); examples include improved image reconstruction from raw data (list-mode, sinogram); data corrections, including for attenuation, scatter, and motion; and postreconstruction image enhancement, among others (41–43). These enhancements could benefit PET and SPECT in clinical use today and could make multiple–time-point acquisitions and PET/MRI more feasible.
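As a deliberately simplified illustration of postreconstruction enhancement, the sketch below simulates a low-count acquisition and applies a classical smoothing filter as a stand-in for a trained denoising network. All values are synthetic; a real AI enhancer would learn the low-count-to-full-count mapping from paired training data rather than use a fixed filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Simulated "full-count" ground truth: smooth background plus a hot lesion.
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 10.0
truth[30:34, 30:34] = 50.0
truth = gaussian_filter(truth, sigma=2.0)

# Low-count acquisition modeled as Poisson noise around the true activity.
low_count = rng.poisson(truth).astype(float)

def rmse(a, b):
    """Root-mean-square error against the ground truth."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Stand-in "enhancer": a trained network would replace this fixed filter.
denoised = gaussian_filter(low_count, sigma=1.5)
```

In practice, task-based evaluation (not just RMSE) would be needed to confirm that such enhancement preserves quantification.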

Specific opportunities in image analysis are elaborated in the supplemental section “Opportunities,” part B. A few examples include image registration, organ and lesion segmentation, biomarker measurements and multiomics integration, and kinetic modeling (44).

Opportunities for clinical use of AI in nuclear medicine practice were extensively reviewed recently, including brain imaging (45), head and neck imaging (46), lung imaging (47), cardiac imaging (48,49), vascular imaging (49,50), bone imaging (51), prostate imaging (52), and imaging of lymphoma (53). Neuroendocrine tumors, other cancers (including gastrointestinal, pancreatic, hepatobiliary, sarcoma, and hereditary), infection, and inflammation are some examples of additional areas requiring further consideration.

Emerging Nuclear Imaging Approaches

New developments are also emerging, such as total-body PET (54), which presents unique data and computational challenges. Another potential use of AI is to separate multichannel data from single-session, multiisotope dynamic PET imaging. This advancement could be valuable for extracting greater phenotyping information in the evaluation of tumor heterogeneity (55).

Radiopharmaceutical Therapies (RPTs)

There are several areas in which AI is expected to significantly impact RPTs.

AI-Driven Theranostic Drug Discovery and Labeling

The use of AI for molecular discovery has been explored to select the most promising leads to design suitable theranostics for the target in question. For example, machine learning models could be trained using parameters from past theranostic successes and failures (e.g., partition coefficient, dissociation constant, and binding potential) to establish which best predict a given outcome (e.g., specific binding, blood–brain barrier penetration, and tumor-to-muscle ratio). New AI approaches are revolutionizing our understanding of protein–ligand interactions (56). New hit molecules (e.g., from the literature or high-throughput screens) can then serve as the test set in such AI models to speed up hit-to-lead optimization. Subsequently, with lead molecules identified, AI could also predict optimal labeling precursors and synthesis routes to facilitate fast and efficient development of theranostic agents (57,58). By defining parameters from existing synthetic datasets (e.g., solvents, additives, functional groups, and nuclear magnetic resonance shifts), models can be trained to predict radiochemical yield for a given substrate using different precursors and radiosynthetic methods. Subjecting new lead candidates as test sets in the models will enable rapid identification of appropriate precursors and labeling strategies for new theranostics, minimizing resource-intensive manual synthetic development.
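A minimal sketch of such a yield-prediction model is shown below, using entirely synthetic reaction parameters and a generic random-forest regressor; the feature set, value ranges, and functional form are illustrative assumptions, not a validated descriptor set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical training set: each row is one past radiolabeling reaction,
# described by simple reaction parameters (all names and values illustrative).
n = 300
X = np.column_stack([
    rng.uniform(0, 5, n),      # precursor amount (umol)
    rng.uniform(20, 150, n),   # reaction temperature (C)
    rng.uniform(1, 60, n),     # reaction time (min)
    rng.integers(0, 3, n),     # solvent class (encoded)
])
# Synthetic "radiochemical yield" with a made-up dependence on the inputs.
y = 20 + 5 * X[:, 0] + 0.2 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# New lead candidates become the test set: predicted yields rank candidate
# precursor/condition combinations before any bench work is committed.
r2 = model.score(X_te, y_te)
```

Here the held-out R² serves only as a sanity check; a real model would be trained on curated experimental reaction databases.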

Precision Dosimetry

The field of radiopharmaceutical dosimetry is progressing rapidly. After administration of radiopharmaceuticals, dynamic and complex pharmacokinetics results in time-variable biodistribution. Interaction of ionizing particles arising from the injected agent with the target and normal tissue results in energy deposition. Quantification of this deposited energy and its biologic effect is the essence of dosimetry, with opportunities to link the deposited energy to its biologic effect on diseased and normal tissues (Fig. 3).

FIGURE 3.

Dosimetry as major frontier supported by AI toward personalization of therapy: various contributions by AI to image acquisition, generation, and processing, followed by automated dose calculations, can enable routine deployment and clinical decision support. TIAM = Time Integrated Activity Map.

In dosimetry, SPECT serves as a posttreatment quantitative measuring device. One challenge is that patients find it difficult to remain flat and motionless on the scanning table for the required time. AI-based image reconstruction or enhancement methods can reduce the required SPECT scanning time while maintaining or improving quantitative accuracy (59) and can enable attenuation correction in SPECT (60).

Multiple steps in dosimetry potentially can be enhanced by AI methods, including multimodality and multiple–time-point image registration, segmentation of organs and tumors, time–activity curve fitting, time-integrated activity estimation, conversion of time-integrated activity into absorbed dose, linking macroscale dosimetry to microscale dosimetry, and arriving at comprehensive patient dose profiling (61).
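Two of these steps, time–activity curve fitting and time-integrated activity estimation, can be sketched in a few lines. The sketch assumes a mono-exponential washout (a common simplification) and uses illustrative activity measurements and a placeholder S value; real dosimetry would use validated kinetic models and organ-specific dose factors.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical organ time-activity measurements from three SPECT time points
# (hours after injection, activity in MBq); values are illustrative only.
t = np.array([4.0, 24.0, 72.0])
a = np.array([80.0, 55.0, 18.0])

def mono_exp(t, a0, lam):
    # Effective clearance: physical decay plus biologic washout.
    return a0 * np.exp(-lam * t)

(a0, lam), _ = curve_fit(mono_exp, t, a, p0=(100.0, 0.02))

# Time-integrated activity (MBq*h): closed-form integral of the fitted curve.
tia = a0 / lam

# Absorbed dose would follow as TIA times an S value (Gy per MBq*h);
# the constant below is a placeholder for illustration, not a real S value.
S_VALUE = 1.5e-4
absorbed_dose_gy = tia * S_VALUE
```

AI methods could replace or augment several pieces of this pipeline, for example by segmenting organs automatically or by estimating kinetics from fewer time points.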

Predictive Dosimetry and Digital Twins

Existing models can perform dosimetry before (e.g., 131I-metaiodobenzylguanidine) or after treatment. Personalized RPTs require predictive dosimetry for optimal dose prescription, in which AI can play a role. Pretherapy (static or dynamic) PET scans could be used to model radiopharmaceutical pharmacokinetics and absorbed doses in tumors and normal organs. Furthermore, intratherapy scans (e.g., a single–time-point SPECT scan in the first cycle of RPT) could additionally be used to better anticipate and adjust doses in subsequent cycles.

Overall, a vision of the future involves accurate and rapid evaluation of different RPT approaches (e.g., varying the injected radioactivity dose and rate, site of injection, and injection interval and coupling with other therapies) using the concept of the theranostic digital twin. The theranostic digital twin can aid nuclear medicine physicians in complex decision-making processes. It enables experimentation (in the digital world) with different treatment scenarios, thus optimizing delivered therapies.
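A deliberately toy version of such scenario exploration is sketched below. It assumes absorbed dose scales linearly with injected activity and uses placeholder per-organ dose coefficients; the 23-Gy kidney constraint is a commonly cited number, used here purely as an illustrative limit. A real theranostic digital twin would rest on full patient-specific pharmacokinetic models.

```python
import numpy as np

# Toy "digital twin": per-organ dose coefficients (Gy per GBq per cycle)
# notionally fitted from a pretherapy scan. All numbers are placeholders.
dose_per_gbq = {"tumor": 5.0, "kidney": 0.9, "marrow": 0.15}
KIDNEY_LIMIT_GY = 23.0   # illustrative cumulative kidney constraint
N_CYCLES = 4

def plan(activities_gbq):
    """Return (best_activity, predicted_doses): the candidate injected
    activity maximizing tumor dose while respecting the kidney limit."""
    best = None
    for act in activities_gbq:
        doses = {o: c * act * N_CYCLES for o, c in dose_per_gbq.items()}
        if doses["kidney"] <= KIDNEY_LIMIT_GY:
            if best is None or doses["tumor"] > best[1]["tumor"]:
                best = (act, doses)
    return best

# Experiment "in the digital world" with a grid of candidate activities.
activity, doses = plan(np.arange(2.0, 10.1, 0.5))
```

The same loop could sweep injection rate, cycle interval, or combination-therapy scenarios, which is the essence of digital-twin experimentation.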

The opportunities discussed in the RPT section above are further described in the supplemental section “Opportunities,” part C.

Clinical Workflow: Increase Throughput While Maintaining Excellence

AI may impact operations in nuclear medicine, such as patient scheduling and resource use (62), predictive maintenance of devices to minimize unexpected downtimes, monitoring of quality control measurement results to discover hidden patterns and indicate potential for improvement, and monitoring of the performance of devices in real time to capture errors and detect aberrancies (62,63). These processes will make the practice of nuclear medicine safer, more reliable, and more valuable.
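In its simplest form, monitoring quality control measurements for hidden patterns can be a statistical control chart. The sketch below flags days whose values drift from a baseline period using a z-score rule on synthetic data; this is an illustrative stand-in for a learned anomaly detector.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical daily QC measurements (e.g., a uniformity metric), with a
# step change injected in the last days to mimic a developing detector fault.
qc = rng.normal(100.0, 1.0, 60)
qc[-5:] += 8.0

def flag_outliers(values, baseline_days=30, z_threshold=3.0):
    """Flag days whose z-score against the baseline period exceeds threshold."""
    mu = values[:baseline_days].mean()
    sd = values[:baseline_days].std()
    z = (values - mu) / sd
    return np.flatnonzero(np.abs(z) > z_threshold)

flagged = flag_outliers(qc)
```

A learned model could extend this to multivariate QC data and slower drifts that a fixed threshold would miss.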

Triage of urgent findings and augmentation of time-consuming tasks could improve report turnaround time for the most critical cases and increase the efficiency of nuclear medicine physicians, allowing them to care for patients more effectively. To make AI systems in nuclear medicine financially sustainable, new Current Procedural Terminology codes should be developed and appropriate relative value units assigned for the technical and professional components. Increased efficiencies in interpretation (more cases read accurately per unit time) may also allow AI to be deployed into clinical workflows in an overall cost-effective manner.

AI ECOSYSTEM

Actualization of Opportunities and Contextualization of Challenges

Although early nuclear medicine AI systems are already emerging, many opportunities remain for AI technology to augment precision patient care and practice efficiency. The environment in which AI development, evaluation, implementation, and dissemination occur needs a sustainable ecosystem to enable progress while appropriately mitigating stakeholders' concerns.

The total life cycle of AI systems, from concept to acquisition of training data, model development and prototyping, production testing, validation and evaluation, implementation and deployment, and postdeployment surveillance, occurs within a framework that we call the AI ecosystem (Fig. 4). An appropriate AI ecosystem can enhance the trustworthiness of AI tools throughout their life cycle through close collaboration among stakeholders.

FIGURE 4.

AI ecosystem is a complex environment in which AI system development occurs. The ecosystem connects stakeholders from industry to regulatory agencies, physicians, patients, health systems, and payers. Proposed SNMMI AI Center of Excellence can serve as an honest broker to empower the AI ecosystem from a neutral standpoint with focus on solutions. ACE = SNMMI AI Center of Excellence; RIS = radiology information system.

CHALLENGES FOR DEVELOPMENT, VALIDATION, DEPLOYMENT, AND IMPLEMENTATION

Development of AI Applications and Medical Devices

Five challenges that should be addressed include availability of curated data, optimization of network architecture, measurement and communication of uncertainty, identification of clinically impactful use cases, and improvements in team science approaches (supplemental section “Development Challenges”).

Evaluation (Verification of Performance)

Appropriate evaluation of AI software is a broad and active area of investigation, and establishing clear and consistent guidelines for performance profiling remains challenging. Most current verification studies evaluate AI methods with metrics that are agnostic to performance on clinical tasks (64). Although such evaluation may help demonstrate promise, further testing on specific clinical tasks is needed before the algorithms can be implemented. Failure-mode profiling is among the most important challenges (supplemental section “Evaluation Challenges”).
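Task-based evaluation means scoring an algorithm on the clinical task itself, such as lesion detection, rather than on generic image fidelity. As a minimal sketch of one such task metric, the detection AUC can be computed as a Mann–Whitney statistic over reader or model scores; the scores below are illustrative.

```python
import numpy as np

def detection_auc(scores_present, scores_absent):
    """Area under the ROC curve for a binary detection task, computed as
    the Mann-Whitney statistic: the probability that a lesion-present case
    scores higher than a lesion-absent case (ties count as half)."""
    sp = np.asarray(scores_present, dtype=float)[:, None]
    sa = np.asarray(scores_absent, dtype=float)[None, :]
    return float(np.mean((sp > sa) + 0.5 * (sp == sa)))

# Illustrative confidence scores on cases with and without a lesion.
auc = detection_auc([0.9, 0.8, 0.7, 0.6], [0.5, 0.65, 0.3, 0.2])
```

An image with excellent RMSE against a reference can still yield a poor detection AUC, which is why task-based figures of merit matter.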

Ethical, Regulatory, and Legal Ambiguities

Major ethical concerns include informed consent for data use, replication of historical bias and unfairness embedded in training data, unintended consequences of AI device agency, the inherent opaqueness of some algorithms, concerns about the impact of AI on health-care disparities, and trustworthiness (supplemental section “Ethical, Regulatory, and Legal Ambiguities”). AI in nuclear medicine has limited legal precedent (65).

Implementation of Clinical AI Solutions and Postdeployment Monitoring

The lack of a platform integrating AI applications into the nuclear medicine workflow is among the most critical implementation challenges (66). Barriers to dissemination can be categorized at the individual level (health-care providers), the institutional level (organizational culture), and the societal level (67). Deployment is not the end of the implementation process (supplemental section “Implementation of Clinical AI Solutions and Post-Deployment Monitoring”).

TRUST AND TRUSTWORTHINESS

In medicine, trust is the essence, not a pleasance.

Successful solutions to the above-mentioned challenges are necessary but not sufficient for the sustainability of AI ecosystems in medicine. Well-developed and validated AI devices with a supportive regulatory context, appropriate reimbursement, and successful primary implementation may still fail if physicians, patients, and society lose trust because of a lack of transparency or of other critical elements of trustworthiness, such as perceived inattention to health disparity or racial injustice. In a recent survey, Martinho et al. (68) found significant mistrust of AI systems and the AI industry among health-care providers, even as respondents recognized the importance and benefits of this new technology. Respondents also emphasized the importance of ethical use and the need for physician-in-the-loop interactions with AI systems, among other factors. There is a need for a comprehensive analysis of the AI ecosystem to define and clarify the core elements of trustworthiness in order to realize the benefits of AI in clinical practice.

RESPONSIBILITIES: TOWARD TRUSTWORTHY AI

When the safety, well-being, and rights of our patients are at stake, SNMMI should be committed to support principles that are future-proof and innovation-friendly.

The willingness of physicians and patients to depend on a specific tool in a risky situation is the measure of the trustworthiness of that tool (69). In the case of AI systems, that willingness is based on a set of specific beliefs about the reliability, predictability, and robustness of the tool, as well as the integrity, competency, and benevolence of the people or processes involved in the AI system’s life cycle (development, evaluation/validation, deployment/implementation, and use).

A trustworthy medical AI system depends on the trustworthiness of the AI system itself, as well as the trustworthiness of all people and processes that are part of the system’s life cycle (Fig. 5).

FIGURE 5.

Twelve core concepts critical to trustworthy AI ecosystems.

Trustworthy medical AI systems require a societal and professional commitment to the ethical AI framework, which includes 4 principles rooted in the fundamentals of medical ethics: respect for patients’ and physicians’ autonomy, prevention of harm, beneficence to maximize the well-being of patients and society, and fairness. These principles should be observed in various phases of the AI system life cycle.

In what follows, we outline 12 key elements that need to be consistently present in AI systems.

12 Key Elements of Trustworthy AI Systems

Human Agency

AI systems should empower physicians and patients, allowing them to make better-informed decisions and fostering their autonomy (70). The effects of AI algorithms on human independence should be considered. The extent to which AI is involved in patient care, and the extent of physician oversight, should be clear to patients and physicians. There must be checks to avoid automation bias, the propensity of humans to value and overly rely on observations and analyses from computers over those of human beings (71).

Oversight

There must be sufficient oversight of AI decision making, which can be achieved through human-in-the-loop and human-in-command approaches (72). AI systems that are involved in higher-risk tasks (e.g., those that drive clinical management and diagnose or treat disease) must be closely monitored through postmarket surveillance by independent professional credentialing organizations analogous to certification and recertification of medical professionals. Peer review processes in practices can be adapted to consider the combined physician–AI decision-making process.

Technical Robustness

AI systems must perform in a dependable manner (sufficient accuracy, reliability, and reproducibility) (73). This performance should be resilient to the breadth of clinical circumstances related to their prescribed use (generalizability). The AI tool should explicitly convey a degree of certainty about its output (confidence score) and have a mechanism in place to monitor the accuracy of outputs as part of a continuous quality assurance program. Failure modes of the algorithm should be well-characterized, documented, and understood by users.
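One simple way to expose such a confidence score, shown here with illustrative numbers, is ensemble disagreement: report the mean prediction together with the spread across ensemble members, and flag high-spread or borderline cases for human review.

```python
import numpy as np

def ensemble_confidence(member_probs):
    """Mean predicted probability plus a simple confidence signal:
    the spread (standard deviation) across ensemble members."""
    p = np.asarray(member_probs, dtype=float)
    return float(p.mean()), float(p.std())

# Five hypothetical ensemble members scoring the same study.
mean_p, spread = ensemble_confidence([0.92, 0.88, 0.95, 0.90, 0.91])

# Flag for review if the members disagree or the call is borderline;
# thresholds here are illustrative and would be set during validation.
needs_review = spread > 0.10 or 0.4 < mean_p < 0.6
```

Logging these scores over time also supports the continuous quality assurance and failure-mode documentation described above.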

Safety and Accountability

According to the concepts of safety-critical systems (74), AI systems should prioritize safety above other design considerations (e.g., potential gains in efficiency, economics, or performance). When adverse events occur, mechanisms should be in place for ensuring accountability and redress. Vendors must be accountable for the claims made of their AI systems. Physicians must be accountable for the way in which AI systems are implemented and used in the care of patients. The ability to independently audit the root cause of a failure in an AI system is important. Protection must be provided for individuals or groups reporting legitimate concerns in accordance with the principles of risk management.

Security and Data Governance

AI systems must include mechanisms to minimize harm, as well as to prevent it whenever possible. They must comply with all required cybersecurity standards. There should be an assessment of vulnerabilities such as data poisoning, model evasion, and model inversion. Assurances should be made to mitigate potential vulnerabilities and avoid misuse, inappropriate use, or malicious use (such as a deep fake) (75).

Predetermined Change Control Plan

AI tools can be highly iterative and adaptive, which may lead to rapid continual product improvement. The plan should include types of anticipated modifications (software-as-a-medical-device prespecifications). There must be a clear and well-documented methodology (algorithm change protocol) to evaluate the robustness and safety of the updated AI system. The algorithm change protocol should include guidelines for data management, retraining, performance evaluation, and update procedures. Vendors should maintain a culture of quality and organizational excellence.

Diversity, Bias Awareness, Nondiscrimination, and Fairness

AI systems can be affected by flawed input data (incomplete or historically biased data), insufficiencies in algorithm design, or suboptimal performance-assessment or monitoring strategies. These issues may introduce biases that lead to unintended prejudice and harm to patients. Discriminatory bias should be removed from AI systems during the development phase when possible (67).

AI system performance should be evaluated in a wide spectrum of diseases and in patients with a particular condition regardless of extraneous personal characteristics. No particular group of patients should be systematically excluded from AI device development. Patients who are underrepresented or have rare diseases should not be excluded from AI system development or evaluation; although such datasets will be sparse, they could most likely be used in the evaluation of AI methods developed in larger populations (to test generalizability). Appropriate validation testing on standardized sets that incorporate patient diversity, including rare or unusual presentations of disease, is critical to evaluate the presence of bias regardless of the training data used (76).
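A minimal bias audit consistent with this recommendation computes performance separately for each subgroup. The sketch below uses tiny illustrative labels and predictions; in practice it would be run on large, standardized validation sets, with confidence intervals on each subgroup estimate.

```python
import numpy as np

def subgroup_sensitivity(y_true, y_pred, groups):
    """Sensitivity (true-positive rate) computed separately per subgroup.
    Large gaps between subgroups warrant investigation for bias."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    out = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        out[str(g)] = float(y_pred[positives].mean()) if positives.any() else float("nan")
    return out

# Illustrative labels and predictions for two demographic subgroups A and B.
sens = subgroup_sensitivity(
    y_true=[1, 1, 1, 1, 1, 1, 0, 0],
    y_pred=[1, 1, 1, 0, 1, 0, 0, 1],
    groups=["A", "A", "A", "B", "B", "B", "A", "B"],
)
```

The same pattern extends to specificity, calibration, or any task-based figure of merit stratified by the characteristics of interest.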

AI systems should be user-centric and developed with awareness of the practical limitations of the physician work environment. Accessibility features should be provided to individuals with disabilities to the extent necessary, in accordance with universal design principles.

Stakeholder Participation

Throughout the life cycle of an AI system, all stakeholders who may directly or indirectly be affected should actively participate to help, advise, and oversee the developers and industry. Participation of patients, physicians, and all relevant providers, health-care systems, payors, regulatory agencies, and professional societies is imperative. This inclusive and transparent engagement is essential for a trustworthy AI ecosystem. Regular clinical feedback is needed to establish longer-term mechanisms for active engagement.

Transparency and Explainability

Vendors should openly communicate how an AI system is validated for the labeled claim (purpose, criteria, and limitations) by describing the clinical task for which the algorithm was evaluated; the composition of the patient population used for validation; the image acquisition, reconstruction, and analysis protocols; and the figure of merit used for the evaluation (4,73). Appropriate training material and disclaimers must be provided so that health-care professionals can use the system adequately. It should be clear which information is communicated by the AI system and which by a health-care professional. AI systems should incorporate mechanisms to log and review which data, AI model, or rules were used to generate a given output (auditability and traceability). The effect of the input data on the AI system’s output should be conveyed so that the relationship can be understood by physicians and, ideally, patients (explainability), providing a mechanism to critically evaluate and contest the AI system’s outputs. For diagnostic applications, the AI system should communicate its degree of confidence (uncertainty) together with its decision. To the extent possible, black-box AI systems without proper emphasis on transparency should be avoided in high-stakes tasks (77).
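The auditability and traceability requirement can be made concrete with a per-inference audit record. The sketch below is illustrative only (the field names and record format are our assumptions, not a standard): each output is logged with the model version, a hash of the input, and the reported confidence, so the provenance of any decision can later be reviewed or contested.

```python
# Hypothetical audit record supporting traceability: every inference logs
# which model produced which output from which input, with its confidence.
import hashlib
import json
import time


def audit_record(model_id: str, model_version: str,
                 input_bytes: bytes, output: str, confidence: float) -> str:
    """Serialize one inference event as a JSON audit-log entry."""
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
        "output": output,
        "confidence": confidence,  # uncertainty reported with each decision
    }
    return json.dumps(record, sort_keys=True)
```

Hashing the input rather than storing it keeps the log itself free of patient data while still allowing an auditor with access to the original study to confirm exactly what the model saw.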

Sustainability of Societal Well-Being

It is important to acknowledge that exposure to AI could negatively affect social relationships and attachment within the health-care system (social agency) (78). AI systems should be implemented in a manner that enhances the physician–patient relationship. AI systems should not interfere with human deliberation or degrade social interactions. The societal and environmental impact of an AI tool should be carefully considered to ensure sustainability. Health-care workers affected by the implementation of AI systems should be given an opportunity to provide feedback and contribute to the implementation plan. Professional societies and training programs should take steps to ensure that AI systems do not lead to deskilling of professionals, for example by providing opportunities for reskilling and upskilling. A new set of skills, including physician oversight of and interaction with AI tools, will evolve and must be refined.

Privacy

AI systems should have appropriate processes in place to maintain the security and privacy of patient data. The amount of personal data used should be minimized (data minimization). There should be a statement of the measures used to achieve privacy by design, such as encryption, pseudonymization, aggregation, and anonymization. Systems should align with established standards and protocols for data management and governance.
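Of the privacy-by-design measures listed above, pseudonymization is easy to illustrate. The sketch below (our example, not a prescribed implementation) replaces a patient identifier with a keyed HMAC: records for the same patient remain linkable, the mapping is not reversible without the key, and the key stays with the data controller.

```python
# Illustrative keyed pseudonymization: deterministic, non-reversible
# without the secret key, so linked records never expose the raw ID.
import hashlib
import hmac


def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Return a stable pseudonym for patient_id under secret_key."""
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()
```

A plain unkeyed hash would be weaker here, because identifier spaces such as medical record numbers are small enough to enumerate; the secret key is what prevents re-identification by brute force.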

Fairness and Supportive Context of Implementation

Early development efforts can pose greater risk to developers and consumers. To address liability concerns, other industries have run successful programs that encourage adoption of new technology while supporting consumer protection, as with vaccines and autonomous vehicles (65).

STRATEGIES FOR SUCCESS

Part 1: SNMMI Initiatives

In July 2022, SNMMI created an AI task force to strategically assess the emergence of AI in nuclear medicine (supplemental section “SNMMI Initiatives”). An important area of focus was the designation of working groups, such as the AI and dosimetry working group for predictive dosimetry and treatment planning.

Part 2: SNMMI Action Plan

The AI task force recommends the establishment of an SNMMI AI Center of Excellence to facilitate a sustainable AI ecosystem (supplemental section “SNMMI Action Plan”). A nuclear medicine imaging archive will address the need for meaningful data access. A coalition on trustworthy AI in medicine and society will address the need for an AI bill of rights (79).

Part 3: SNMMI Recommendations

Recommendations for the future are also provided in the supplemental section “SNMMI Recommendations.”

CONCLUSION

There are immense and exciting opportunities for AI to benefit the practice of nuclear medicine. Meanwhile, there are challenges that must and can be addressed head-on. As current challenges are addressed and new AI solutions emerge, SNMMI and the nuclear medicine community have the responsibility to ensure the trustworthiness of these tools in the care of patients.

We can all benefit from efforts to ensure fairness, inclusion, and lack of bias in the entire life cycle of AI algorithms in different settings.

There are 3 levels of facilitation that can support and enable the appropriate environment for trustworthy AI. First, our community must establish guidelines, such as those referenced in this article, to promote the natural development of trustworthy AI. Second, we can facilitate trustworthy AI through an SNMMI AI Center of Excellence. Third, we can drive trustworthy AI forward through active engagement and communicative action.

By encouraging the establishment of trustworthy AI in nuclear medicine, SNMMI aims to decrease health disparity, increase health system efficiency, and contribute to the improved overall health of society using AI applications in the practice of nuclear medicine.

DISCLOSURE

The views expressed in this article are those of the authors and do not necessarily reflect the views of the U.S. government, nor do they reflect any official recommendation or endorsement of the National Institutes of Health. Helena McMeekin is a part-time employee of Hermes Medical Solutions, Inc. Sven Zuehlsdorff is a full-time employee of Siemens Medical Solutions, Inc. No other potential conflict of interest relevant to this article was reported.

ACKNOWLEDGMENTS

We acknowledge Bonnie Clarke for her very valuable support of this project. We are grateful for the insightful comments of Dr. Roberto Maass-Moreno, particularly on the concept of theranostic digital twins (AI and RPT).

Footnotes

  • Published online Dec. 15, 2022.

  • © 2023 by the Society of Nuclear Medicine and Molecular Imaging.

Immediate Open Access: Creative Commons Attribution 4.0 International License (CC BY) allows users to share and adapt with attribution, excluding materials credited to previous publications. License: https://creativecommons.org/licenses/by/4.0/. Details: http://jnm.snmjournals.org/site/misc/permission.xhtml.

REFERENCES

1. Jiwa M, Millett S, Meng X, Hewitt VM. Impact of the presence of medical equipment in images on viewers’ perceptions of the trustworthiness of an individual on-screen. J Med Internet Res. 2012;14:e100.
2. Dunnick NR. David E. Kuhl, MD. Radiology. 2017;285:1065.
3. Bradshaw TJ, Boellaard R, Dutta J, et al. Nuclear medicine and artificial intelligence: best practices for algorithm development. J Nucl Med. 2022;63:500–510.
4. Jha AK, Bradshaw TJ, Buvat I, et al. Nuclear medicine and artificial intelligence: best practices for evaluation (the RELAINCE guidelines). J Nucl Med. 2022;63:1288–1299.
5. Saboury B, Rahmim A, Siegel E. PET and AI trajectories finally coming into alignment. PET Clin. 2021;16:15–16.
6. Saboury B, Rahmim A, Siegel E. Taming the complexity: using artificial intelligence in a cross-disciplinary innovative platform to redefine molecular imaging and radiopharmaceutical therapy. PET Clin. 2022;17:17–19.
7. Reader AJ, Schramm G. Artificial intelligence for PET image reconstruction. J Nucl Med. 2021;62:1330–1333.
8. Shiri I, Ghafarian P, Geramifar P, et al. Direct attenuation correction of brain PET images using only emission data via a deep convolutional encoder-decoder (Deep-DAC). Eur Radiol. 2019;29:6867–6879.
9. Yu Z, Rahman MA, Schindler T, Laforest R, Jha AK. A physics and learning-based transmission-less attenuation compensation method for SPECT. Proc SPIE Int Soc Opt Eng. 2021:11595.
10. Shiri I, Arabi H, Geramifar P, et al. Deep-JASC: joint attenuation and scatter correction in whole-body 18F-FDG PET using a deep residual network. Eur J Nucl Med Mol Imaging. 2020;47:2533–2548.
11. Liu F, Jang H, Kijowski R, Zhao G, Bradshaw T, McMillan AB. A deep learning approach for 18F-FDG PET attenuation correction. EJNMMI Phys. 2018;5:24.
12. Van Hemmen H, Massa H, Hurley S, Cho S, Bradshaw T, McMillan A. A deep learning-based approach for direct whole-body PET attenuation correction [abstract]. J Nucl Med. 2019;60(suppl 1):569.
13. Rahman A, Zhu Y, Clarkson E, Kupinski MA, Frey EC, Jha AK. Fisher information analysis of list-mode SPECT emission data for joint estimation of activity and attenuation distribution. Inverse Probl. 2020;36:084002.
14. Qian H, Rui X, Ahn S. Deep learning models for PET scatter estimations. In: 2017 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC). IEEE; 2017:1–5.
15. Arabi H, Bortolin K, Ginovart N, Garibotto V, Zaidi H. Deep learning-guided joint attenuation and scatter correction in multitracer neuroimaging studies. Hum Brain Mapp. 2020;41:3667–3679.
16. Sanaat A, Arabi H, Mainta I, Garibotto V, Zaidi H. Projection space implementation of deep learning-guided low-dose brain PET imaging improves performance over implementation in image space. J Nucl Med. 2020;61:1388–1396.
17. Yu Z, Rahman MA, Schindler T, et al. AI-based methods for nuclear-medicine imaging: need for objective task-specific evaluation [abstract]. J Nucl Med. 2020;61(suppl 1):575.
18. Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. Deep learning in medical image registration: a review. Phys Med Biol. 2020;65:20TR01.
19. Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-based detection, classification and prediction/prognosis in medical imaging: towards radiophenomics. PET Clin. 2022;17:183–212.
20. Cui J, Gong K, Guo N, Kim K, Liu H. CT-guided PET parametric image reconstruction using deep neural network without prior training data. In: Proceedings of SPIE 10948, Medical Imaging 2019: Physics of Medical Imaging. SPIE; 2019:109480Z.
21. Xie N, Gong K, Guo N, et al. Clinically translatable direct Patlak reconstruction from dynamic PET with motion correction using convolutional neural network. In: Medical Image Computing and Computer Assisted Intervention: MICCAI 2020. Springer International Publishing; 2020:793–802.
22. Gong K, Catana C, Qi J, Li Q. Direct reconstruction of linear parametric images from dynamic PET using nonlocal deep image prior. IEEE Trans Med Imaging. 2022;41:680–689.
23. Jackson P, Hardcastle N, Dawe N, Kron T, Hofman MS, Hicks RJ. Deep learning renal segmentation for fully automated radiation dose estimation in unsealed source therapy. Front Oncol. 2018;8:215.
24. Akhavanallaf A, Shiri I, Arabi H, Zaidi H. Whole-body voxel-based internal dosimetry using deep learning. Eur J Nucl Med Mol Imaging. 2021;48:670–682.
25. Langlotz CP, Allen B, Erickson BJ, et al. A roadmap for foundational research on artificial intelligence in medical imaging: from the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology. 2019;291:781–791.
26. Morris MA, Saboury B, Burkett B, Gao J, Siegel EL. Reinventing radiology: big data and the future of medical imaging. J Thorac Imaging. 2018;33:4–16.
27. Sitek A, Ahn S, Asma E, et al. Artificial intelligence in PET: an industry perspective. PET Clin. 2021;16:483–492.
28. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems 25 (NIPS 2012). MIT Press; 2012:1–9.
29. Ouyang D, He B, Ghorbani A, et al. Video-based AI for beat-to-beat assessment of cardiac function. Nature. 2020;580:252–256.
30. Huang S-C, Pareek A, Seyyedi S, Banerjee I, Lungren MP. Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines. NPJ Digit Med. 2020;3:136.
31. Kaissis G, Ziller A, Passerat-Palmbach J, et al. End-to-end privacy preserving deep learning on multi-institutional medical imaging. Nat Mach Intell. 2021;3:473–484.
32. Warnat-Herresthal S, Schultze H, Shastry KL, et al. Swarm Learning for decentralized and confidential clinical machine learning. Nature. 2021;594:265–270.
33. Begoli E, Bhattacharya T, Kusnezov D. The need for uncertainty quantification in machine-assisted medical decision making. Nat Mach Intell. 2019;1:20–23.
34. Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018;2:158–164.
35. Ghassemi M, Oakden-Rayner L, Beam AL. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health. 2021;3:e745–e750.
36. Arun N, Gaw N, Singh P, et al. Assessing the trustworthiness of saliency maps for localizing abnormalities in medical imaging. Radiol Artif Intell. 2021;3:e200267.
37. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366:447–453.
38. Murray E, Treweek S, Pope C, et al. Normalisation process theory: a framework for developing, evaluating and implementing complex interventions. BMC Med. 2010;8:63.
39. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104:510–520.
40. May C. A rational model for assessing and evaluating complex interventions in health care. BMC Health Serv Res. 2006;6:86.
41. Gong K, Kim K, Cui J, Wu D, Li Q. The evolution of image reconstruction in PET: from filtered back-projection to artificial intelligence. PET Clin. 2021;16:533–542.
42. McMillan AB, Bradshaw TJ. Artificial intelligence–based data corrections for attenuation and scatter in positron emission tomography and single-photon emission computed tomography. PET Clin. 2021;16:543–552.
43. Liu J, Malekzadeh M, Mirian N, Song T-A, Liu C, Dutta J. Artificial intelligence-based image enhancement in PET imaging: noise reduction and resolution enhancement. PET Clin. 2021;16:553–576.
44. Yousefirizi F, Jha AK, Brosch-Lenz J, Saboury B, Rahmim A. Toward high-throughput artificial intelligence-based segmentation in oncological PET imaging. PET Clin. 2021;16:577–596.
45. Cross DJ, Komori S, Minoshima S. Artificial intelligence for brain molecular imaging. PET Clin. 2022;17:57–64.
46. Gharavi SMH, Faghihimehr A. Clinical application of artificial intelligence in PET imaging of head and neck cancer. PET Clin. 2022;17:65–76.
47. Zukotynski KA, Gaudet VC, Uribe CF, Chiam K, Bénard F, Gerbaudo VH. Clinical applications of artificial intelligence in positron emission tomography of lung cancer. PET Clin. 2022;17:77–84.
48. Miller RJH, Singh A, Dey D, Slomka P. Artificial intelligence and cardiac PET/computed tomography imaging. PET Clin. 2022;17:85–94.
49. Slart RHJA, Williams MC, Juarez-Orozco LE, et al. Position paper of the EACVI and EANM on artificial intelligence applications in multimodality cardiovascular imaging using SPECT/CT, PET/CT, and cardiac CT. Eur J Nucl Med Mol Imaging. 2021;48:1399–1413.
50. Paravastu SS, Theng EH, Morris MA, et al. Artificial intelligence in vascular-PET: translational and clinical applications. PET Clin. 2022;17:95–113.
51. Paravastu SS, Hasani N, Farhadi F, et al. Applications of artificial intelligence in 18F-sodium fluoride positron emission tomography/computed tomography: current state and future directions. PET Clin. 2022;17:115–135.
52. Ma K, Harmon SA, Klyuzhin IS, Rahmim A, Turkbey B. Clinical application of artificial intelligence in positron emission tomography: imaging of prostate cancer. PET Clin. 2022;17:137–143.
53. Hasani N, Paravastu SS, Farhadi F, et al. Artificial intelligence in lymphoma PET imaging: a scoping review (current trends and future directions). PET Clin. 2022;17:145–174.
54. Wang Y, Li E, Cherry SR, Wang G. Total-body PET kinetic modeling and potential opportunities using deep learning. PET Clin. 2021;16:613–625.
55. Ding W, Yu J, Zheng C, et al. Machine learning-based noninvasive quantification of single-imaging session dual-tracer 18F-FDG and 68Ga-DOTATATE dynamic PET-CT in oncology. IEEE Trans Med Imaging. 2022;41:347–359.
56. Tunyasuvunakool K, Adler J, Wu Z, et al. Highly accurate protein structure prediction for the human proteome. Nature. 2021;596:590–596.
57. Webb EW, Scott PJH. Potential applications of artificial intelligence and machine learning in radiochemistry and radiochemical engineering. PET Clin. 2021;16:525–532.
58. Ataeinia B, Heidari P. Artificial intelligence and the future of diagnostic and therapeutic radiopharmaceutical development: in silico smart molecular design. PET Clin. 2021;16:513–523.
59. Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med. 2021;83:122–137.
60. Shi L, Onofrey JA, Liu H, Liu Y-H, Liu C. Deep learning-based attenuation map generation for myocardial perfusion SPECT. Eur J Nucl Med Mol Imaging. 2020;47:2383–2395.
61. Brosch-Lenz J, Yousefirizi F, Zukotynski K, et al. Role of artificial intelligence in theranostics: toward routine personalized radiopharmaceutical therapies. PET Clin. 2021;16:627–641.
62. Beegle C, Hasani N, Maass-Moreno R, Saboury B, Siegel E. Artificial intelligence and positron emission tomography imaging workflow. PET Clin. 2022;17:31–39.
63. Ullah MN, Levin CS. Application of artificial intelligence in PET instrumentation. PET Clin. 2022;17:175–182.
64. Yang J, Sohn JH, Behr SC, Gullberg GT, Seo Y. CT-less direct correction of attenuation and scatter in the image space using deep learning for whole-body FDG PET: potential benefits and pitfalls. Radiol Artif Intell. 2020;3:e200137.
65. Mezrich JL. Demystifying medico-legal challenges of artificial intelligence applications in molecular imaging and therapy. PET Clin. 2022;17:41–49.
66. Saboury B, Morris M, Siegel E. Future directions in artificial intelligence. Radiol Clin North Am. 2021;59:1085–1095.
67. Yousefi Nooraie R, Lyons PG, Baumann AA, Saboury B. Equitable implementation of artificial intelligence in medical imaging: what can be learned from implementation science? PET Clin. 2021;16:643–653.
68. Martinho A, Kroesen M, Chorus C. A healthy debate: exploring the views of medical doctors on the ethics of artificial intelligence. Artif Intell Med. 2021;121:102190.
69. Hasani N, Morris MA, Rahmim A, et al. Trustworthy artificial intelligence in medical imaging. PET Clin. 2022;17:1–12.
70. Kilbride MK, Joffe S. The new age of patient autonomy: implications for the patient-physician relationship. JAMA. 2018;320:1973–1974.
71. Lyell D, Coiera E. Automation bias and verification complexity: a systematic review. J Am Med Inform Assoc. 2017;24:423–431.
72. Vinuesa R, Azizpour H, Leite I, et al. The role of artificial intelligence in achieving the sustainable development goals. Nat Commun. 2020;11:233.
73. Jha AK, Myers KJ, Obuchowski NA, et al. Objective task-based evaluation of artificial intelligence-based medical imaging methods: framework, strategies, and role of the physician. PET Clin. 2021;16:493–511.
74. Grant ES. Requirements engineering for safety critical systems: an approach for avionic systems. In: 2016 2nd IEEE International Conference on Computer and Communications (ICCC). IEEE; 2016:991–995.
75. Zhou Q, Zuley M, Guo Y, et al. A machine and human reader study on AI diagnosis model safety under attacks of adversarial images. Nat Commun. 2021;12:7281.
76. Hasani N, Farhadi F, Morris MA, et al. Artificial intelligence in medical imaging and its impact on the rare disease community: threats, challenges and opportunities. PET Clin. 2022;17:13–29.
77. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. 2019;1:206–215.
78. Harvey DL. Agency and community: a critical realist paradigm. J Theory Soc Behav. 2002;32:163–194.
79. Science and Technology Policy Office. Notice of request for information (RFI) on public and private sector uses of biometric technologies. Fed Regist. 2021;86:56300–56302.
  • Received for publication February 13, 2022.
  • Revision received December 6, 2022.
Journal of Nuclear Medicine. February 2023;64(2):188–196. DOI: 10.2967/jnumed.121.263703

Keywords: artificial intelligence; trustworthy; nuclear medicine; ecosystem