Journal of Nuclear Medicine
Research Article | Continuing Education

Large Language Models and Large Multimodal Models in Medical Imaging: A Primer for Physicians

Tyler J. Bradshaw, Xin Tie, Joshua Warner, Junjie Hu, Quanzheng Li and Xiang Li
Journal of Nuclear Medicine February 2025, 66 (2) 173-182; DOI: https://doi.org/10.2967/jnumed.124.268072
Author affiliations:
1. Tyler J. Bradshaw, Xin Tie, Joshua Warner: Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin
2. Junjie Hu: Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, Wisconsin
3. Quanzheng Li, Xiang Li: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts

Figures

FIGURE 1. Timeline of when different NLP and language modeling algorithms and techniques were introduced, together with definitions and examples.

FIGURE 2. Tokenization breaks text into more fundamental units called tokens. Each token is then represented by an embedding vector. Embeddings for tokens with similar meanings tend to group together in vector space (illustrated here in 3-dimensional space for simplicity).
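The tokenization-and-embedding step described in the FIGURE 2 caption can be sketched in a few lines. The vocabulary, embedding values, and whitespace tokenizer below are illustrative stand-ins only; real LLMs learn subword vocabularies of tens of thousands of tokens and embeddings with hundreds to thousands of dimensions.

```python
import numpy as np

# Toy vocabulary and 3-dimensional embeddings (hand-picked illustrative values)
vocab = {"the": 0, "eye": 1, "eyes": 2, "retina": 3, "scan": 4}
embeddings = np.array([
    [0.1, 0.0, 0.9],   # "the"
    [0.8, 0.7, 0.1],   # "eye"
    [0.8, 0.6, 0.2],   # "eyes"  (close to "eye" in vector space)
    [0.7, 0.8, 0.1],   # "retina"
    [0.2, 0.9, 0.5],   # "scan"
])

def tokenize(text):
    """Whitespace tokenization; real tokenizers (e.g., BPE) split into subwords."""
    return [vocab[w] for w in text.lower().split()]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

tokens = tokenize("the retina scan")
print(tokens)                                # token ids: [0, 3, 4]
vectors = embeddings[tokens]                 # lookup: one embedding per token
print(cosine(embeddings[1], embeddings[2]))  # "eye" vs "eyes": high similarity
print(cosine(embeddings[0], embeddings[3]))  # "the" vs "retina": lower
```

Tokens with related meanings ("eye", "eyes", "retina") end up with high pairwise cosine similarity, which is the grouping behavior the figure illustrates.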

FIGURE 3. Attention module is core building block of transformer networks. In self-attention, embeddings for single token of input sequence (e.g., “eyes”) are updated by first computing attention weights using query-key comparisons and then using attention weights to do weighted sum of value vectors. Resulting update vector is then added to original embeddings.
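The query-key-value computation in the FIGURE 3 caption can be sketched with NumPy. The projection matrices here are random stand-ins for learned weights, and multihead attention and masking are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                # embedding dimension (tiny for illustration)
n = 4                                # sequence length
X = rng.normal(size=(n, d))          # one embedding per input token

# Learned projection matrices (random stand-ins here)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv     # queries, keys, values

# Attention weights: scaled query-key dot products, softmax over each row
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

update = weights @ V                 # weighted sum of value vectors
output = X + update                  # residual: update added to original embeddings
```

Each row of `weights` is a probability distribution over the input positions, so each token's update is a convex combination of the value vectors, exactly the weighted sum the caption describes.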

FIGURE 4. Transformers can be organized as encoder-only, decoder-only, and encoder–decoder networks, depending on prediction task. MLP = multilayer perceptron.
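One concrete way the encoder/decoder distinction shows up is in the attention mask: a decoder-only model restricts each position to attend only to earlier positions (for next-token prediction), whereas an encoder attends bidirectionally. A minimal sketch of the two masks:

```python
import numpy as np

n = 5  # sequence length
# Causal (decoder) mask: position i may attend only to positions j <= i.
causal_mask = np.tril(np.ones((n, n), dtype=bool))
# Encoder (bidirectional) mask: every position attends to every position.
encoder_mask = np.ones((n, n), dtype=bool)

# The mask is applied before the softmax: disallowed positions get -inf,
# so their attention weight becomes exactly 0.
scores = np.zeros((n, n))            # placeholder query-key scores
masked = np.where(causal_mask, scores, -np.inf)
print(causal_mask.astype(int))       # lower-triangular pattern
```

The lower-triangular pattern is what prevents a decoder-only model from "seeing the future" during training.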

FIGURE 5. Pipeline for creating chatbot LLM.

FIGURE 6. There are 3 primary techniques for integrating language and images in multimodal models: contrastive learning, late fusion (i.e., cross-attention), and early fusion.
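The contrastive-learning technique named in the FIGURE 6 caption (as popularized by CLIP-style training) pairs an image encoder with a text encoder and pulls matched image-report pairs together while pushing mismatched pairs apart. A toy sketch with random stand-in encoder outputs; the 0.07 temperature follows CLIP's convention, and everything else here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
batch, dim = 4, 16
# Stand-ins for image-encoder and text-encoder outputs of 4 paired
# image-report examples (real models use ViT/transformer encoders).
img = rng.normal(size=(batch, dim))
txt = img + 0.1 * rng.normal(size=(batch, dim))   # matched pairs are similar

# L2-normalize, then temperature-scaled pairwise cosine similarities
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)
logits = img @ txt.T / 0.07

def cross_entropy(logits, targets):
    """Mean cross-entropy with the correct pairing on the diagonal."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(targets)), targets].mean()

# Symmetric contrastive loss: each image should match its own report and
# each report its own image (the diagonal of the similarity matrix).
targets = np.arange(batch)
loss = (cross_entropy(logits, targets) + cross_entropy(logits.T, targets)) / 2
print(round(float(loss), 3))
```

Minimizing this loss makes diagonal (matched) similarities dominate the off-diagonal ones, which is what lets a trained model retrieve the report that best describes a given image.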

FIGURE 7. Different applications of LLMs and LMMs in radiology and health care.


In this issue

Journal of Nuclear Medicine, Vol. 66, Issue 2, February 1, 2025

Jump to section

  • Abstract
  • BRIEF HISTORY OF NLP
  • COMPONENTS OF LLMS
  • DEVELOPMENT OF LLMS
  • USING LLMS
  • APPLICATIONS OF LLMS IN MEDICAL IMAGING
  • LMMS
  • APPLICATIONS OF LMMS IN MEDICAL IMAGING
  • FUTURE OUTLOOK
  • CONCLUSION
  • ACKNOWLEDGMENT
  • Footnotes
  • REFERENCES



Keywords

  • computer/PACS
  • statistics
  • artificial intelligence
  • educational
  • large language models
  • machine learning
© 2025 SNMMI