PT - JOURNAL ARTICLE
AU - Rogasch, Julian M.M.
AU - Metzger, Giulia
AU - Preisler, Martina
AU - Galler, Markus
AU - Thiele, Felix
AU - Brenner, Winfried
AU - Feldhaus, Felix
AU - Wetz, Christoph
AU - Amthauer, Holger
AU - Furth, Christian
AU - Schatka, Imke
TI - ChatGPT: Can You Prepare My Patients for [<sup>18</sup>F]FDG PET/CT and Explain My Reports?
AID - 10.2967/jnumed.123.266114
DP - 2023 Sep 14
TA - Journal of Nuclear Medicine
PG - jnumed.123.266114
4099 - http://jnm.snmjournals.org/content/early/2023/09/14/jnumed.123.266114.short
4100 - http://jnm.snmjournals.org/content/early/2023/09/14/jnumed.123.266114.full
AB - We evaluated whether the artificial intelligence chatbot ChatGPT can adequately answer patient questions related to [<sup>18</sup>F]FDG PET/CT in common clinical indications before and after scanning. Methods: Thirteen questions regarding [<sup>18</sup>F]FDG PET/CT were submitted to ChatGPT. ChatGPT was also asked to explain 6 PET/CT reports (lung cancer, Hodgkin lymphoma) and answer 6 follow-up questions (e.g., on tumor stage or recommended treatment). To be rated "useful" or "appropriate," a response had to be adequate by the standards of the nuclear medicine staff. Inconsistency was assessed by regenerating responses. Results: Responses were rated "appropriate" for 92% of 25 tasks and "useful" for 96%. Considerable inconsistencies were found between regenerated responses for 16% of tasks. Responses to 83% of sensitive questions (e.g., staging/treatment options) were rated "empathetic." Conclusion: ChatGPT might adequately substitute for advice given to patients by nuclear medicine staff in the investigated settings. Improving the consistency of ChatGPT would further increase reliability.