PT - JOURNAL ARTICLE
AU - Kevin Tobia
AU - Aileen Nielsen
AU - Alexander Stremitzer
TI - When Does Physician Use of AI Increase Liability?
AID - 10.2967/jnumed.120.256032
DP - 2021 Jan 01
TA - Journal of Nuclear Medicine
PG - 17-21
VI - 62
IP - 1
4099 - http://jnm.snmjournals.org/content/62/1/17.short
4100 - http://jnm.snmjournals.org/content/62/1/17.full
SO - J Nucl Med 2021 Jan 01; 62
AB - An increasing number of automated and artificial intelligence (AI) systems make medical treatment recommendations, including personalized recommendations that can deviate from standard care. Legal scholars argue that following such nonstandard treatment recommendations will increase liability in medical malpractice, undermining the use of potentially beneficial medical AI. However, such liability depends in part on lay judgments by jurors: when physicians use AI systems, under what circumstances would jurors hold physicians liable? Methods: To determine potential jurors’ judgments of liability, we conducted an online experimental study of a nationally representative sample of 2,000 U.S. adults. Each participant read 1 of 4 scenarios in which an AI system provides a treatment recommendation to a physician. The scenarios varied the AI recommendation (standard or nonstandard care) and the physician’s decision (to accept or reject that recommendation). In each scenario, the physician’s decision subsequently caused harm. Participants then assessed the physician’s liability. Results: Our results indicate that physicians who receive advice from an AI system to provide standard care can reduce the risk of liability by accepting, rather than rejecting, that advice, all else being equal. However, when an AI system recommends nonstandard care, there is no similar shielding effect of rejecting that advice and thereby providing standard care. Conclusion: The tort law system is unlikely to undermine the use of AI precision medicine tools and may even encourage the use of these tools.