Explainable AI in nuclear medicine

Sune Holm*, Daria Ferrara, Miriam Pepponi, Elisabetta Abenavoli, Armin Frille, Shaul Duke, Stefan Grünert, Marcus Hacker, Bengt Hennig, Swen Hesse, Lukas Hofmann, Thomas B. Lund, Osama Sabri, Peter Sandøe, Roberto Sciagrà, Lalith Kumar Shiyam Sundar, Josef Yu, Thomas Beyer

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Communication

Abstract

Purpose:
In this short communication, we consider the need for explainable AI from the perspective of a large multi-disciplinary research project for predicting cachexia in cancer patients.
Materials and methods:
In a series of meetings, project participants with expertise in medicine, data science, sociology, and philosophy discussed the need for explainability.
Results:
We distinguish between contexts in which a black-box AI tool undertakes tasks that users can perform or validate themselves and contexts in which this is not the case.
Conclusion:
We conclude that explanations are likely required when a black-box AI tool undertakes tasks that users cannot perform or validate themselves. If the user can verify outputs manually, documented reliability and accuracy may suffice, but explainability can still add value when outputs are uncertain or errors occur. More generally, close collaboration among physicians, AI developers, and other stakeholders is crucial to ensure that AI tools are trustworthy and useful in clinical practice.

Original language: English
Journal: European Journal of Nuclear Medicine and Molecular Imaging
Number of pages: 4
ISSN: 1619-7070
DOIs
Publication status: E-pub ahead of print - 25 Nov 2025

Bibliographical note

Publisher Copyright:
© The Author(s) 2025.

Keywords

  • Cachexia
  • Clinical decision-making
  • Explainable AI
  • Lung cancer
  • Medical imaging
  • Trustworthy AI
