TY - JOUR
T1 - Explainable AI in nuclear medicine
AU - Holm, Sune
AU - Ferrara, Daria
AU - Pepponi, Miriam
AU - Abenavoli, Elisabetta
AU - Frille, Armin
AU - Duke, Shaul
AU - Grünert, Stefan
AU - Hacker, Marcus
AU - Hennig, Bengt
AU - Hesse, Swen
AU - Hofmann, Lukas
AU - Lund, Thomas B.
AU - Sabri, Osama
AU - Sandøe, Peter
AU - Sciagrà, Roberto
AU - Sundar, Lalith Kumar Shiyam
AU - Yu, Josef
AU - Beyer, Thomas
N1 - Publisher Copyright:
© The Author(s) 2025.
PY - 2025/11/25
Y1 - 2025/11/25
N2 - Purpose: In this short communication, we consider the need for explainable AI from the perspective of a large multi-disciplinary research project for predicting cachexia in cancer patients. Materials and methods: In a series of meetings bringing together expertise from medicine, data science, sociology, and philosophy, project participants discussed the need for explainability. Results: We distinguish between contexts in which a black-box AI tool undertakes tasks that users can perform or validate themselves and contexts in which this is not the case. Conclusion: Explanations are likely required when a black-box AI tool undertakes tasks that users cannot perform or validate themselves. If the user can verify outputs manually, documented reliability and accuracy may suffice, but explainability can still add value when outputs are uncertain or errors occur. More generally, close collaboration among physicians, AI developers, and other stakeholders is crucial to ensure that AI tools are trustworthy and useful in clinical practice.
AB - Purpose: In this short communication, we consider the need for explainable AI from the perspective of a large multi-disciplinary research project for predicting cachexia in cancer patients. Materials and methods: In a series of meetings bringing together expertise from medicine, data science, sociology, and philosophy, project participants discussed the need for explainability. Results: We distinguish between contexts in which a black-box AI tool undertakes tasks that users can perform or validate themselves and contexts in which this is not the case. Conclusion: Explanations are likely required when a black-box AI tool undertakes tasks that users cannot perform or validate themselves. If the user can verify outputs manually, documented reliability and accuracy may suffice, but explainability can still add value when outputs are uncertain or errors occur. More generally, close collaboration among physicians, AI developers, and other stakeholders is crucial to ensure that AI tools are trustworthy and useful in clinical practice.
KW - Cachexia
KW - Clinical decision-making
KW - Explainable AI
KW - Lung cancer
KW - Medical imaging
KW - Trustworthy AI
U2 - 10.1007/s00259-025-07675-4
DO - 10.1007/s00259-025-07675-4
M3 - Journal article
C2 - 41288691
AN - SCOPUS:105022914402
SN - 1619-7070
JO - European Journal of Nuclear Medicine and Molecular Imaging
JF - European Journal of Nuclear Medicine and Molecular Imaging
ER -