Evaluating gradient-based explanation methods for neural network ECG analysis using heatmaps

Andrea Marheim Storås, Steffen Mæland, Jonas L. Isaksen, Steven Alexander Hicks*, Vajira Thambawita, Claus Graff, Hugo Lewi Hammer, Pål Halvorsen, Michael Alexander Riegler, Jørgen K. Kanters

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Objective: To evaluate popular explanation methods that use heatmap visualizations to explain the predictions of deep neural networks for electrocardiogram (ECG) analysis, and to provide recommendations for selecting explanation methods.

Materials and Methods: A residual deep neural network was trained on ECGs to predict intervals and amplitudes. Nine commonly used explanation methods (Saliency, Deconvolution, Guided backpropagation, Gradient SHAP, SmoothGrad, Input × gradient, DeepLIFT, Integrated gradients, GradCAM) were evaluated qualitatively by medical experts and objectively using a perturbation-based method.

Results: No single explanation method consistently outperformed the others, but some methods were clearly inferior. We found considerable disagreement between the human expert evaluation and the objective evaluation by perturbation.

Discussion: The best explanation method depended on the ECG measure. To ensure that future explanations of deep neural networks for medical data analyses are useful to medical experts, data scientists developing new explanation methods should collaborate closely with domain experts. Because no explanation method performs best in all use cases, several methods should be applied.

Conclusion: Several explanation methods should be used to determine the most suitable approach.
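To illustrate the kind of methods evaluated in the abstract, the sketch below computes a vanilla Saliency heatmap (the absolute gradient of the prediction with respect to the input signal) for a 1D signal, then runs a minimal perturbation check in the spirit of the objective evaluation: occluding the most salient samples and measuring how much the prediction shifts. The model, signal length, and occlusion value are hypothetical stand-ins, not the paper's residual network or evaluation protocol.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the ECG model: a tiny 1D CNN regressing one
# ECG measure (e.g. an interval) from a single simulated lead.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)
model.eval()

def saliency_map(model, signal):
    """Vanilla Saliency: |d(output)/d(input)| per sample point."""
    signal = signal.clone().requires_grad_(True)
    model(signal).sum().backward()
    return signal.grad.abs()

ecg = torch.randn(1, 1, 500)        # one simulated 500-sample lead
heatmap = saliency_map(model, ecg)  # same shape as the input signal

# Perturbation check (sketch): zero out the top-k most salient samples
# and see how far the prediction moves; a larger shift suggests the
# heatmap highlighted genuinely influential regions.
k = 50
topk = heatmap.flatten().topk(k).indices
perturbed = ecg.detach().clone().flatten()
perturbed[topk] = 0.0
perturbed = perturbed.view_as(ecg)
with torch.no_grad():
    shift = (model(perturbed) - model(ecg)).abs().item()
```

The other gradient-based methods named above (Integrated gradients, SmoothGrad, DeepLIFT, etc.) follow the same attribution pattern and are available as ready-made implementations in libraries such as Captum.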

Original language: English
Journal: Journal of the American Medical Informatics Association
Volume: 32
Issue number: 1
Pages (from-to): 79-88
Number of pages: 10
ISSN: 1067-5027
DOIs
Publication status: Published - 2025

Bibliographical note

Publisher Copyright:
© The Author(s) 2024. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.

Keywords

  • explainable artificial intelligence
  • machine learning