Reproducibility of Training Deep Learning Models for Medical Image Analysis

Joeran Sander Bosma*, Dré Peeters, Natália Alves, Anindo Saha, Zaigham Saghir, Colin Jacobs, Henkjan Huisman

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Research › peer-review


Abstract

The performance of deep learning algorithms varies with their development data and training method, but also due to several stochastic processes during training. Because of these random factors, a single training run may not accurately reflect the performance of a given training method. Statistical comparisons in the literature between different deep learning training methods typically ignore this performance variation between training runs and incorrectly claim significance for changes in the training method. We hypothesize that the impact of such performance variation is substantial, to the extent that it may invalidate biomedical competition leaderboards and some scientific papers. To test this, we investigated the reproducibility of training deep learning algorithms for medical image analysis. We repeated training runs from prior scientific studies: three diagnostic tasks (pancreatic cancer detection in CT, clinically significant prostate cancer detection in MRI, and lung nodule malignancy risk estimation in low-dose CT) and two organ segmentation tasks (pancreas segmentation in CT and prostate segmentation in MRI). A previously published top-performing algorithm for each task was trained multiple times to determine the variance in model performance. For all three diagnostic algorithms, the performance variation from retraining was significant compared to the data variance. Statistically comparing independently trained algorithms from the same training method on the same dataset should follow the null hypothesis, yet we observed claimed significance (p-value below 0.05) in 15% of comparisons with conventional testing (paired bootstrapping). We conclude that the variance in model performance due to retraining is substantial and should be accounted for.
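
The conventional test referred to in the abstract, paired bootstrapping over a shared test set, can be sketched as follows. This is a minimal illustration, not the authors' published code: the function name, the choice of AUC as the metric, and the number of bootstrap iterations are assumptions made for the example.

```python
# Minimal sketch of a paired bootstrap comparison between two models scored on
# the same test cases. All names and parameters here are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score


def paired_bootstrap_p_value(y_true, scores_a, scores_b, n_iterations=1000, seed=42):
    """Estimate a two-sided p-value for the AUC difference between two models
    by resampling the same test cases for both models (paired bootstrap)."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    scores_a = np.asarray(scores_a)
    scores_b = np.asarray(scores_b)
    n = len(y_true)

    diffs = []
    while len(diffs) < n_iterations:
        idx = rng.integers(0, n, size=n)  # sample cases with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue  # AUC is undefined without both classes; redraw this sample
        diffs.append(
            roc_auc_score(y_true[idx], scores_a[idx])
            - roc_auc_score(y_true[idx], scores_b[idx])
        )
    diffs = np.asarray(diffs)

    # Two-sided p-value: fraction of bootstrap differences on either side of zero
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return min(p, 1.0)
```

Under the abstract's argument, two models retrained with the identical method on the identical dataset should satisfy the null hypothesis, so a p-value below 0.05 from such a test should occur in roughly 5% of comparisons rather than the 15% observed.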

Original language: English
Journal: Proceedings of Machine Learning Research
Volume: 227
Pages (from-to): 1269-1287
Number of pages: 19
ISSN: 2640-3498
Publication status: Published - 2023
Externally published: Yes
Event: 6th International Conference on Medical Imaging with Deep Learning, MIDL 2023 - Nashville, United States
Duration: 10 Jul 2023 – 12 Jul 2023

Conference

Conference: 6th International Conference on Medical Imaging with Deep Learning, MIDL 2023
Country/Territory: United States
City: Nashville
Period: 10/07/2023 – 12/07/2023

Bibliographical note

Publisher Copyright:
© 2023 CC-BY 4.0, J.S. Bosma, D. Peeters, N. Alves, A. Saha, Z. Saghir, C. Jacobs & H. Huisman.

Keywords

  • Deep learning
  • medical image analysis
  • performance variance
  • reproducibility
