Should artificial intelligence have lower acceptable error rates than humans?

Anders Lenskjold*, Janus Uhd Nybing, Charlotte Trampedach, Astrid Galsgaard, Mathias Willadsen Brejnebøl, Henriette Raaschou, Martin Høyer Rose, Mikael Boesen

*Corresponding author for this work

Publication: Contribution to journal › Journal article › peer review


Abstract

In a new clinical implementation of a knee osteoarthritis artificial intelligence (AI) algorithm at Bispebjerg-Frederiksberg University Hospital, Copenhagen, Denmark, the first patient was misclassified in the diagnostic conclusion according to a local clinical expert opinion. In preparation for the evaluation of the AI algorithm, the implementation team collaborated with internal and external partners to plan workflows, and the algorithm was externally validated. After the misclassification, the team was left wondering: what is an acceptable error rate for a low-risk AI diagnostic algorithm? A survey among employees at the Department of Radiology showed significantly lower acceptable error rates for AI (6.8%) than for humans (11.3%). A general mistrust of AI could explain the discrepancy in acceptable error rates. AI may have the disadvantage of limited social capital and likeability compared to human co-workers, and therefore less potential for forgiveness. Future AI development and implementation require further investigation of the fear of AI's unknown errors to enhance the trustworthiness of AI as a co-worker. Benchmark tools, transparency, and explainability are also needed to evaluate AI algorithms in clinical implementations and ensure acceptable performance.
Original language: English
Article number: 20220053
Journal: BJR open
Volume: 5
Issue number: 1
Number of pages: 3
ISSN: 2513-9878
DOI
Status: Published - 2023

Bibliographical note

© 2023 The Authors. Published by the British Institute of Radiology.
