This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI
Citations: 49
Authors: 1
Year: 2021
Abstract
The sudden rise in the ability of machine learning methodologies, such as deep neural networks, to identify and predict with great accuracy instances of malignant cell growth from radiological images has led prominent developers of this technology, such as Geoffrey Hinton, to hold the view that "[…] we should stop training radiologists." Similar views exist in other contexts regarding the replacement of humans with artificial intelligence (AI) technologies. The assumption behind such views is that deep neural networks are better than human radiologists in that they are more accurate, less costly, and have more predictive power than their human counterparts. In this paper, I argue that these considerations, even if true, are simply inadequate as reasons to allocate the kind of trust suggested by Hinton and others to these sorts of artifacts. In particular, I show that if the same considerations were true of something other than an AI device, say a pigeon, we would not have sufficient reason to trust it in the way suggested for deep neural networks in a medical setting. If this is the case, then these considerations are also insufficient grounds to trust AI enough to replace radiologists. Furthermore, I argue that the reliability of AI methodologies such as deep neural networks, which is at the center of this argument, has not yet been established, and establishing it faces fundamental challenges. Because of these challenges, it is not possible to ascribe to such systems the level of reliability expected of a deployed medical device. So not only are the reasons cited in favor of deploying AI technologies in medical settings insufficient even if they are true, but knowing whether they are true faces non-trivial epistemic challenges. If this is so, then we have no good reasons to advocate replacing radiologists with AI methodologies such as deep neural networks.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations