This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Reply to: False conflict and false confirmation errors are crucial components of AI accuracy in medical decision making
Citations: 4
Authors: 3
Year: 2024
Abstract
Subtable A describes the theoretical concept in which cases can fall into one of 8 scenarios: 4 errors (i–iv) and 4 correct predictions (a–d). Subtables B–D correspond to the publicly available data from Chanda and colleagues [1]: B refers to the whole set of participating clinicians, C to the best-performing clinicians, and D to the worst-performing clinicians. In Subtables B–D, bold numbers mark cases with correct diagnoses after taking AI advice into account, and underlined numbers mark cases with incorrect diagnoses after taking AI advice into account.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations