This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Should AI models be explainable to clinicians?
Citations: 114
Authors: 5
Year: 2024
Abstract
In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain ongoing challenges, and even though XAI is a growing field, trade-offs between performance and explainability may be necessary.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,408 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,253 citations
"Why Should I Trust You?"
2016 · 14.286 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,132 citations
Authors
Institutions
- Inserm (FR)
- Université Paris-Saclay (FR)
- Centre Hospitalier Universitaire de Grenoble (FR)
- Assistance Publique – Hôpitaux de Paris (FR)
- Hôpital Marie Lannelongue (FR)
- Hôpital Paris Saint-Joseph (FR)
- Université Gustave Eiffel (FR)
- Bicêtre Hospital (FR)
- Emory University (US)
- Université de Versailles Saint-Quentin-en-Yvelines (FR)
- Données et algorithmes pour une ville intelligente et durable (FR)