This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Which explanations do clinicians prefer? A comparative evaluation of XAI understandability and actionability in predicting the need for hospitalization
Citations: 7
Authors: 20
Year: 2025
Abstract
BACKGROUND: This study aims to address the gap in understanding clinicians' attitudes toward explainable AI (XAI) methods applied to machine learning models using tabular data, which are common in clinical settings. It specifically explores clinicians' perceptions of different XAI methods from the ALFABETO project, which predicts COVID-19 patient hospitalization based on clinical, laboratory, and chest X-ray data at the time of presentation to the Emergency Department. The focus is on two cognitive dimensions: the understandability and actionability of the explanations provided by explainable-by-design and post-hoc methods. METHODS: A questionnaire-based experiment was conducted with 10 clinicians from the IRCCS Policlinico San Matteo Foundation in Pavia, Italy. Each clinician evaluated 10 real-world cases, rating predictions and explanations from three XAI tools: Bayesian networks, SHapley Additive exPlanations (SHAP), and AraucanaXAI. Two cognitive statements for each method were rated on a Likert scale, as was agreement with the prediction. Two clinicians answered the survey during think-aloud interviews. RESULTS: Clinicians demonstrated generally positive attitudes toward AI, but high compliance rates (86% on average) indicate a risk of automation bias. Understandability and actionability are positively correlated, with SHAP being the preferred method due to its simplicity. However, the perception of the methods varies according to specialty and expertise. CONCLUSIONS: The findings suggest that SHAP and AraucanaXAI are promising candidates for improving the use of XAI in clinical decision support systems (DSSs), highlighting the importance of clinicians' expertise, specialty, and setting for the selection and development of supportive XAI advice. Finally, the study provides valuable insights into the design of future XAI DSSs.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 21,050 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,381 citations
"Why Should I Trust You?"
2016 · 14,789 citations
Generative adversarial networks
2020 · 13,381 citations
Authors
- Laura Bergomi
- Giovanna Nicora
- Marta Anna Orlowska
- Chiara Podrecca
- Riccardo Bellazzi
- Caterina Fregosi
- Francesco Salinaro
- Marco Bonzano
- Giuseppe Crescenzi
- Francesco Speciale
- S. Di Pietro
- Valentina Zuccaro
- Erika Asperges
- Paolo Sacchi
- Pietro Valsecchi
- Elisabetta Pagani
- Michele Catalano
- Chandra Bortolotto
- Lorenzo Preda
- Enea Parimbelli