OpenAlex · Updated hourly · Last updated: 17.05.2026, 04:28

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator

2019 · 194 citations · Journal of the American Medical Informatics Association · Open Access
Open full text at publisher

194 citations · 6 authors · Year: 2019

Abstract

OBJECTIVE: Implementation of machine learning (ML) may be limited by patients' right to "meaningful information about the logic involved" when ML influences healthcare decisions. Given the complexity of healthcare decisions, it is likely that ML outputs will need to be understood and trusted by physicians, and then explained to patients. We therefore investigated the association between physician understanding of ML outputs, their ability to explain these to patients, and their willingness to trust the ML outputs, using various ML explainability methods. MATERIALS AND METHODS: We designed a survey for physicians with a diagnostic dilemma that could be resolved by an ML risk calculator. Physicians were asked to rate their understanding, explainability, and trust in response to 3 different ML outputs. One ML output had no explanation of its logic (the control) and 2 ML outputs used different model-agnostic explainability methods. The relationships among understanding, explainability, and trust were assessed using Cochran-Mantel-Haenszel tests of association. RESULTS: The survey was sent to 1315 physicians, and 170 (13%) provided completed surveys. There were significant associations between physician understanding and explainability (P < .001), between physician understanding and trust (P < .001), and between explainability and trust (P < .001). ML outputs that used model-agnostic explainability methods were preferred by 88% of physicians when compared with the control condition; however, no particular ML explainability method had a greater influence on intended physician behavior. CONCLUSIONS: Physician understanding, explainability, and trust in ML risk calculators are related. Physicians preferred ML outputs accompanied by model-agnostic explanations but the explainability method did not alter intended physician behavior.
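The Cochran-Mantel-Haenszel test used in the Methods can be sketched in plain Python. The stratified 2×2 tables below are hypothetical illustrations (not the paper's data), and the `cmh_test` helper is an assumed name; this is a minimal sketch of the continuity-corrected CMH statistic for K strata, not the authors' analysis code.

```python
import math

def cmh_test(strata):
    """Cochran-Mantel-Haenszel test (continuity-corrected) for K 2x2 tables.

    Each stratum is ((a, b), (c, d)): rows = condition, cols = response.
    Returns the chi-square statistic (1 df) and its p-value.
    """
    num = 0.0  # running sum of (observed a - expected a under independence)
    var = 0.0  # running sum of hypergeometric variances of cell a
    for (a, b), (c, d) in strata:
        n = a + b + c + d
        row1, col1 = a + b, a + c
        num += a - row1 * col1 / n
        var += (row1 * (c + d) * col1 * (b + d)) / (n * n * (n - 1))
    stat = (abs(num) - 0.5) ** 2 / var       # continuity correction
    p = math.erfc(math.sqrt(stat / 2))       # chi2(1 df) survival function
    return stat, p

# Hypothetical example: two strata (say, two physician specialties);
# rows = ML output with vs. without explanation, cols = trusted vs. not.
strata = [((10, 5), (4, 11)), ((8, 7), (5, 10))]
stat, p = cmh_test(strata)
print(f"CMH chi-square = {stat:.3f}, p = {p:.4f}")
```

Pooling across strata this way tests a common association while controlling for the stratification variable, which matches the abstract's use of CMH tests to relate understanding, explainability, and trust.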

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills