This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Clinician perspectives on explainability in AI-driven closed-loop neurotechnology
Citations: 4
Authors: 3
Year: 2025
Abstract
Artificial Intelligence (AI) holds promise for advancing the field of neurotechnology and accelerating its clinical translation. AI-driven clinical neurotechnologies leverage the power of non-linear algorithms to analyze complex brain data and enable adaptive, closed-loop neurostimulation. Despite these promises, the integration of AI into clinical practice remains limited, with lack of explainability commonly cited as one main obstacle. This raises the question of whether opacity and lack of explainability also hinder the adoption of AI in closed-loop medical neurotechnologies. We investigated the attitudes, informational needs, and preferences of clinicians regarding AI-driven closed-loop neurotechnologies and explored what forms of explanation they consider necessary for clinical use. We conducted semi-structured expert interviews with twenty clinicians (including neurologists, neurosurgeons, and psychiatrists) from Germany and Switzerland. Using reflexive thematic analysis, we explored their understanding of and expectations for explainability in the context of AI-driven closed-loop neurotechnology systems. Clinicians consistently emphasized the importance of context-sensitive, clinically meaningful forms of explainability, such as understanding what input data were used to train the system and how the output relates to clinically relevant outcomes. By contrast, detailed knowledge of the model's inner architecture or technical mechanics was of limited interest. Several participants specifically called for Explainable AI (XAI) techniques, particularly feature importance and relevance measures, to support their interpretation of system outputs. Our findings suggest that the clinical utility of AI-driven neurotechnologies can be improved by focusing on intuitive, user-centered, and clinically meaningful forms of explainability rather than full algorithmic transparency.
Designing systems that meet these pragmatic needs may help bridge the translational gap between AI development and clinical implementation.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 citations