OpenAlex · Updated hourly · Last updated: 26.03.2026, 18:11

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Clinician perspectives on explainability in AI-driven closed-loop neurotechnology

2025 · 4 citations · Scientific Reports · Open Access

Citations: 4 · Authors: 3 · Year: 2025

Abstract

Artificial Intelligence (AI) holds promise for advancing the field of neurotechnology and accelerating its clinical translation. AI-driven clinical neurotechnologies leverage the power of non-linear algorithms to analyze complex brain data and enable adaptive, closed-loop neurostimulation. Despite these promises, the integration of AI into clinical practice remains limited, with lack of explainability commonly cited as a main obstacle. This raises the question of whether opacity and lack of explainability also hinder the adoption of AI in closed-loop medical neurotechnologies. We investigated the attitudes, informational needs, and preferences of clinicians regarding AI-driven closed-loop neurotechnologies and explored what forms of explanation they consider necessary for clinical use. We conducted semi-structured expert interviews with twenty clinicians (including neurologists, neurosurgeons, and psychiatrists) from Germany and Switzerland. Using reflexive thematic analysis, we explored their understanding of and expectations for explainability in the context of AI-driven closed-loop neurotechnology systems. Clinicians consistently emphasized the importance of context-sensitive, clinically meaningful forms of explainability, such as understanding what input data were used to train the system and how the output relates to clinically relevant outcomes. By contrast, detailed knowledge of the model's inner architecture or technical mechanics was of limited interest. Several participants specifically called for Explainable AI (XAI) techniques, particularly feature importance and relevance measures, to support their interpretation of system outputs. Our findings suggest that the clinical utility of AI-driven neurotechnologies can be improved by focusing on intuitive, user-centered, and clinically meaningful forms of explainability rather than full algorithmic transparency. Designing systems that meet these pragmatic needs may help bridge the translational gap between AI development and clinical implementation.


Topics

Artificial Intelligence in Healthcare and Education · EEG and Brain-Computer Interfaces · Functional Brain Connectivity Studies