OpenAlex · Updated hourly · Last updated: 09.05.2026, 00:15

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Real-World Validation of Arkangel AI, a Conversational Agent for Real-Time Evidence-Based Medical Question Answering: Randomized Controlled Trial (Preprint)

2026 · 0 citations · Open Access
Open full text at publisher

0 citations · 6 authors · Year: 2026

Abstract

<sec> <title>BACKGROUND</title> The volume of biomedical evidence makes it difficult for physicians to access up-to-date information quickly during routine practice. Large language models (LLMs) have shown promise for clinical support, but most evaluations use multiple-choice or simulated settings and do not assess real-world use by practicing clinicians. Evidence-traceable tools that combine LLMs with real-time retrieval of curated sources could support clinical question-answering; external validation of such tools in routine practice is lacking. </sec> <sec> <title>OBJECTIVE</title> To evaluate the effect of Arkangel AI use on response time and the validity of physicians’ answers to open-ended clinical questions, compared with traditional search methods without artificial intelligence support. </sec> <sec> <title>METHODS</title> Physicians were randomly assigned to two study groups—Group A (Arkangel AI–assisted search) and Group B (traditional search methods). A total of 202 physicians initiated the study, and 71 completed all responses and were included in the final analysis. Each participant solved four clinical cases, each comprising four open-ended questions. Responses were evaluated by clinical specialists blinded to group assignment using six predefined validity criteria. The association between Arkangel AI use and response validity was assessed using multivariable logistic regression, adjusting for academic and sociodemographic characteristics. </sec> <sec> <title>RESULTS</title> Physicians who used Arkangel AI had higher validity scores than those using traditional search. For total validity, the median was 2.83 (IQR 2.52–3.00) in Group A and 2.46 (IQR 2.21–2.67) in Group B (median difference 0.38; 95% CI 0.17–0.54; Mann-Whitney U test, P&lt;.001). The effect size was large (Cliff delta 0.59; 95% CI 0.34–0.80), with a 79% superiority probability for Group A. 
In the multivariable model, the association between Arkangel AI use and higher response validity showed a positive trend (adjusted OR 2.42; 95% CI 0.82–7.16) but did not reach statistical significance (P=.11). Response times were comparable between groups, with no significant difference in time per question or number of searches. </sec> <sec> <title>CONCLUSIONS</title> LLM-assisted clinical search with Arkangel AI was associated with higher response validity and comparable response times in this sample of practicing physicians. The findings support the potential role of evidence-based conversational agents as decision-support tools in medical education and clinical practice and justify further studies with larger samples. </sec> <sec> <title>CLINICALTRIAL</title> N/A </sec>
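The RESULTS section reports a Cliff's delta of 0.59 and a 79% superiority probability for Group A. As a minimal sketch of how these two quantities relate (not the authors' code; function names and the example scores below are hypothetical): Cliff's delta compares every cross-group pair of scores, and the superiority probability is a simple rescaling of it.

```python
# Illustrative sketch, assuming independent per-physician validity scores.
from itertools import product

def cliffs_delta(a, b):
    """Cliff's delta: P(a > b) - P(a < b) over all cross-group pairs."""
    greater = sum(1 for x, y in product(a, b) if x > y)
    less = sum(1 for x, y in product(a, b) if x < y)
    return (greater - less) / (len(a) * len(b))

def superiority_probability(a, b):
    """P(a > b) + 0.5 * P(a = b), equivalent to (delta + 1) / 2."""
    return (cliffs_delta(a, b) + 1) / 2

# Hypothetical validity scores for two small groups:
group_a = [2.9, 2.8, 3.0, 2.5]
group_b = [2.4, 2.5, 2.2, 2.7]
print(cliffs_delta(group_a, group_b))          # delta on these toy data
print(superiority_probability(group_a, group_b))
```

A delta of 0.59 therefore corresponds to a superiority probability of (0.59 + 1) / 2 ≈ 0.79, matching the 79% figure in the abstract.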



Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Digital Mental Health Interventions