OpenAlex · Updated hourly · Last updated: 28 Mar 2026, 04:06

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Multi-AI Agent Framework for Interactive Neurosurgical Education and Evaluation: From Vignettes to Virtual Conversations

2026 · 0 citations · Neurosurgery · Open Access

Citations: 0 · Authors: 21 · Year: 2026

Abstract

BACKGROUND AND OBJECTIVES: Traditional medical board examinations present clinical information in static vignettes with multiple-choice (MC) questions, fundamentally different from how physicians gather and integrate data in practice. Recent advances in large language models (LLMs) offer promising approaches to creating more realistic, interactive clinical conversations. However, these approaches are limited in neurosurgery, where patients' capacity to communicate varies significantly and diagnosis relies heavily on objective data such as imaging and neurological examinations. We aimed to develop and evaluate a multi–artificial intelligence (AI) agent conversation framework for neurosurgical case assessment that enables realistic clinical interactions through simulated patients and structured access to objective clinical data.

METHODS: We developed a framework to convert 608 Self-Assessment in Neurological Surgery first-order diagnosis questions into conversation sessions using 3 specialized AI agents: a patient AI for subjective information, a system AI for objective data, and a clinical AI for diagnostic reasoning. We evaluated generative pretrained transformer 4o's (GPT-4o's) diagnostic accuracy across traditional vignettes, patient-only conversations, and patient + system AI interactions, with human benchmark testing by 10 neurosurgery residents.

RESULTS: GPT-4o showed significant performance drops from traditional vignettes to conversational formats in both MC (89.0% to 60.9%, P < .0001) and free-response scenarios (78.4% to 30.3%, P < .0001). Adding access to objective data through the system AI improved performance (to 67.4%, P = .0015; and 61.8%, P < .0001, respectively). Questions requiring image interpretation showed similar patterns but lower accuracy. Residents outperformed GPT-4o in free-response conversations (70.0% vs 28.3%, P = .0030) while using fewer interactions, and they reported high educational value of the interactive format.

CONCLUSION: This multi-AI agent framework provides both a more challenging evaluation method for LLMs and an engaging educational tool for neurosurgical training. The significant performance drops in conversational formats suggest that traditional MC testing may overestimate LLMs' clinical reasoning capabilities, while the framework's interactive nature offers promising applications for enhancing medical education.
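The abstract names the three agent roles but gives no implementation details. The sketch below is only an illustration of how such a session loop could be orchestrated: it assumes the OpenAI Python client with GPT-4o backing every agent, and the role prompts, the PATIENT:/SYSTEM:/DIAGNOSIS: routing prefixes, and the stop condition are hypothetical conventions introduced here, not the authors' code.

```python
# Illustrative three-agent case session (not the published framework):
# a clinical AI interviews a simulated patient AI and queries a system AI
# for objective data until it commits to a free-response diagnosis.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"

# Hypothetical role prompts for the agents described in the abstract.
PATIENT_PROMPT = (
    "You are a simulated neurosurgical patient. Answer only with subjective "
    "information (symptoms, history) contained in this case: {case}. "
    "Do not reveal the diagnosis."
)
SYSTEM_PROMPT = (
    "You are a hospital information system. Return only objective data "
    "(imaging findings, neurological examination, labs) from this case: {case}."
)
CLINICAL_PROMPT = (
    "You are the examinee. Interview the patient and request objective data to "
    "reach a diagnosis. Prefix each request with PATIENT: or SYSTEM:. "
    "When confident, answer with DIAGNOSIS: <your diagnosis>."
)


def ask(system_prompt: str, transcript: list[dict]) -> str:
    """Single chat-completion call with an agent-specific system prompt."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system_prompt}, *transcript],
    )
    return resp.choices[0].message.content.strip()


def run_session(case: str, max_turns: int = 10) -> str:
    """Alternate clinical-AI queries with patient/system replies until a diagnosis."""
    transcript = [{"role": "user", "content": "Begin the encounter."}]
    for _ in range(max_turns):
        query = ask(CLINICAL_PROMPT, transcript)
        transcript.append({"role": "assistant", "content": query})
        if query.startswith("DIAGNOSIS:"):
            return query  # final free-response answer to be scored
        # Route the query: requests for objective data go to the system AI,
        # everything else is treated as a question for the patient AI.
        responder = SYSTEM_PROMPT if query.startswith("SYSTEM:") else PATIENT_PROMPT
        reply = ask(responder.format(case=case), [{"role": "user", "content": query}])
        transcript.append({"role": "user", "content": reply})
    return "DIAGNOSIS: undetermined"
```

In a setup like this, the number of loop iterations before a DIAGNOSIS: line would correspond to the interaction count compared between GPT-4o and residents, and withholding the system AI branch would reproduce the patient-only condition.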
