This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Evaluating AI Versus Examiner Feedback in Ophthalmology Exit Examinations: A Pilot Study
Citations: 0 · Authors: 4 · Year: 2025
Abstract
Objectives: This pilot study aims to compare the quality of feedback provided by generative artificial intelligence (AI) and an official Royal College Examiner for simulated clinical and communication scenarios designed to prepare candidates for the Royal College of Ophthalmologists Part 2 Oral Examination.

Design: Using GPT-3.5 and GPT-4 (OpenAI, San Francisco, CA, USA), an interactive web-based platform was created that can simulate both patient and examiner roles in oral examination scenarios while simultaneously providing feedback on a candidate's performance. Feedback was provided solely by GPT-4 in combination with prompting techniques. A standardised patient enacted five clinical and communication scenarios, each assessed by both the AI and a Royal College of Ophthalmologists Examiner. The transcripts from these sessions were thematically analysed using NVivo software (Lumivero, Burlington, MA, USA) to compare the quality and content of the feedback from both sources.

Main outcome measures: The similarities and differences in the content and structure of feedback provided by AI (Examiner A) and a Royal College Examiner (Examiner B) in the context of preparing candidates for the Fellowship of the Royal College of Ophthalmologists (FRCOphth) Part 2 Oral Examination.

Results: While both Examiner A and Examiner B provided feedback on similar themes, such as empathy, communication clarity and systematic clinical reasoning, their approaches differed. Examiner A's feedback was more structured and often referenced specific frameworks, offering detailed, protocol-driven guidance. In contrast, Examiner B's feedback was more practical and context-specific, focusing on real-world applications and providing nuanced insights shaped by experiential knowledge.
Conclusion: The findings suggest that generative AI has the potential to complement traditional oral exam preparation by providing easily accessible, structured and scalable feedback, which could give candidates an early foundation for learning. It may be particularly useful for those unfamiliar with the specific requirements of Royal College examinations.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations