This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Comparing performances of French orthopaedic surgery residents with the artificial intelligence ChatGPT-4/4o in the French diploma exams of orthopaedic and trauma surgery
10
Citations
5
Authors
2024
Year
Abstract
INTRODUCTION: This study evaluates the performance of ChatGPT, specifically versions 4 and 4o, in answering questions from the French orthopedic and trauma surgery exam (Diplôme d'Études Spécialisées, DES), compared with the results of French orthopedic surgery residents. Previous research has examined ChatGPT's capabilities across various medical specialties and exams with mixed results, especially for the interpretation of complex radiological images.

HYPOTHESIS: ChatGPT version 4o can achieve a score equal to or higher than that of residents on the DES exam.

METHODS: The response capabilities of ChatGPT versions 4 and 4o were evaluated and compared with residents' results on 250 questions taken from the DES exams from 2020 to 2024. A secondary analysis examined differences in the AI's performance according to the type of data analyzed (text or images) and the topic of the questions.

RESULTS: ChatGPT-4o's score was equivalent to that of residents over the past five years: 74.8% for ChatGPT-4o vs. 70.8% for residents (p = 0.32). The accuracy of the latest version 4o was significantly higher than that of version 4 (58.8%; p = 0.0001). Secondary subgroup analysis revealed a deficiency of the AI in analyzing graphical images (success rates of 48% and 65% for ChatGPT-4 and 4o, respectively). ChatGPT-4o outperformed version 4 on questions involving the spine, pediatrics, and the lower limb.

CONCLUSION: The performance of ChatGPT-4o is equivalent to that of French residents in answering DES questions in orthopedic and trauma surgery. Significant progress was observed between versions 4 and 4o. Questions involving iconography remain a notable challenge for the current versions of ChatGPT, which tends to perform worse on them than on questions requiring only text analysis.
LEVEL OF EVIDENCE: IV; Retrospective Observational Study.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations