This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Reshaping medical education: Performance of ChatGPT on a PES medical examination
47
Citations
6
Authors
2023
Year
Abstract
BACKGROUND: We are currently experiencing a third digital revolution driven by artificial intelligence (AI), and the emergence of the chat generative pre-trained transformer (ChatGPT) represents a significant technological advancement with profound implications for global society, especially in education. METHODS: The aim of this study was to assess how well ChatGPT performs on medical specialization examinations and to highlight how it might change medical education and practice. OpenAI's ChatGPT (OpenAI, San Francisco; GPT-4 May 24 Version) was tested against a major Polish medical specialization licensing exam (PES). The GPT-4 version used in this study was the most up-to-date model at the time of publication. ChatGPT answered the exam questions between June 28 and June 30, 2023. RESULTS: ChatGPT demonstrates notable advances of natural language processing models on medical question-answering tasks. In June 2023, its performance was assessed on a set of 120 questions, on which it achieved a correct response rate of 67.1%, accurately answering 80 questions. CONCLUSIONS: ChatGPT may be used as an assistance tool in medical education. While it can serve as a valuable aid, it cannot fully replace human expertise and knowledge due to its inherent limitations.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,693 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,598 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,124 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,871 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations