OpenAlex · Updated hourly · Last updated: 27.03.2026, 13:26

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Performance of GPT-4o and DeepSeek-R1 in the Polish Infectious Diseases Specialty Exam

2025 · 9 citations · 14 authors · Cureus · Open Access
Abstract

Background The past few years have been a time of rapid development in artificial intelligence (AI) and its implementation across numerous fields. This study aimed to compare the performance of GPT-4o (OpenAI, San Francisco, CA, USA) and DeepSeek-R1 (DeepSeek AI, Zhejiang, China) on the Polish specialty examination in infectious diseases. Materials and methods The study was conducted from April 1 to April 4, 2025, using the Autumn 2024 Polish specialty examination in infectious diseases. The examination comprised 120 questions, each presenting five answer options with only one correct choice. The Center for Medical Education (CEM) in Łódź, Poland, withdrew one question due to the absence of a definitive correct answer and inconsistency with up-to-date clinical guidelines. Furthermore, the questions were classified as either 'clinical cases' or 'other' to enable a more in-depth evaluation of the potential of artificial intelligence in real-world clinical practice. The accuracy of the responses was verified using the official answer key approved by the CEM. To assess the accuracy and confidence level of the responses provided by GPT-4o and DeepSeek-R1, statistical methods were employed, including Pearson's χ<sup>2</sup> test and the Mann-Whitney U test. Results GPT-4o correctly answered 85 out of 119 questions (71.43%), while DeepSeek-R1 correctly answered 88 out of 119 questions (73.95%). A minimum of 72 (60.5%) correct responses is required to pass the examination. No statistically significant difference was observed between responses to 'clinical case' questions and 'other' questions for either AI model. For both AI models, a statistically significant difference was observed in the confidence levels between correct and incorrect answers, with higher confidence reported for correctly answered questions and lower confidence for incorrectly answered ones.
Conclusions Both GPT-4o and DeepSeek-R1 demonstrated the ability to pass the Polish specialty examination in infectious diseases, suggesting their potential as educational tools. Additionally, it is noteworthy that DeepSeek-R1 achieved a performance comparable to GPT-4o, despite being a much newer model on the market and, according to available data, having been developed at significantly lower cost.
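The statistical comparisons described in the abstract can be sketched in Python with `scipy.stats`. This is a minimal illustration, not the authors' analysis code: the correct/incorrect counts below are the scores reported in the abstract (85 and 88 of 119 scored questions), while the confidence ratings are invented placeholder values, since the paper's confidence data are not given here.

```python
# Hedged sketch of the two tests named in the abstract, on partly hypothetical data.
from scipy.stats import chi2_contingency, mannwhitneyu

# 2x2 contingency table of model x outcome, from the reported scores
# (85/119 correct for GPT-4o, 88/119 for DeepSeek-R1).
gpt4o = [85, 119 - 85]        # [correct, incorrect]
deepseek = [88, 119 - 88]

# Pearson's chi-squared test: do the two models differ in accuracy?
chi2, p_acc, dof, expected = chi2_contingency([gpt4o, deepseek])
print(f"chi2 = {chi2:.3f}, p = {p_acc:.3f}")

# Mann-Whitney U test: is self-reported confidence higher on correct answers?
# The 1-10 ratings below are illustrative, not the study's data.
conf_correct = [9, 8, 9, 10, 8, 9, 7, 9]
conf_incorrect = [6, 5, 7, 4, 6, 5]
u, p_conf = mannwhitneyu(conf_correct, conf_incorrect, alternative="greater")
print(f"U = {u:.1f}, p = {p_conf:.4f}")
```

With the reported scores, the 3-question gap between the models yields a large chi-squared p-value (i.e., no significant accuracy difference), mirroring the paper's finding that DeepSeek-R1's performance was comparable to GPT-4o's.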
