OpenAlex · Updated hourly · Last updated: 31.03.2026, 16:33

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Performance evaluation of generative pre-trained transformer on the National Veterinary Licensing Examination in Japan

2026 · 0 citations · Scientific Reports · Open Access
Open full text at the publisher

0

Citations

12

Authors

2026

Year

Abstract

Generative Pre-trained Transformer (GPT) models, which are large language models based on the transformer architecture, have enabled natural-language interaction with humans. GPT models have achieved high scores on the National Medical Licensing Examinations of various countries when the questions were translated. However, their performance on the National Veterinary Licensing Examination (NVLE) in Japan has not yet been explored. In this study, we evaluated GPT-4o, o1, and o3 on the 74th (2023) NVLE in Japan to compare the models, prompt designs (normal vs. optimized), and languages (Japanese vs. English). We then validated the best-performing configuration, o3 with Japanese input and the normal prompt, on the 75th (2024) and 76th (2025) NVLE. As a result, o3 with Japanese input and the normal prompt achieved the highest performance on the 74th NVLE, and both o1 and o3 outperformed GPT-4o. Furthermore, the validation tests on the 75th and 76th NVLE showed that o3 exceeded the minimum passing score rate in all sections, achieving an overall score of 92.9%. These findings indicate that recent GPT models can reliably answer the Japanese NVLE without requiring translation or elaborate prompt engineering, highlighting their potential as supportive tools for veterinary education and knowledge assistance in Japan.


Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · AI in Service Interactions