OpenAlex · Updated hourly · Last updated: March 30, 2026, 20:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Benchmarking large language models for medical education: performance on the clinical laboratory technician qualification examination

2026 · 0 citations · Frontiers in Medicine · Open Access
Open full text at the publisher

0 citations · 8 authors · 2026

Abstract

Large language models (LLMs) have found growing applications in medicine, yet their capabilities in clinical laboratory technology remain underexplored. This study evaluates the performance of LLMs on the Chinese Clinical Laboratory Technologist Qualification Examination (CCLTQE) and provides empirical evidence for their application in laboratory medicine. A dataset of 1,600 single-choice questions was constructed from the CCLTQE, covering four sections: clinical laboratory fundamentals, other medical knowledge related to clinical laboratory technology, clinical laboratory specialized knowledge, and clinical laboratory professional practice competence. Twelve LLMs were evaluated, including models from the DeepSeek, GPT, Llama, Qwen, and Gemma series. Qwen3-235B achieved the highest overall accuracy (89.93%), followed by DeepSeek-R1 (89.75%) and QwQ-32B (89.22%). These results show that LLMs optimized for Chinese-language and domain-specific content perform strongly on the CCLTQE, indicating significant potential for AI-assisted education and practice in laboratory medicine.

Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Text Readability and Simplification