This is an overview page with metadata for this scientific work. The full article is available from the publisher.
A Survey on Medical Competence Evaluation Benchmarks for Large Language Models
Citations: 0
Authors: 6
Year: 2026
Abstract
Large language models (LLMs) show considerable potential to transform healthcare through their performance across diverse clinical applications. Given the inherent constraints of LLMs and the critical nature of medical practice, a rigorous and systematic evaluation of their medical competence is imperative. This study presents a comprehensive review of established methodologies and benchmarks for evaluating the medical competence of LLMs, with a thorough analysis of current assessment practices across medical knowledge, clinical practice competence, and ethical-safety considerations. By integrating clinician competency assessment frameworks into LLM evaluation, we propose a structured tri-dimensional framework that systematically organizes existing evaluation approaches according to medical theoretical knowledge, clinical practice ability, and ethical-safety considerations. Furthermore, this research provides critical insights into future developmental trajectories while establishing foundational frameworks and standardization protocols for the integration of LLMs into medical practice.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations