OpenAlex · Updated hourly · Last updated: 2026-03-30, 06:37

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

EYE-Llama, an in-domain large language model for ophthalmology

2025 · 8 citations · iScience · Open Access

8 citations · 9 authors · 2025

Abstract

Training large language models (LLMs) on domain-specific data enhances their performance, yielding more accurate and reliable question-answering (Q&A) systems that support clinical decision-making and patient education. We present EYE-Llama, pretrained on ophthalmology-focused datasets, including PubMed abstracts, textbooks, and online articles, and fine-tuned on diverse Q&A pairs. We evaluated EYE-Llama against Llama 2, Llama 3, Meditron, ChatDoctor, ChatGPT, and several other LLMs. Using BERT (Bidirectional Encoder Representations from Transformers) score, BART (Bidirectional and Auto-Regressive Transformer) score, and BLEU (Bilingual Evaluation Understudy) metrics, EYE-Llama achieved superior scores. On the MedMCQA benchmark, it outperformed Llama 2, Meditron, and ChatDoctor. On PubMedQA, it achieved 0.96 accuracy, surpassing all models tested. These results demonstrate that domain-specific pretraining and fine-tuning significantly improve medical Q&A performance and underscore the value of specialized models such as EYE-Llama.
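To make the BLEU metric named in the abstract concrete, here is a minimal pure-Python sketch of sentence-level BLEU with a brevity penalty. This is illustrative only: the example sentences, the choice of `max_n=2`, and the lack of smoothing are assumptions, not details taken from the paper's evaluation pipeline.

```python
# Minimal sentence-level BLEU sketch (illustrative; not the paper's actual
# evaluation code). Computes modified n-gram precision up to max_n and
# applies a brevity penalty for short candidates.
from collections import Counter
import math

def ngrams(tokens, n):
    """Count all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(reference, candidate, max_n=2):
    """Sentence-level BLEU over 1..max_n-grams with a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        ref = ngrams(reference, n)
        # Clipped overlap: each candidate n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = sum(cand.values())
        if total == 0 or overlap == 0:
            return 0.0
        precisions.append(overlap / total)
    # Brevity penalty discourages overly short candidate answers.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(
        1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "prostaglandin analogs lower intraocular pressure".split()
print(round(bleu(ref, ref), 3))      # identical answer scores 1.0
print(bleu(ref, "the eye".split()))  # answer with no overlap scores 0.0
```

In practice, evaluations like the one described above use library implementations with smoothing, but the clipped-precision and brevity-penalty structure is the same.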


Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Biomedical Text Mining and Ontologies