This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Large language models for error detection in radiology reports: a comparative analysis between closed-source and privacy-compliant open-source models
22
Citations
13
Authors
2025
Year
Abstract
Question: Can privacy-compliant open-source large language models (LLMs) match the error-detection performance of commercial, non-privacy-compliant closed-source models in radiology reports?

Findings: Closed-source LLMs achieved slightly higher accuracy in detecting radiology report errors than open-source models, with Llama 3-70b yielding the best results among the open-source models.

Clinical relevance: Open-source LLMs offer a privacy-compliant alternative for automated error detection in radiology reports, improving clinical workflow efficiency while ensuring patient data confidentiality. Further refinement could enhance their accuracy, contributing to better diagnosis and patient care.
Related works
Refinement and reassessment of the SERVQUAL scale.
1991 · 3,966 citations
Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review
2005 · 3,775 citations
Radiobiology for the Radiologist.
1974 · 3,501 citations
International evidence-based recommendations for point-of-care lung ultrasound
2012 · 2,815 citations
Radiation Dose Associated With Common Computed Tomography Examinations and the Associated Lifetime Attributable Risk of Cancer
2009 · 2,431 citations