
This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Grounded report generation for enhancing ophthalmic ultrasound interpretation using Vision-Language Segmentation models

2026 · 0 citations · npj Digital Medicine · Open Access

Citations: 0 · Authors: 10 · Year: 2026

Abstract

Accurate interpretation of ophthalmic ultrasound is crucial for diagnosing eye conditions but remains time-consuming and requires significant expertise. With the increasing volume of ultrasound data, there is a need for Artificial Intelligence (AI) systems capable of efficiently analyzing images and generating reports. Traditional AI models for report generation cannot identify lesions at the same time and lack interpretability. This study proposes the Vision-Language Segmentation (VLS) model, which combines a Vision-Language Model (VLM) with the Segment Anything Model (SAM) to improve interpretability in ophthalmic ultrasound imaging. Using data from three hospitals, totaling 64,098 images and 21,355 reports, the VLS model achieved a BLEU-4 score of 66.37 on the internal test set, and 85.36 and 73.77 on the external test sets. The model achieved a mean Dice coefficient of 59.6% on the internal test set, and Dice coefficients of 50.2% and 51.5% with specificity values of 97.8% and 97.7% on the external test sets, respectively. Overall diagnostic accuracy was 90.59% on the internal and 71.87% on the external test sets. A cost-effectiveness analysis demonstrated a 30-fold reduction in report costs, from $39 per report by senior ophthalmologists to $1.30 with the VLS model. This approach enhances diagnostic accuracy, reduces manual effort, and accelerates workflows, offering a promising solution for ophthalmic ultrasound interpretation.
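
For context, the Dice coefficient and specificity cited above are standard segmentation metrics. The sketch below (illustrative only, not the authors' code; the binary-mask inputs and function names are assumptions) shows how these metrics, and the abstract's 30-fold cost figure, are conventionally computed in Python:

import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    # Dice = 2 * |P intersect T| / (|P| + |T|) over binary masks
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def specificity(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    # Specificity = TN / (TN + FP) over binary masks
    pred, target = pred.astype(bool), target.astype(bool)
    tn = np.logical_and(~pred, ~target).sum()
    fp = np.logical_and(pred, ~target).sum()
    return (tn + eps) / (tn + fp + eps)

# Toy 2x2 masks: predicted lesion vs. ground-truth lesion
pred = np.array([[1, 0], [1, 0]])
target = np.array([[1, 0], [0, 0]])
print(f"Dice: {dice_coefficient(pred, target):.3f}")    # -> 0.667
print(f"Specificity: {specificity(pred, target):.3f}")  # -> 0.667

# Cost arithmetic from the abstract: $39 per report vs. $1.30 per report
print(f"Cost reduction: {39 / 1.30:.0f}-fold")          # -> 30-fold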
