OpenAlex · Updated hourly · Last updated: 27.03.2026, 12:31

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Clinically aligned multi-modal image-text model for pan-cancer prognosis prediction

2025 · 0 citations · BioData Mining · Open Access

0 citations · 13 authors · Year: 2025

Abstract

Cancer prognosis prediction increasingly leverages multi-modal learning, yet existing approaches often rely on omics data that are costly and difficult to collect in routine practice. Pathology reports, by contrast, are routinely generated and widely available, providing a practical textual modality complementary to histology. We present CALM (Clinical Anchor text guided Learning for Multi-modal prognosis prediction), a general framework that integrates pathology images and reports through risk-specific anchor texts. CALM systematically incorporates prior clinical knowledge by guiding image and text alignment with large language model–derived anchors, further refined via few-shot tuning. Across 14 TCGA cancer types, CALM improved prognostic accuracy compared to image and text baselines (up to +11.5% mean C-index), with enhanced training stability. CALM achieved performance comparable to image–omics integration in PORPOISE (0.652 vs. 0.644), highlighting the prognostic value of text. External validation in an independent head and neck cancer cohort demonstrated that CALM improved generalization in zero-shot settings. Attention-based interpretation further confirmed that CALM aligns diagnostic text with tumor regions. Together, CALM offers a simple, interpretable, and clinically viable strategy for prognosis prediction, expanding the role of routine pathology reports as scalable priors in multi-modal oncology AI.
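The abstract reports results as the mean C-index (concordance index), the standard metric for ranking-based survival prediction. As background, here is a minimal sketch of Harrell's C-index for right-censored data; the function name and signature are illustrative, and this is not the paper's evaluation code.

```python
def c_index(times, events, risks):
    """Harrell's concordance index.

    times:  observed follow-up times
    events: 1 if the event (death) was observed, 0 if censored
    risks:  model-predicted risk scores (higher = worse prognosis)

    A pair (i, j) is comparable when subject i had an observed event
    strictly before subject j's time; it is concordant when the model
    assigns i the higher risk. Ties in risk count as half-concordant.
    """
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect concordance, so the reported 0.652 vs. 0.644 comparison with PORPOISE is on this scale.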

Similar works