OpenAlex · Updated hourly · Last updated: 29.03.2026, 01:12

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

ACFormer: A Multimodal Attention and Contrastive Learning Framework for Chest Disease Risk Prediction

2025 · 0 citations

Citations: 0 · Authors: 2 · Year: 2025

Abstract

With the development of medical artificial intelligence, the application of multimodal data fusion in disease prediction has received increasing attention. However, most existing disease prediction methods rely on single-modality data, such as medical imaging or clinical text, which makes it difficult to fully exploit cross-modal associations, thereby limiting prediction accuracy. To address this limitation, we construct a chest disease risk prediction model that integrates medical imaging and clinical text. The model adopts a dual-tower architecture to independently encode image and text features and employs contrastive learning to optimize cross-modal semantic alignment. The overall framework consists of a medical image encoder, a clinical data encoder, a multimodal contrastive learning module, and a fusion-based prediction module. By combining intra-modal self-attention and cross-modal attention through bidirectional attention interaction, the model enhances semantic consistency between image and text, thereby improving classification accuracy. The proposed method demonstrates outstanding performance in chest disease prediction, particularly in detecting subtle lesions and assessing multi-label disease risks. This study offers new insights into the application of medical AI for clinical decision support.
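
The abstract describes a dual-tower design: separate image and text encoders, contrastive learning to align the two embedding spaces, and bidirectional attention (intra-modal self-attention plus cross-modal attention) feeding a fusion-based multi-label prediction head. The PyTorch sketch below illustrates that general pattern only; the class name DualTowerChestModel, the layer sizes, the CLIP-style symmetric InfoNCE loss, and the mean-pooled fusion head are assumptions made for illustration and are not taken from the paper, whose exact ACFormer architecture is not detailed on this page.

# Minimal sketch of a dual-tower image/text model with contrastive
# alignment and bidirectional cross-attention fusion (illustrative only;
# module names, sizes, and losses are assumptions, not ACFormer's spec).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualTowerChestModel(nn.Module):  # hypothetical name
    def __init__(self, img_dim=2048, txt_dim=768, dim=512, num_labels=14):
        super().__init__()
        # Project each modality's backbone features into a shared space.
        self.img_proj = nn.Linear(img_dim, dim)
        self.txt_proj = nn.Linear(txt_dim, dim)
        # Intra-modal self-attention and bidirectional cross-modal attention.
        self.img_self_attn = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.txt_self_attn = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.img2txt_attn = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.txt2img_attn = nn.MultiheadAttention(dim, 8, batch_first=True)
        # Fusion head for multi-label risk prediction (one logit per disease).
        self.classifier = nn.Linear(2 * dim, num_labels)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~ln(1/0.07)

    def forward(self, img_tokens, txt_tokens):
        # img_tokens: (B, Ni, img_dim) patch features; txt_tokens: (B, Nt, txt_dim).
        v = self.img_proj(img_tokens)
        t = self.txt_proj(txt_tokens)
        # Intra-modal self-attention within each tower.
        v, _ = self.img_self_attn(v, v, v)
        t, _ = self.txt_self_attn(t, t, t)
        # Bidirectional cross-modal attention: each modality attends to the other.
        v2t, _ = self.img2txt_attn(v, t, t)   # image queries, text keys/values
        t2v, _ = self.txt2img_attn(t, v, v)   # text queries, image keys/values
        # Pool to one vector per modality and fuse for classification.
        v_vec, t_vec = v2t.mean(dim=1), t2v.mean(dim=1)
        logits = self.classifier(torch.cat([v_vec, t_vec], dim=-1))
        return logits, F.normalize(v_vec, dim=-1), F.normalize(t_vec, dim=-1)

def contrastive_loss(v_emb, t_emb, logit_scale):
    # Symmetric InfoNCE: matched image/text pairs are positives,
    # all other pairs in the batch serve as negatives.
    sims = logit_scale.exp() * v_emb @ t_emb.t()
    targets = torch.arange(sims.size(0), device=sims.device)
    return (F.cross_entropy(sims, targets) + F.cross_entropy(sims.t(), targets)) / 2

# Usage sketch: multi-label BCE for disease risks plus the alignment term.
model = DualTowerChestModel()
img = torch.randn(4, 49, 2048)    # e.g. CNN patch features
txt = torch.randn(4, 32, 768)     # e.g. clinical-text token embeddings
labels = torch.randint(0, 2, (4, 14)).float()
logits, v_emb, t_emb = model(img, txt)
loss = F.binary_cross_entropy_with_logits(logits, labels) \
       + contrastive_loss(v_emb, t_emb, model.logit_scale)

In a setup of this kind, the contrastive term pulls matched image/report pairs together in the shared embedding space while the BCE term drives the multi-label disease-risk prediction; how ACFormer actually structures or weights these objectives is only described in the full article.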




Topics

COVID-19 Diagnosis Using AI · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education