OpenAlex · Updated hourly · Last updated: 29.03.2026, 17:47

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

MMAN: Multi-Task and Multi-Scale Attention Network for Concurrently Lower Limbs Segmentation and Landmark Detection

2023 · 2 citations

Citations: 2 · Authors: 7 · Year: 2023

Abstract

Accurate bone segmentation and anatomical landmark detection are vital for the clinical evaluation and treatment planning of patients based on lower-limb X-ray films. To leverage the shared information between the two tasks and to handle large-scale images, we propose an efficient end-to-end deep network, the multi-task and multi-scale attention network (MMAN), which concurrently segments lower-limb bones and localizes landmarks in large-scale X-ray films in a single stage. The results demonstrate that our MMAN outperforms other state-of-the-art methods, both multi-task approaches and single-task landmark detection using two separate stages. Our MMAN method has two main technical contributions. First, local and global encoders are designed to capture multi-scale inputs and provide shared representations comprising local image details and global context, respectively. Second, a global-local attention module is designed to efficiently leverage the global context and learn task-specific information from the shared representations at limited computational cost.
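The abstract describes a dual-encoder architecture with attention-based fusion feeding two task heads. A minimal PyTorch sketch of that general idea is shown below; it is not the authors' implementation, and all class names, layer sizes, and the channel-attention fusion rule are illustrative assumptions.

```python
# Hypothetical sketch of a dual-encoder multi-task network; not the paper's MMAN.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalLocalAttention(nn.Module):
    """Fuse coarse global context with fine local features via channel attention (assumed design)."""

    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 1), nn.ReLU(),
            nn.Conv2d(channels, channels, 1), nn.Sigmoid(),
        )

    def forward(self, local_feat, global_feat):
        # Upsample the global context to the local resolution, derive channel weights.
        g = F.interpolate(global_feat, size=local_feat.shape[2:],
                          mode="bilinear", align_corners=False)
        w = self.fc(self.pool(torch.cat([local_feat, g], dim=1)))
        return local_feat * w + g


def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU())


class DualEncoderSketch(nn.Module):
    """Two encoders (full-resolution local, downsampled global), fused, then two task heads."""

    def __init__(self, n_classes=4, n_landmarks=10, ch=16):
        super().__init__()
        self.local_enc = conv_block(1, ch)        # fine image details
        self.global_enc = conv_block(1, ch)       # coarse whole-image context
        self.attn = GlobalLocalAttention(ch)
        self.seg_head = nn.Conv2d(ch, n_classes, 1)    # per-pixel bone labels
        self.lm_head = nn.Conv2d(ch, n_landmarks, 1)   # per-landmark heatmaps

    def forward(self, x):
        local = self.local_enc(x)
        # The global branch sees a 4x downsampled view of the same image.
        small = F.interpolate(x, scale_factor=0.25,
                              mode="bilinear", align_corners=False)
        fused = self.attn(local, self.global_enc(small))
        return self.seg_head(fused), self.lm_head(fused)


net = DualEncoderSketch()
seg, heatmaps = net(torch.randn(1, 1, 64, 64))
print(seg.shape, heatmaps.shape)  # torch.Size([1, 4, 64, 64]) torch.Size([1, 10, 64, 64])
```

Sharing both encoders across the segmentation and landmark heads is what makes this a single-stage multi-task setup; only the 1×1 output heads are task-specific.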

Similar works