OpenAlex · Updated hourly · Last update: 30.03.2026, 14:53

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Swin Transformer–based intelligent fracture classification and radiographic assessment to support clinical decision-making

2026 · 0 citations · 6 authors · BMC Medical Informatics and Decision Making · Open Access

Abstract

Accurate assessment of fracture healing is essential for optimizing clinical decision-making in orthopedics, yet traditional radiographic evaluation often suffers from subjectivity and limited predictive value. The present study assesses the applicability of deep learning models for automated fracture-type classification on X-ray images and explores their potential contribution to imaging-based evaluation of fracture healing. The goal was to develop an efficient and accurate classification framework capable of supporting clinical diagnosis. A total of 1,175 X-ray images covering various fracture types (transverse, oblique, spiral) and anatomical locations (humerus, radius, femur, tibia, phalanges) were obtained from public datasets (FracAtlas and Roboflow Universe). Images were preprocessed to a uniform resolution of 256 × 256 pixels. Three models, Residual Network (ResNet), Vision Transformer (ViT), and Swin Transformer, were trained using an 8:1:1 training/validation/test split. Model performance was evaluated through accuracy, parameter size, computational complexity measured in FLOPs, and convergence characteristics. Gradient-Weighted Class Activation Mapping (Grad-CAM) was used to visualize decision regions. Conditional GAN–based augmentation was applied to improve minority-class representation, and its effects were analyzed using both an independent test set and a stratified five-fold paired evaluation. The Swin Transformer achieved the best performance in automated fracture classification, with a test accuracy of 91.27%, a parameter size of 29 MB, and a computational complexity of 4.5 GFLOPs. Compared with the other models, it demonstrated faster convergence and lower loss values. ResNet and ViT achieved accuracies of 87.53% and 89.74%, respectively, with relatively higher complexity and computational costs. Grad-CAM visualization further showed that the Swin Transformer effectively captured both global and local imaging features, thereby improving fracture identification accuracy. With GAN-based data augmentation, the overall accuracy on an independent 10% test set increased from 89.24% to 91.52%, although the McNemar paired test did not reach statistical significance (p > 0.05). Under a stratified five-fold cross-validation setting, paired t-tests revealed statistically significant improvements in macro-averaged recall, F1 score, and AUC (all p < 0.05), with corresponding 95% confidence intervals and effect sizes (Cohen's dz). Deep learning models, particularly the Swin Transformer, show strong potential for automated fracture-type classification and imaging-based evaluation using single time-point X-ray images, offering meaningful auxiliary support for clinical diagnosis and fracture-healing assessment. The findings reflect performance in fracture-type recognition rather than dynamic prediction of healing progression, as only cross-sectional X-ray images were analyzed. Future research incorporating longitudinal imaging and multi-center clinical cohorts will be essential for evaluating model performance in true fracture-healing prediction scenarios.
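To make the stratified five-fold paired evaluation mentioned in the abstract concrete, the sketch below shows how fold-wise paired t-tests, 95% confidence intervals of the mean difference, and Cohen's dz effect sizes can be computed from per-fold scores. This is a minimal illustration under assumed inputs, not the authors' code: the variable names and the per-fold macro-F1 values are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold macro-F1 scores from the same five stratified folds,
# one value per fold for the baseline model and the GAN-augmented model.
baseline_f1 = np.array([0.881, 0.874, 0.889, 0.879, 0.885])
augmented_f1 = np.array([0.902, 0.895, 0.907, 0.898, 0.904])

diff = augmented_f1 - baseline_f1

# Paired t-test across folds (same folds, two training conditions).
t_stat, p_value = stats.ttest_rel(augmented_f1, baseline_f1)

# Cohen's dz for paired samples: mean difference divided by the SD of the differences.
cohens_dz = diff.mean() / diff.std(ddof=1)

# 95% confidence interval of the mean difference (t distribution, n - 1 degrees of freedom).
n = len(diff)
sem = diff.std(ddof=1) / np.sqrt(n)
ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=diff.mean(), scale=sem)

print(f"paired t = {t_stat:.3f}, p = {p_value:.4f}")
print(f"Cohen's dz = {cohens_dz:.2f}, 95% CI of mean difference [{ci_low:.4f}, {ci_high:.4f}]")
```

The same per-fold pairing can be reused for macro-averaged recall and AUC; the McNemar test reported for the independent test set instead compares paired per-image correctness of the two models.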
