This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The Hidden Adversarial Vulnerabilities of Medical Federated Learning
Citations: 1 · Authors: 4 · Year: 2023
Abstract
In this paper, we delve into the susceptibility of federated medical image analysis systems to adversarial attacks. Our analysis uncovers a novel exploitation avenue: using gradient information from prior global model updates, adversaries can enhance the efficiency and transferability of their attacks. Specifically, we demonstrate that single-step attacks (e.g., FGSM), when aptly initialized, can rival or exceed the effectiveness of their iterative counterparts while demanding less computation. Our findings underscore the need to revisit our understanding of AI security in federated healthcare settings.
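The abstract's core idea, a single-step FGSM attack warm-started from a stale gradient of a prior global round, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact method: the toy logistic-regression model, the `warm_start_fgsm` function, and the `warm_frac` split of the perturbation budget are all assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(x, y, w, b):
    # Gradient of the binary cross-entropy loss w.r.t. the input x
    # for a logistic-regression model: d(loss)/dx = (p - y) * w.
    p = sigmoid(x @ w + b)
    return (p - y) * w

def warm_start_fgsm(x, y, w, b, eps, prev_round_grad=None, warm_frac=0.5):
    """One-step FGSM, optionally warm-started with a stale gradient.

    `prev_round_grad` stands in for gradient information an adversary
    could retain from an earlier global model update (hypothetical API).
    """
    x0 = x.copy()
    step = eps
    if prev_round_grad is not None:
        # Spend part of the eps-budget moving along the stale gradient's
        # sign before taking the single fresh FGSM step.
        x0 = x + warm_frac * eps * np.sign(prev_round_grad)
        step = (1.0 - warm_frac) * eps
    g = input_gradient(x0, y, w, b)
    x_adv = x0 + step * np.sign(g)
    # Project back into the eps-ball around the original input.
    return np.clip(x_adv, x - eps, x + eps)
```

The warm start reuses computation the adversary already has, so the fresh attack still costs only one gradient evaluation, which is the efficiency argument the abstract makes for single-step attacks.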
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,437 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,584 citations
CBAM: Convolutional Block Attention Module
2018 · 21,477 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,361 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,544 citations