This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Exploring the Effect of Adversarial Attacks on Deep Learning Architectures for X-Ray Data
Citations: 4
Authors: 3
Year: 2022
Abstract
As artificial intelligence models continue to grow in capacity and sophistication, they are often trusted with very sensitive information. In the sub-field of adversarial machine learning, developments are geared solely towards finding reliable methods to systematically erode the ability of artificial intelligence systems to perform as intended. These techniques can cause serious breaches of security, interruptions to major systems, and irreversible damage to consumers. Our research evaluates the effects of various white-box adversarial machine learning attacks on popular computer vision deep learning models, leveraging a public X-ray dataset from the National Institutes of Health (NIH). We conduct several experiments to gauge the feasibility of developing deep learning models that are robust to adversarial machine learning attacks, taking into account different defense strategies, such as adversarial training, to observe how adversarial attacks evolve over time. Our research details how a variety of white-box attacks affect different components of InceptionNet, DenseNet, and ResNeXt, and suggests how the models can effectively defend against these attacks.
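The abstract does not specify which white-box attacks were evaluated, but a widely used example of the class is the Fast Gradient Sign Method (FGSM), which perturbs an input in the direction of the sign of the loss gradient. The following is a minimal sketch of FGSM on a toy logistic-regression "model" (all weights and inputs below are hypothetical), not the paper's actual attack or architectures:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """Return an adversarial example x + eps * sign(grad_x loss).

    Uses the closed-form gradient of the binary cross-entropy loss
    of a logistic-regression model with weights w at input x.
    """
    p = sigmoid(w @ x)       # model's predicted probability of class 1
    grad_x = (p - y) * w     # gradient of the loss with respect to x
    return x + eps * np.sign(grad_x)

# Toy example: a 4-"pixel" input correctly classified as class 1.
w = np.array([1.0, -2.0, 0.5, 3.0])   # hypothetical trained weights
x = np.array([0.2, -0.1, 0.4, 0.3])
y = 1

x_adv = fgsm_perturb(x, y, w, eps=0.25)

print(sigmoid(w @ x) > 0.5)      # clean input: classified as class 1
print(sigmoid(w @ x_adv) > 0.5)  # small perturbation flips the label
```

Against deep networks such as those studied here, the same idea applies with the input gradient obtained by backpropagation; adversarial training then augments the training set with such perturbed examples.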
Related Work
Rethinking the Inception Architecture for Computer Vision
2016 · 30,382 cit.
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,480 cit.
CBAM: Convolutional Block Attention Module
2018 · 21,383 cit.
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,323 cit.
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,516 cit.