This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
A study of the effect of JPG compression on adversarial images
Citations: 255
Authors: 3
Year: 2016
Abstract
Neural network image classifiers are known to be vulnerable to adversarial images, i.e., natural images which have been modified by an adversarial perturbation specifically designed to be imperceptible to humans yet fool the classifier. Not only can adversarial images be generated easily, but these images will often be adversarial for networks trained on disjoint subsets of data or with different architectures. Adversarial images represent a potential security risk as well as a serious machine learning challenge: it is clear that vulnerable neural networks perceive images very differently from humans. Noting that virtually every image classification data set is composed of JPG images, we evaluate the effect of JPG compression on the classification of adversarial images. For Fast-Gradient-Sign perturbations of small magnitude, we found that JPG compression often reverses the drop in classification accuracy to a large extent, but not always. As the magnitude of the perturbations increases, JPG recompression alone is insufficient to reverse the effect.
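The Fast-Gradient-Sign perturbation the abstract refers to moves each pixel by a fixed step `eps` in the direction of the sign of the loss gradient. The following is a minimal pure-Python sketch of that update rule; the pixel values, gradient values, and `eps` below are illustrative toy numbers, not data from the paper, and a real attack would obtain the gradients from a trained classifier.

```python
def sign(v):
    # Sign function: -1, 0, or +1
    return (v > 0) - (v < 0)

def fgsm_perturb(pixels, grads, eps):
    """Apply x_adv = clip(x + eps * sign(grad)), keeping pixels in [0, 1].

    pixels: flat list of pixel intensities in [0, 1] (toy stand-in for an image)
    grads:  loss gradient w.r.t. each pixel (illustrative values here)
    eps:    perturbation magnitude; small eps keeps the change imperceptible
    """
    return [min(1.0, max(0.0, x + eps * sign(g)))
            for x, g in zip(pixels, grads)]

# Toy example: each pixel moves by at most eps in the gradient-sign direction.
adv = fgsm_perturb([0.2, 0.5, 0.9], [0.03, -0.7, 0.0], eps=0.05)
```

The paper's defense then simply re-encodes the perturbed image as JPG (e.g. with an image library's JPEG save routine) before classification, relying on lossy compression to remove part of the small-magnitude perturbation.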
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,531 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,710 citations
CBAM: Convolutional Block Attention Module
2018 · 21,610 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,409 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,604 citations