This is an overview page with metadata for this scientific work. The full article is available from the publisher.
General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing
Citations: 0 · Authors: 4 · Year: 2023
Abstract
Randomized smoothing is the state-of-the-art approach to constructing image classifiers that are provably robust against additive adversarial perturbations of bounded magnitude. However, it is more complicated to construct reasonable certificates against semantic transformations (e.g., image blurring, translation, gamma correction) and their compositions. In this work, we propose *General Lipschitz (GL)*, a new framework to certify neural networks against composable resolvable semantic perturbations. Within the framework, we analyze the transformation-dependent Lipschitz continuity of smoothed classifiers w.r.t. transformation parameters and derive the corresponding robustness certificates. Our method performs comparably to state-of-the-art approaches on the ImageNet dataset.
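The randomized smoothing construction the abstract builds on replaces a base classifier f with a smoothed classifier g(x) = argmax_c P(f(x + ε) = c), ε ~ N(0, σ²I), whose prediction is estimated by a Monte-Carlo majority vote over noisy copies of the input. A minimal sketch under assumed toy names (`smoothed_predict`, the base classifier `f`, and the noise level `sigma` are illustrative, not from the paper):

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n=1000, rng=None):
    """Monte-Carlo estimate of the smoothed classifier
    g(x) = argmax_c P(f(x + eps) = c), with eps ~ N(0, sigma^2 I).
    `f` maps an input array to an integer class id."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    # Majority vote over the base classifier's labels on noisy copies.
    votes = np.bincount([f(x + eps) for eps in noise])
    return int(np.argmax(votes))

# Toy base classifier: class 1 if the mean pixel exceeds 0.5, else class 0.
f = lambda z: int(z.mean() > 0.5)
x = np.full(4, 0.8)
print(smoothed_predict(f, x))  # prints 1: the noisy majority agrees with f(x)
```

The paper's contribution is to extend this additive-noise setting to resolvable semantic transformations (blur, translation, gamma correction) by smoothing over transformation parameters instead of pixel noise; the sketch above only illustrates the baseline pixel-space construction.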
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,630 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,874 citations
CBAM: Convolutional Block Attention Module
2018 · 21,738 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,477 citations
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18,654 citations