This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Feature Attributions and Counterfactual Explanations Can Be Manipulated
Citations: 4
Authors: 4
Year: 2021
Abstract
As machine learning models are increasingly used in critical decision-making settings (e.g., healthcare, finance), there has been a growing emphasis on developing methods to explain model predictions. Such explanations are used to understand and establish trust in models and are vital components in machine learning pipelines. Though explanations are a critical piece in these systems, there is little understanding about how they are vulnerable to manipulation by adversaries. In this paper, we discuss how two broad classes of explanations are vulnerable to manipulation. We demonstrate how adversaries can design biased models that manipulate model agnostic feature attribution methods (e.g., LIME & SHAP) and counterfactual explanations that hill-climb during the counterfactual search (e.g., Wachter's Algorithm & DiCE) into concealing the model's biases. These vulnerabilities allow an adversary to deploy a biased model, yet explanations will not reveal this bias, thereby deceiving stakeholders into trusting the model. We evaluate the manipulations on real world data sets, including COMPAS and Communities & Crime, and find explanations can be manipulated in practice.
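For context, one way an adversary can hide bias from perturbation-based attribution methods such as LIME and SHAP is a "scaffolded" classifier: it behaves in a biased way on real, on-manifold inputs but routes the off-manifold samples these explainers synthesize to an innocuous surrogate model. The sketch below is illustrative only and not taken from the paper; the class name, the Gaussian perturbation scheme, and the random-forest out-of-distribution detector are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class AdversarialScaffold:
    """Hypothetical sketch of a bias-concealing model (not the paper's exact construction).

    biased_model    -- uses a sensitive feature for real inputs
    innocuous_model -- ignores the sensitive feature; answers explainer perturbations
    """

    def __init__(self, biased_model, innocuous_model, X_real):
        self.biased_model = biased_model
        self.innocuous_model = innocuous_model
        # Train a detector to separate real data from synthetic perturbations
        # (here: simple Gaussian noise, standing in for LIME/SHAP-style samples).
        X_perturbed = X_real + np.random.normal(0.0, 1.0, X_real.shape)
        X = np.vstack([X_real, X_perturbed])
        y = np.concatenate([np.ones(len(X_real)), np.zeros(len(X_perturbed))])
        self.ood_detector = RandomForestClassifier().fit(X, y)

    def predict(self, X):
        # On-manifold points get the biased prediction; off-manifold points
        # (the explainer's synthetic samples) get the innocuous one, so
        # attributions computed from those samples do not surface the bias.
        is_real = self.ood_detector.predict(X).astype(bool)
        out = np.empty(len(X), dtype=int)
        if is_real.any():
            out[is_real] = self.biased_model.predict(X[is_real])
        if (~is_real).any():
            out[~is_real] = self.innocuous_model.predict(X[~is_real])
        return out
```

Explaining such a model with LIME or SHAP would mostly query the innocuous branch, which is why the resulting attributions can fail to reveal the sensitive feature's influence.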