This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Exploiting Unintended Feature Leakage in Collaborative Learning
Citations: 87
Authors: 4
Year: 2019
Abstract
Collaborative machine learning and related techniques such as federated learning allow multiple participants, each with his own training dataset, to build a joint model by training locally and periodically exchanging model updates. We demonstrate that these updates leak unintended information about participants' training data and develop passive and active inference attacks to exploit this leakage. First, we show that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data (i.e., membership inference). Then, we show how this adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture. For example, he can infer when a specific person first appears in the photos used to train a binary gender classifier. We evaluate our attacks on a variety of tasks, datasets, and learning configurations, analyze their limitations, and discuss possible defenses.
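The update exchange described in the abstract can be illustrated with a minimal federated-averaging sketch (not taken from the paper; model, data, and learning rate are toy placeholders), showing the per-participant updates whose leakage the attacks exploit:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    """One local gradient step on a linear model with squared loss."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Each participant holds its own private training dataset.
datasets = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
weights = np.zeros(3)

for _ in range(10):
    # Participants train locally and send their updated weights.
    updates = [local_update(weights, X, y) for X, y in datasets]
    # The server averages the updates into the joint model. The difference
    # between each participant's update and the previous joint weights is
    # the per-participant signal that the paper shows can leak membership
    # and property information about that participant's data.
    weights = np.mean(updates, axis=0)
```

An adversarial participant or server observes these exchanged updates directly, which is the vantage point assumed by the paper's passive attacks.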
Related Work
Rethinking the Inception Architecture for Computer Vision
2016 · 30,382 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,480 citations
CBAM: Convolutional Block Attention Module
2018 · 21,383 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,323 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,516 citations