This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Federated Regularization Learning: an Accurate and Safe Method for Federated Learning
Citations: 16
Authors: 3
Year: 2021
Abstract
Distributed machine learning (ML) and related techniques such as federated learning face a high risk of information leakage. Differential privacy (DP) is commonly used to protect privacy. However, it suffers from low accuracy due to the unbalanced data distribution in federated learning and the additional noise introduced by DP itself. In this paper, we propose a novel federated learning model that protects data privacy from gradient leakage attacks and black-box membership inference attacks (MIA). The proposed protection scheme makes the data hard to reproduce and hard to distinguish from predictions. A small simulated attacker network is embedded as a regularization penalty to defend against malicious attacks. We further introduce a gradient modification method to secure the weight information and remedy the additional accuracy loss. The proposed privacy protection scheme is evaluated on MNIST and CIFAR-10 and compared with state-of-the-art DP-based federated learning models. Experimental results demonstrate that our model successfully defends user-level privacy against diverse external attacks with negligible accuracy loss.
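The abstract describes embedding a small simulated attacker network as a regularization penalty on the training objective. The paper's exact formulation is not reproduced on this page; the sketch below only illustrates the general idea of an adversarial regularizer, where the client's combined loss rewards a high loss for the simulated attacker. All function names, the λ weighting, and the loss shapes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true labels.
    eps = 1e-12
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def regularized_loss(task_probs, task_labels,
                     attacker_probs, attacker_labels, lam=0.1):
    # Hypothetical combined objective: minimize the task loss while
    # *maximizing* the simulated attacker's loss (hence the minus sign),
    # so the attacker term acts as a privacy "punishment" regularizer.
    l_task = cross_entropy(task_probs, task_labels)
    l_attacker = cross_entropy(attacker_probs, attacker_labels)
    return l_task - lam * l_attacker
```

With λ = 0 this reduces to ordinary task training; larger λ trades task accuracy for making the embedded attacker's predictions less reliable.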
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,402 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,895 citations
Deep Learning with Differential Privacy
2016 · 5,629 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,595 citations
Federated Machine Learning
2019 · 5,581 citations