This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Protecting Machine Learning Models from Training Data Set Extraction
Citations: 0
Authors: 3
Year: 2024
Abstract
The problem of protecting machine learning models from data privacy violations via membership inference attacks on their training data sets is considered. A method of protective noising of the training set is proposed. It is experimentally shown that Gaussian noising of the training data with a scale of 0.2 is the simplest and most effective way to protect machine learning models from membership inference on the training set. Compared with alternatives, this method is easy to implement, is universal with respect to model types, and reduces the effectiveness of membership inference by 26 percentage points.
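The protective noising described in the abstract can be sketched as follows. This is a minimal illustration only: it assumes "scale 0.2" means zero-mean Gaussian noise with standard deviation 0.2 added to each feature, and the function name `add_gaussian_noise` is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def add_gaussian_noise(X, scale=0.2, seed=None):
    """Add zero-mean Gaussian noise with standard deviation `scale`
    to every entry of the training set X (assumed interpretation of
    the paper's noising scale; the authors' exact procedure may differ)."""
    rng = np.random.default_rng(seed)
    return X + rng.normal(loc=0.0, scale=scale, size=X.shape)

# Example: noise a small training matrix before fitting a model on it.
X_train = np.array([[0.1, 0.5], [0.9, 0.3]])
X_noisy = add_gaussian_noise(X_train, scale=0.2, seed=0)
```

A model trained on `X_noisy` instead of `X_train` memorizes less about individual records, which is what weakens membership inference.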
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,402 cit.
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,895 cit.
Deep Learning with Differential Privacy
2016 · 5,629 cit.
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,595 cit.
Federated Machine Learning
2019 · 5,581 cit.