This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Comprehensive Empirical Study on Privacy-Utility Trade-Offs in Deep Learning Architectures
Citations: 0 · Authors: 3 · Year: 2025
Abstract
The use of deep learning in applications with sensitive data (e.g., medical images and personal photos) has made privacy a top concern. Differential Privacy (DP) provides a formal framework for training models with strong, quantifiable privacy guarantees, but these guarantees typically come at the cost of model utility. In this paper, we present a systematic empirical study of the privacy-utility trade-off across several recent deep learning architectures. We evaluate five models (EfficientNet-B0, InceptionNet, MobileNetV2, ResNet18, and the Vision Transformer (ViT)) on two datasets: the general-purpose CIFAR-10 dataset and a medical imaging dataset, OCTMNIST. We train each model with Differentially Private Stochastic Gradient Descent (DP-SGD) across multiple privacy budget values. Overall, our findings show that the trade-off depends strongly on both the chosen model and the properties of the dataset. On CIFAR-10 we observe severe performance loss, with InceptionNet and MobileNetV2 failing to learn under DP, whereas on OCTMNIST the models are more robust, with EfficientNet-B0 and InceptionNet retaining substantial utility even under strong privacy guarantees. We also find that the noise injected by DP training can act as a regularizer, helping some models generalize better than with vanilla training. These findings show that there is no single "best" architecture for private learning, and we stress the importance of selecting a model based on the specific application domain and privacy requirements.
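The DP-SGD mechanism the abstract refers to combines per-example gradient clipping with calibrated Gaussian noise (as introduced in "Deep Learning with Differential Privacy", listed under related works). A minimal plain-Python sketch of one update step is shown below; the clipping norm, noise multiplier, and learning rate here are illustrative assumptions, not the paper's actual settings, and gradients are represented as simple lists of floats.

```python
import math
import random

def clip_gradient(grad, max_norm):
    """Scale a per-example gradient so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [g * scale for g in grad]

def dp_sgd_step(params, per_example_grads, lr=0.1, max_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD update: clip each example's gradient, sum the clipped
    gradients, add Gaussian noise scaled to the clipping norm, then apply
    the averaged noisy gradient."""
    rng = rng or random.Random(0)
    n = len(per_example_grads)
    clipped = [clip_gradient(g, max_norm) for g in per_example_grads]
    summed = [sum(g[i] for g in clipped) for i in range(len(params))]
    noisy = [s + rng.gauss(0.0, noise_multiplier * max_norm) for s in summed]
    return [p - lr * (g / n) for p, g in zip(params, noisy)]
```

In practice, studies like this one typically rely on a library such as Opacus (for PyTorch) to handle per-example gradients and privacy accounting; the sketch only illustrates the clip-then-noise mechanism itself.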
Related Works
k-Anonymity: A Model for Protecting Privacy
2002 · 8,404 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,901 citations
Deep Learning with Differential Privacy
2016 · 5,634 citations
Federated Machine Learning
2019 · 5,604 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,595 citations