OpenAlex · Updated hourly · Last updated: 03.04.2026, 10:35

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Comprehensive Empirical Study on Privacy-Utility Trade-Offs in Deep Learning Architectures

2025 · 0 citations
Open full text at the publisher

Citations: 0

Authors: 3

Year: 2025

Abstract

The use of deep learning algorithms in applications with sensitive data (e.g. medical images and personal photos) has made privacy a top concern. Differential Privacy (DP) provides a formal approach to training models with strong, quantifiable privacy guarantees. Unfortunately, the downside of DP is often reduced model utility. In this paper, we provide a systematic empirical study of the privacy-utility trade-off across several recent deep learning architectures. We evaluate five models: EfficientNet B0, InceptionNet, MobileNetV2, ResNet18, and Vision Transformer (ViT), on two datasets: the general-purpose CIFAR-10 dataset and a medical imaging dataset, OCTMNIST. We trained each model with Differentially Private Stochastic Gradient Descent (DP-SGD) across multiple privacy budget values. Overall, our findings show that the trade-off depends strongly on both the chosen model and the properties of the dataset. On CIFAR-10, we observed severe performance loss, with Inception and MobileNetV2 failing to learn under DP conditions, while on OCTMNIST the models were more robust, and EfficientNet B0 and InceptionNet retained much of their utility under strong privacy guarantees. We also found that the noise introduced by DP training can sometimes act as a regularizer, helping certain models generalize better than with vanilla training. These findings show that there is no single "best" architecture for private learning, and we stress the importance of selecting a model based on the specific application domain and privacy requirements.
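The DP-SGD procedure the abstract refers to aggregates per-example gradients by clipping each one to a fixed L2 norm and adding Gaussian noise before the parameter update. A minimal sketch of that aggregation step in NumPy is below; the clipping norm and noise multiplier are illustrative defaults, not values from the paper:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD aggregation step (sketch).

    Each example's gradient is clipped to L2 norm <= clip_norm, the
    clipped gradients are summed, Gaussian noise with standard
    deviation noise_multiplier * clip_norm is added, and the result
    is averaged over the batch.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Example: three per-example gradients for a two-parameter model.
grads = [np.array([3.0, 4.0]), np.array([0.1, -0.2]), np.array([10.0, 0.0])]
update = dp_sgd_step(grads)
```

Smaller privacy budgets correspond to larger noise multipliers, which is the mechanism behind the utility loss (and occasional regularization benefit) discussed in the abstract.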

Topics

- Privacy-Preserving Technologies in Data
- Adversarial Robustness in Machine Learning
- Artificial Intelligence in Healthcare and Education