On the utility and protection of optimization with differential privacy
and classic regularization techniques
- URL: http://arxiv.org/abs/2209.03175v1
- Date: Wed, 7 Sep 2022 14:10:21 GMT
- Title: On the utility and protection of optimization with differential privacy
and classic regularization techniques
- Authors: Eugenio Lomurno, Matteo Matteucci
- Abstract summary: We study the effectiveness of the differentially-private stochastic gradient descent (DP-SGD) algorithm against standard optimization practices with regularization techniques.
We discuss differential privacy's flaws and limits and empirically demonstrate the often superior privacy-preserving properties of dropout and l2-regularization.
- Score: 9.413131350284083
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Nowadays, owners and developers of deep learning models must consider
stringent privacy-preservation rules for their training data, which is usually
crowd-sourced and retains sensitive information. The most widely adopted method
to enforce privacy guarantees of a deep learning model relies on optimization
techniques enforcing differential privacy. According to the literature, this
approach has proven to be a successful defence against several attacks on
models' privacy, but its downside is a substantial degradation of the
models' performance. In this work, we compare the effectiveness of the
differentially-private stochastic gradient descent (DP-SGD) algorithm against
standard optimization practices with regularization techniques. We analyze the
resulting models' utility, training performance, and the effectiveness of
membership inference and model inversion attacks against the learned models.
Finally, we discuss differential privacy's flaws and limits and empirically
demonstrate the often superior privacy-preserving properties of dropout and
l2-regularization.
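
To make the comparison concrete, the sketch below contrasts the two training styles discussed in the abstract: a standard optimizer with the classic regularizers considered in the paper (dropout and L2 weight decay) versus a hand-rolled DP-SGD step with per-example gradient clipping and Gaussian noise. This is a minimal sketch assuming PyTorch; the architecture, clipping norm C, noise multiplier sigma, and learning rate are illustrative placeholders, not the authors' experimental configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(p_drop: float = 0.0) -> nn.Module:
    """Small classifier; the dropout probability is a placeholder choice."""
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                         nn.Dropout(p_drop), nn.Linear(256, 10))

# Baseline: classic regularization, i.e. dropout plus L2 weight decay.
reg_model = make_model(p_drop=0.5)
reg_opt = torch.optim.SGD(reg_model.parameters(), lr=0.1, weight_decay=1e-4)

def standard_step(x, y):
    reg_opt.zero_grad()
    F.cross_entropy(reg_model(x), y).backward()
    reg_opt.step()

# DP-SGD: clip each per-example gradient to norm C, then add Gaussian noise
# scaled by the noise multiplier sigma (both values are placeholders).
dp_model = make_model(p_drop=0.0)
C, sigma, lr = 1.0, 1.1, 0.1

def dp_sgd_step(x, y):
    params = [p for p in dp_model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for xi, yi in zip(x, y):                        # per-example gradients
        loss = F.cross_entropy(dp_model(xi.unsqueeze(0)), yi.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (C / (norm + 1e-12)).clamp(max=1.0)  # clip to norm C
        for s, g in zip(summed, grads):
            s += g * scale
    with torch.no_grad():
        for p, s in zip(params, summed):
            noisy = (s + sigma * C * torch.randn_like(s)) / len(x)
            p -= lr * noisy                          # noisy SGD update
```

In practice one would typically use a dedicated library such as Opacus to perform the clipping efficiently and to track the resulting (epsilon, delta) budget; the loop above only spells out the mechanism being compared against plain regularized training.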
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application or combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Too Good to be True? Turn Any Model Differentially Private With DP-Weights [0.0]
We introduce a groundbreaking approach that applies differential privacy noise to the model's weights after training.
We offer a comprehensive mathematical proof for this novel approach's privacy bounds.
We empirically evaluate its effectiveness using membership inference attacks and performance evaluations.
arXiv Detail & Related papers (2024-06-27T19:58:11Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Sparsity-Preserving Differentially Private Training of Large Embedding Models [67.29926605156788]
DP-SGD is a training algorithm that combines differential privacy with gradient descent.
Applying DP-SGD naively to embedding models can destroy gradient sparsity, leading to reduced training efficiency.
We present two new algorithms, DP-FEST and DP-AdaFEST, that preserve gradient sparsity during private training of large embedding models.
arXiv Detail & Related papers (2023-11-14T17:59:51Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- Discriminative Adversarial Privacy: Balancing Accuracy and Membership Privacy in Neural Networks [7.0895962209555465]
Discriminative Adversarial Privacy (DAP) is a learning technique designed to achieve a balance between model performance, speed, and privacy.
DAP relies on adversarial training based on a novel loss function able to minimise the prediction error while maximising the membership inference attack's (MIA) error.
In addition, we introduce a novel metric named Accuracy Over Privacy (AOP) to capture the performance-privacy trade-off.
arXiv Detail & Related papers (2023-06-05T17:25:45Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- One-shot Empirical Privacy Estimation for Federated Learning [43.317478030880956]
"One-shot" approach allows efficient auditing or estimation of the privacy loss of a model during the same, single training run used to fit model parameters.
We show that our method provides provably correct estimates for the privacy loss under the Gaussian mechanism.
arXiv Detail & Related papers (2023-02-06T19:58:28Z)
- Enforcing Privacy in Distributed Learning with Performance Guarantees [57.14673504239551]
We study the privatization of distributed learning and optimization strategies.
We show that the popular additive random perturbation scheme degrades performance because it is not well-tuned to the graph structure.
arXiv Detail & Related papers (2023-01-16T13:03:27Z)
- PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning [1.8692254863855962]
We propose a new framework for data synthesis using deep generative models in a differentially private manner.
Within our framework, sensitive data are sanitized with rigorous privacy guarantees in a one-shot fashion.
Our proposal has theoretical guarantees of performance, and empirical evaluations on multiple datasets show that our approach outperforms other methods at reasonable levels of privacy.
arXiv Detail & Related papers (2021-06-08T18:00:01Z)
- Tempered Sigmoid Activations for Deep Learning with Differential Privacy [33.574715000662316]
We show that the choice of activation function is central to bounding the sensitivity of privacy-preserving deep learning.
We achieve new state-of-the-art accuracy on MNIST, FashionMNIST, and CIFAR10 without any modification of the learning procedure fundamentals.
arXiv Detail & Related papers (2020-07-28T13:19:45Z)
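
As a concrete note on the last entry above, the tempered sigmoid family described in that paper replaces unbounded activations such as ReLU with bounded ones of the form phi(x) = s * sigmoid(T * x) - o, where tanh is the special case s = 2, T = 2, o = 1. Below is a minimal sketch assuming PyTorch; the default parameter values are illustrative rather than the tuned settings reported in the paper.

```python
import torch
import torch.nn as nn

class TemperedSigmoid(nn.Module):
    """Bounded activation phi(x) = s * sigmoid(T * x) - o.

    With s=2, T=2, o=1 this reduces exactly to tanh; other settings
    trade off the output range (-o, s - o) against the slope at zero.
    """
    def __init__(self, s: float = 2.0, T: float = 2.0, o: float = 1.0):
        super().__init__()
        self.s, self.T, self.o = s, T, o

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.s * torch.sigmoid(self.T * x) - self.o

# Drop-in replacement for ReLU in a network trained with DP-SGD: the bounded
# output keeps activation (and hence gradient) magnitudes under control, so
# less information is lost to gradient clipping and noise addition.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256),
                      TemperedSigmoid(), nn.Linear(256, 10))
```

Because the activation is element-wise and has no trainable parameters, swapping it in leaves the DP-SGD training loop and privacy accounting unchanged.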