Learning to be adversarially robust and differentially private
- URL: http://arxiv.org/abs/2201.02265v1
- Date: Thu, 6 Jan 2022 22:33:06 GMT
- Title: Learning to be adversarially robust and differentially private
- Authors: Jamie Hayes, Borja Balle, M. Pawan Kumar
- Abstract summary: We study the difficulties in learning that arise from robust and differentially private optimization.
The data-dimensionality-dependent term introduced by private optimization compounds the difficulties of learning a robust model.
The size of the adversarial perturbation and the clipping norm in differential privacy both increase the curvature of the loss landscape, implying poorer generalization performance.
- Score: 42.7930886063265
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the difficulties in learning that arise from robust and
differentially private optimization. We first study convergence of gradient
descent based adversarial training with differential privacy, taking a simple
binary classification task on linearly separable data as an illustrative
example. We compare the gap between adversarial and nominal risk in both
private and non-private settings, showing that the data dimensionality
dependent term introduced by private optimization compounds the difficulties of
learning a robust model. After this, we discuss what parts of adversarial
training and differential privacy hurt optimization, identifying that the size
of adversarial perturbation and clipping norm in differential privacy both
increase the curvature of the loss landscape, implying poorer generalization
performance.
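To make the interplay concrete, the following is a minimal sketch of adversarial training combined with DP-SGD-style privatization for a linear binary classifier on synthetic linearly separable data, in the spirit of the illustrative setting above. The closed-form inner maximization and all hyperparameters (perturbation radius, clipping norm, noise multiplier, learning rate) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable binary classification data, labels in {-1, +1}.
d, n = 20, 500
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)

def per_example_grads(w, X, y):
    """Per-example gradients of the logistic loss log(1 + exp(-y * <w, x>))."""
    margins = y * (X @ w)
    coeffs = -y / (1.0 + np.exp(margins))
    return coeffs[:, None] * X  # shape (n, d)

# Hyperparameters (illustrative, not taken from the paper).
eps_adv = 0.1   # adversarial perturbation radius (l2)
clip_C = 1.0    # per-example gradient clipping norm
sigma = 1.0     # DP noise multiplier
lr = 0.5
steps = 200

w = np.zeros(d)
for _ in range(steps):
    # Inner maximization: for a linear model, the worst-case l2 perturbation
    # of radius eps_adv moves each point against its label along w.
    w_norm = np.linalg.norm(w) + 1e-12
    X_adv = X - eps_adv * (y[:, None] * w[None, :] / w_norm)

    # DP-SGD-style step on the adversarial examples: clip per-example
    # gradients, sum, add Gaussian noise calibrated to the clip norm, average.
    g = per_example_grads(w, X_adv, y)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g_clipped = g * np.minimum(1.0, clip_C / (norms + 1e-12))
    noise = rng.normal(scale=sigma * clip_C, size=d)
    g_private = (g_clipped.sum(axis=0) + noise) / len(X)

    w -= lr * g_private

# Nominal vs. adversarial (robust) training accuracy.
nominal_acc = np.mean(np.sign(X @ w) == y)
robust_acc = np.mean(y * (X @ w) - eps_adv * np.linalg.norm(w) > 0)
print(f"nominal acc: {nominal_acc:.3f}, robust acc: {robust_acc:.3f}")
```

The added Gaussian noise has expected norm on the order of sqrt(d) * sigma * clip_C / n, which illustrates the kind of data-dimensionality-dependent term the abstract refers to; growing the perturbation radius or the clipping norm (with noise calibrated to it) makes the privatized updates noisier relative to the useful signal.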
Related papers
- Initialization Matters: Privacy-Utility Analysis of Overparameterized
Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z) - Enforcing Privacy in Distributed Learning with Performance Guarantees [57.14673504239551]
We study the privatization of distributed learning and optimization strategies.
We show that the popular additive random perturbation scheme degrades performance because it is not well-tuned to the graph structure.
arXiv Detail & Related papers (2023-01-16T13:03:27Z) - Differentially private partitioned variational inference [28.96767727430277]
Learning a privacy-preserving model from sensitive data which are distributed across multiple devices is an increasingly important problem.
We present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution.
arXiv Detail & Related papers (2022-09-23T13:58:40Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Mixed Differential Privacy in Computer Vision [133.68363478737058]
AdaMix is an adaptive differentially private algorithm for training deep neural network classifiers using both private and public image data.
A few-shot or even zero-shot learning baseline that ignores private data can outperform fine-tuning on a large private dataset.
arXiv Detail & Related papers (2022-03-22T06:15:43Z) - PEARL: Data Synthesis via Private Embeddings and Adversarial
Reconstruction Learning [1.8692254863855962]
We propose a new framework of data synthesis using deep generative models in a differentially private manner.
Within our framework, sensitive data are sanitized with rigorous privacy guarantees in a one-shot fashion.
Our proposal has theoretical guarantees of performance, and empirical evaluations on multiple datasets show that our approach outperforms other methods at reasonable levels of privacy.
arXiv Detail & Related papers (2021-06-08T18:00:01Z) - Gradient Masking and the Underestimated Robustness Threats of
Differential Privacy in Deep Learning [0.0]
This paper experimentally evaluates the impact of training with Differential Privacy (DP) on model vulnerability against a broad range of adversarial attacks.
The results suggest that private models are less robust than their non-private counterparts, and that adversarial examples transfer better among DP models than between non-private and private ones.
arXiv Detail & Related papers (2021-05-17T16:10:54Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model (a minimal sketch of this step appears after this list).
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Privately Learning Markov Random Fields [44.95321417724914]
We consider the problem of learning Markov Random Fields (including the Ising model) under the constraint of differential privacy.
We provide algorithms and lower bounds for both problems under a variety of privacy constraints.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)
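Several of the robustness entries above isolate gradient clipping and noise addition as the main ingredients of differentially private training. Below is a minimal sketch of that privatization step with each ingredient exposed as an independent knob so it can be ablated; the function name, defaults, and example values are illustrative and not taken from any of the papers listed.

```python
import numpy as np

def privatize_gradients(grads_per_example, clip_norm=None, noise_multiplier=0.0,
                        rng=None):
    """Average per-example gradients with optional clipping and Gaussian noise.

    Setting clip_norm=None or noise_multiplier=0.0 switches the corresponding
    ingredient off, which is the kind of ablation the robustness studies run.
    """
    rng = rng or np.random.default_rng()
    g = np.asarray(grads_per_example, dtype=float)  # shape (batch, dim)
    if clip_norm is not None:
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        g = g * np.minimum(1.0, clip_norm / (norms + 1e-12))
    total = g.sum(axis=0)
    if noise_multiplier > 0.0:
        scale = noise_multiplier * (clip_norm if clip_norm is not None else 1.0)
        total = total + rng.normal(scale=scale, size=total.shape)
    return total / g.shape[0]

# Example: clipping only, noise only, and full DP-SGD-style privatization.
grads = np.random.default_rng(1).normal(size=(32, 10))
clipped_only = privatize_gradients(grads, clip_norm=1.0)
noisy_only = privatize_gradients(grads, noise_multiplier=1.0)
full_dp = privatize_gradients(grads, clip_norm=1.0, noise_multiplier=1.0)
```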