Privacy Amplification of Iterative Algorithms via Contraction
Coefficients
- URL: http://arxiv.org/abs/2001.06546v1
- Date: Fri, 17 Jan 2020 22:06:48 GMT
- Title: Privacy Amplification of Iterative Algorithms via Contraction
Coefficients
- Authors: Shahab Asoodeh, Mario Diaz, and Flavio P. Calmon
- Abstract summary: We investigate the framework of privacy amplification by iteration, recently proposed by Feldman et al., from an information-theoretic lens.
We demonstrate that differential privacy guarantees of iterative mappings can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for $f$-divergences.
- Score: 3.5270468102327004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the framework of privacy amplification by iteration, recently
proposed by Feldman et al., from an information-theoretic lens. We demonstrate
that differential privacy guarantees of iterative mappings can be determined by
a direct application of contraction coefficients derived from strong data
processing inequalities for $f$-divergences. In particular, by generalizing the
Dobrushin's contraction coefficient for total variation distance to an
$f$-divergence known as $E_{\gamma}$-divergence, we derive tighter bounds on
the differential privacy parameters of the projected noisy stochastic gradient
descent algorithm with hidden intermediate updates.
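For reference, the $E_{\gamma}$-divergence mentioned in the abstract (also known as the hockey-stick divergence) has a standard closed form, and its connection to $(\varepsilon,\delta)$-differential privacy is well known. The statements below are textbook formulations, not quoted from the paper:

```latex
% E_gamma-divergence (hockey-stick divergence), for gamma >= 1:
E_{\gamma}(P \,\|\, Q)
  \;=\; \sup_{A} \bigl[ P(A) - \gamma\, Q(A) \bigr]
  \;=\; \int \bigl( \mathrm{d}P - \gamma\, \mathrm{d}Q \bigr)_{+}

% A mechanism M is (epsilon, delta)-DP if and only if, for all
% neighboring datasets D and D':
E_{e^{\varepsilon}}\bigl( M(D) \,\|\, M(D') \bigr) \;\le\; \delta

% For gamma = 1, E_gamma reduces to total variation distance, whose
% contraction coefficient is Dobrushin's coefficient.
```

This is why bounding the contraction of $E_{\gamma}$ under an iterative mapping translates directly into differential privacy parameters.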
Related papers
- Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - Dynamic Privacy Allocation for Locally Differentially Private Federated
Learning with Composite Objectives [10.528569272279999]
This paper proposes a differentially private federated learning algorithm for strongly convex but possibly nonsmooth problems.
The proposed algorithm adds artificial noise to the shared information to ensure privacy and dynamically allocates the time-varying noise variance to minimize an upper bound of the optimization error.
Numerical results show the superiority of the proposed algorithm over state-of-the-art methods.
arXiv Detail & Related papers (2023-08-02T13:30:33Z) - Theoretically Principled Federated Learning for Balancing Privacy and
Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
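The projected noisy SGD iteration at the heart of both the main paper and the DP-SGD summary above follows a common pattern: clip each gradient to bound its sensitivity, add Gaussian noise, step, and project. The sketch below is a minimal illustration of that pattern; the function name, the noise scale `sigma`, and the clipping norm are illustrative choices, not values from either paper.

```python
import numpy as np

def projected_noisy_sgd(grad_fn, project, x0, steps, lr=0.1,
                        clip_norm=1.0, sigma=1.0, seed=0):
    """Projected noisy SGD sketch: clip the gradient to bound its
    sensitivity, add Gaussian noise, take a step, then project back
    onto the feasible set."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_fn(x)
        norm = np.linalg.norm(g)
        if norm > clip_norm:
            g = g * (clip_norm / norm)                 # gradient clipping
        g = g + rng.normal(0.0, sigma * clip_norm,     # Gaussian noise
                           size=g.shape)
        x = project(x - lr * g)                        # projection step
    return x

# Toy usage: minimize ||x||^2 over the unit ball.
project_ball = lambda v: v / max(1.0, np.linalg.norm(v))
x_out = projected_noisy_sgd(lambda x: 2.0 * x, project_ball,
                            np.ones(3), steps=200, sigma=0.1)
```

The projection keeps every iterate in the feasible set, which is exactly the "hidden intermediate updates" setting where privacy amplification by iteration applies.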
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - No-Regret Algorithms for Private Gaussian Process Bandit Optimization [13.660643701487002]
We consider the ubiquitous problem of Gaussian process (GP) bandit optimization from the lens of privacy-preserving statistics.
We propose a solution for differentially private GP bandit optimization that combines a uniform kernel approximator with random perturbations.
Our algorithms maintain differential privacy throughout the optimization procedure and critically do not rely explicitly on the sample path for prediction.
arXiv Detail & Related papers (2021-02-24T18:52:24Z) - On the Differentially Private Nature of Perturbed Gradient Descent [15.554148012395457]
We consider the problem of empirical risk minimization over a database, using the gradient descent algorithm.
A perturbed gradient descent algorithm is typically employed to escape saddle points.
arXiv Detail & Related papers (2021-01-18T02:29:37Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy
Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
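The shuffle model summarized above composes a local randomizer with a trusted shuffler, so the analyzer sees only an unordered multiset of reports. A minimal sketch of that pipeline, using binary randomized response as the local randomizer (the function names and parameters are illustrative, not from the paper):

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    """epsilon-LDP randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise flip it."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_true else 1 - bit

def shuffled_reports(bits, epsilon, seed=0):
    """Each user locally randomizes their bit, then a trusted shuffler
    permutes the reports, hiding which report came from which user."""
    rng = random.Random(seed)
    reports = [randomized_response(b, epsilon, rng) for b in bits]
    rng.shuffle(reports)
    return reports

reports = shuffled_reports([0, 1] * 50, epsilon=1.0)
```

Amplification by shuffling says the central differential privacy of `reports` is substantially stronger than the per-user local guarantee `epsilon`.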
arXiv Detail & Related papers (2020-12-23T17:07:26Z) - Privacy Analysis of Online Learning Algorithms via Contraction
Coefficients [5.333582981327498]
We show that differential privacy guarantees can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for $f$-divergences.
We also show that this framework can be tailored to batch learning algorithms that can be implemented with one pass over the training dataset.
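For reference, the strong data processing inequality behind this contraction-coefficient approach can be stated in its standard textbook form (not quoted from either abstract): for a Markov kernel $K$ and an $f$-divergence $D_f$,

```latex
D_f\bigl( P K \,\|\, Q K \bigr) \;\le\; \eta_f(K)\, D_f\bigl( P \,\|\, Q \bigr),
\qquad
\eta_f(K) \;=\; \sup_{P \neq Q}
  \frac{D_f\bigl( P K \,\|\, Q K \bigr)}{D_f\bigl( P \,\|\, Q \bigr)} .
```

Each noisy update of an iterative algorithm acts as such a kernel, so the divergence between the output distributions on neighboring datasets shrinks by a factor of at most $\eta_f(K)$ per step.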
arXiv Detail & Related papers (2020-12-20T22:02:15Z) - RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial
Network [75.81653258081435]
Generative adversarial networks (GANs) have attracted increasing attention recently owing to their impressive ability to generate realistic samples.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
arXiv Detail & Related papers (2020-07-04T09:51:02Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and
Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
On two popular deep learning tasks, we demonstrate the empirical advantages of our methods over standard gradient methods.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.