Unified Enhancement of Privacy Bounds for Mixture Mechanisms via
$f$-Differential Privacy
- URL: http://arxiv.org/abs/2310.19973v2
- Date: Wed, 1 Nov 2023 14:43:16 GMT
- Title: Unified Enhancement of Privacy Bounds for Mixture Mechanisms via
$f$-Differential Privacy
- Authors: Chendi Wang, Buxin Su, Jiayuan Ye, Reza Shokri, Weijie J. Su
- Abstract summary: This paper focuses on improving privacy bounds for shuffling models and one-iteration differentially private gradient descent.
We derive a closed-form expression of the trade-off function for shuffling models that outperforms the most up-to-date results.
- We also study an $f$-DP analog of the advanced joint convexity of the hockey-stick divergence related to $(\epsilon,\delta)$-DP.
- Score: 41.51051636162107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentially private (DP) machine learning algorithms incur many sources of
randomness, such as random initialization, random batch subsampling, and
shuffling. However, such randomness is difficult to take into account when
proving differential privacy bounds because it induces mixture distributions
for the algorithm's output that are difficult to analyze. This paper focuses on
improving privacy bounds for shuffling models and one-iteration differentially
private gradient descent (DP-GD) with random initializations using $f$-DP. We
derive a closed-form expression of the trade-off function for shuffling models
that outperforms the most up-to-date results based on $(\epsilon,\delta)$-DP.
Moreover, we investigate the effects of random initialization on the privacy of
one-iteration DP-GD. Our numerical computations of the trade-off function
indicate that random initialization can enhance the privacy of DP-GD. Our
analysis of $f$-DP guarantees for these mixture mechanisms relies on an
inequality for trade-off functions introduced in this paper. This inequality
implies the joint convexity of $F$-divergences. Finally, we study an $f$-DP
analog of the advanced joint convexity of the hockey-stick divergence related
to $(\epsilon,\delta)$-DP and apply it to analyze the privacy of mixture
mechanisms.
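As background for the trade-off-function language used in the abstract, here is a minimal sketch (not from the paper) of the Gaussian trade-off function $G_\mu$ from the $f$-DP framework, $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$; the function name is illustrative.

```python
from statistics import NormalDist

_STD_NORMAL = NormalDist()  # standard normal, for Phi and Phi^{-1}

def gaussian_tradeoff(alpha: float, mu: float) -> float:
    """Trade-off function G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu).

    A mechanism satisfies mu-GDP when its trade-off function lies above
    G_mu; a larger value at each alpha (type I error) means the adversary's
    type II error is higher, i.e. privacy is stronger.
    """
    return _STD_NORMAL.cdf(_STD_NORMAL.inv_cdf(1.0 - alpha) - mu)
```

For instance, at $\mu = 0$ (perfect privacy) the curve is $1 - \alpha$, and increasing $\mu$ lowers the whole curve, reflecting weaker privacy.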
Related papers
- Shifted Interpolation for Differential Privacy [6.1836947007564085]
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning.
This paper establishes the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy.
Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization.
arXiv Detail & Related papers (2024-03-01T04:50:04Z)
- A Generalized Shuffle Framework for Privacy Amplification: Strengthening Privacy Guarantees and Enhancing Utility [4.7712438974100255]
We show how to shuffle in the $(\epsilon_i,\delta_i)$-PLDP setting with personalized privacy parameters.
We prove that the shuffled $(\epsilon_i,\delta_i)$-PLDP process approximately preserves $\mu$-Gaussian Differential Privacy with $\mu = \sqrt{\frac{2}{\sum_{i=1}^n \frac{1-\delta_i}{1+e^{\epsilon_i}} - \max_i \frac{1-\delta_i}{1+e^{\epsilon_i}}}}$.
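The $\mu$ quoted in the summary above can be evaluated directly from the personalized parameters; a small sketch (the function name is mine, and the formula follows the expression as quoted):

```python
import math

def shuffled_gdp_mu(eps: list[float], delta: list[float]) -> float:
    """mu for the shuffled (eps_i, delta_i)-PLDP process, following the
    closed form quoted in the summary above (name is illustrative).

    Each client i contributes a term (1 - delta_i) / (1 + e^{eps_i});
    mu = sqrt(2 / (sum of terms - largest term)).
    """
    terms = [(1.0 - d) / (1.0 + math.exp(e)) for e, d in zip(eps, delta)]
    return math.sqrt(2.0 / (sum(terms) - max(terms)))
```

Note that adding more participants shrinks $\mu$, which is the expected privacy-amplification-by-shuffling behavior.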
arXiv Detail & Related papers (2023-12-22T02:31:46Z)
- Tractable MCMC for Private Learning with Pure and Gaussian Differential Privacy [23.12198546384976]
Posterior sampling provides $\varepsilon$-pure differential privacy guarantees.
It does not suffer from the potentially unbounded privacy breach introduced by $(\varepsilon,\delta)$-approximate DP.
In practice, however, one needs to apply approximate sampling methods such as Markov chain Monte Carlo.
arXiv Detail & Related papers (2023-10-23T07:54:39Z)
- The Poisson binomial mechanism for secure and private federated learning [19.399122892615573]
We introduce a discrete differential privacy mechanism for distributed mean estimation (DME) with applications to federated learning and analytics.
We provide a tight analysis of its privacy guarantees, showing that it achieves the same privacy-accuracy trade-offs as the continuous Gaussian mechanism.
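To illustrate the encode/decode shape of such a binomial-based DME scheme, here is a hedged sketch: all names are mine, and the privacy-calibrated restriction of the binomial parameter (which the actual mechanism requires for its DP guarantee) is omitted, so this shows only the unbiased-estimation structure, not the privacy analysis.

```python
import random

M = 128  # per-client binomial trials (illustrative, not privacy-calibrated)

def encode(x: float) -> int:
    """Client: map x in [-1, 1] to p = (1 + x) / 2 and draw Binomial(M, p).

    Summed across clients, the transcript follows a Poisson binomial
    distribution, hence the mechanism's name.
    """
    p = (1.0 + x) / 2.0
    return sum(random.random() < p for _ in range(M))

def decode(total: int, n: int) -> float:
    """Server: unbiased estimate of mean(x_i) from the summed counts."""
    return 2.0 * total / (n * M) - 1.0
```

Because each client transmits a small integer rather than a real number, the scheme is compatible with secure aggregation, which is the point of a discrete mechanism in this setting.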
arXiv Detail & Related papers (2022-07-09T05:46:28Z)
- Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z)
- Nonparametric extensions of randomized response for private confidence sets [51.75485869914048]
This work derives methods for performing nonparametric, nonasymptotic statistical inference for population means under the constraint of local differential privacy (LDP).
We present confidence intervals (CI) and time-uniform confidence sequences (CS) for $\mu^\star$ when only given access to the privatized data.
arXiv Detail & Related papers (2022-02-17T16:04:49Z)
- Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
arXiv Detail & Related papers (2020-12-23T17:07:26Z)
- Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of these algorithms over standard gradient methods on two popular deep learning tasks.
arXiv Detail & Related papers (2020-06-24T06:01:24Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.