Stronger Privacy Amplification by Shuffling for Rényi and Approximate Differential Privacy
- URL: http://arxiv.org/abs/2208.04591v2
- Date: Mon, 30 Oct 2023 20:20:04 GMT
- Title: Stronger Privacy Amplification by Shuffling for Rényi and Approximate Differential Privacy
- Authors: Vitaly Feldman and Audra McMillan and Kunal Talwar
- Abstract summary: A key result in this model is that randomly shuffling locally randomized data amplifies differential privacy guarantees.
Such amplification implies substantially stronger privacy guarantees for systems in which data is contributed anonymously.
In this work, we improve the state-of-the-art privacy amplification by shuffling results both theoretically and numerically.
- Score: 43.33288245778629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The shuffle model of differential privacy has gained significant interest as
an intermediate trust model between the standard local and central models
[EFMRTT19; CSUZZ19]. A key result in this model is that randomly shuffling
locally randomized data amplifies differential privacy guarantees. Such
amplification implies substantially stronger privacy guarantees for systems in
which data is contributed anonymously [BEMMRLRKTS17].
In this work, we improve the state-of-the-art privacy amplification by
shuffling results both theoretically and numerically. Our first contribution is
the first asymptotically optimal analysis of the Rényi differential privacy
parameters for the shuffled outputs of LDP randomizers. Our second contribution
is a new analysis of privacy amplification by shuffling. This analysis improves
on the techniques of [FMT20] and leads to tighter numerical bounds in all
parameter settings.
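As a concrete illustration of the shuffle model analyzed above (an illustrative sketch, not the authors' analysis), the following Python snippet uses k-ary randomized response as the local randomizer, a hypothetical choice made only for concreteness, and then uniformly shuffles the clients' reports. The amplification results above bound the privacy of the shuffled output in terms of the local privacy parameter and the number of clients.

```python
import math
import random

def k_ary_randomized_response(value, k, eps0):
    """An eps0-LDP randomizer over {0, ..., k-1}: report the true value with
    probability e^eps0 / (e^eps0 + k - 1), otherwise a uniform other value."""
    p_truth = math.exp(eps0) / (math.exp(eps0) + k - 1)
    if random.random() < p_truth:
        return value
    return random.choice([v for v in range(k) if v != value])

def shuffled_reports(data, k, eps0):
    """Shuffle model: each client randomizes locally, then a trusted shuffler
    permutes the reports so the analyst cannot link a report to a client."""
    reports = [k_ary_randomized_response(x, k, eps0) for x in data]
    random.shuffle(reports)
    return reports

# toy run: n = 1000 clients holding values in {0, 1, 2, 3}
data = [random.randrange(4) for _ in range(1000)]
reports = shuffled_reports(data, k=4, eps0=1.0)
```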
Related papers
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
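For context, here is a minimal Python sketch of the classical mechanism-agnostic amplification-by-subsampling bound, which the paper's mechanism-specific framework tightens; the function below is illustrative and is not taken from the paper.

```python
import math

def subsampled_eps(eps, q):
    """Classical amplification by Poisson subsampling: running an eps-DP
    mechanism on a subsample drawn with rate q satisfies
    log(1 + q * (e^eps - 1))-DP. This is the generic bound, not the
    mechanism-specific guarantee derived in the paper above."""
    return math.log(1 + q * (math.exp(eps) - 1))

print(subsampled_eps(eps=1.0, q=0.01))  # roughly 0.017 for a 1% sampling rate
```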
arXiv Detail & Related papers (2024-03-07T19:36:05Z) - Shifted Interpolation for Differential Privacy [6.1836947007564085]
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning.
This paper establishes the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy.
Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization.
arXiv Detail & Related papers (2024-03-01T04:50:04Z) - A Generalized Shuffle Framework for Privacy Amplification: Strengthening Privacy Guarantees and Enhancing Utility [4.7712438974100255]
We show how to apply shuffling in the $(\epsilon_i,\delta_i)$-PLDP setting with personalized privacy parameters.
We prove that the shuffled $(\epsilon_i,\delta_i)$-PLDP process approximately preserves $\mu$-Gaussian Differential Privacy with $\mu = \sqrt{\frac{2}{\sum_{i=1}^n \frac{1-\delta_i}{1+e^{\epsilon_i}} - \max_i \frac{1-\delta_i}{1+e^{\epsilon_i}}}}$.
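A small numerical sketch of this bound follows (Python, with hypothetical personalized parameters chosen only for illustration).

```python
import math

def gdp_mu(eps, delta):
    """Gaussian DP parameter mu for shuffled (eps_i, delta_i)-PLDP reports,
    following the bound quoted above:
    mu = sqrt(2 / (sum_i (1-delta_i)/(1+e^{eps_i}) - max_i (1-delta_i)/(1+e^{eps_i})))."""
    terms = [(1.0 - d) / (1.0 + math.exp(e)) for e, d in zip(eps, delta)]
    return math.sqrt(2.0 / (sum(terms) - max(terms)))

# hypothetical example: 1000 clients with personalized local budgets
n = 1000
eps = [1.0 + 0.5 * (i / n) for i in range(n)]  # eps_i ranging from 1.0 to about 1.5
delta = [1e-6] * n
print(gdp_mu(eps, delta))  # smaller mu means a stronger central guarantee
```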
arXiv Detail & Related papers (2023-12-22T02:31:46Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - Rényi Differential Privacy of the Subsampled Shuffle Model in
Distributed Learning [7.197592390105457]
We study privacy in a distributed learning framework, where clients collaboratively build a learning model iteratively through interactions with a server from whom we need privacy.
Motivated by optimization and the federated learning (FL) paradigm, we focus on the case where a small fraction of data samples are randomly sub-sampled in each round.
To obtain even stronger local privacy guarantees, we study this in the shuffle privacy model, where each client randomizes its response using a local differentially private (LDP) mechanism.
arXiv Detail & Related papers (2021-07-19T11:43:24Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy
Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
arXiv Detail & Related papers (2020-12-23T17:07:26Z) - Privacy Amplification via Random Check-Ins [38.72327434015975]
Differentially Private Stochastic Gradient Descent (DP-SGD) forms a fundamental building block in many applications for learning over sensitive data.
In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL), wherein the data is distributed among many devices (clients).
Our main contribution is the random check-in distributed protocol, which crucially relies only on randomized participation decisions made locally and independently by each client.
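A highly simplified sketch of the check-in step (illustrative assumptions: each client independently flips a coin to participate and, if participating, picks a uniformly random time slot; the full protocol and its privacy accounting are in the paper).

```python
import random

def random_check_ins(num_clients, num_slots, p_participate):
    """Each client decides locally and independently whether to participate,
    and if so checks in to one uniformly random time slot."""
    slots = {t: [] for t in range(num_slots)}
    for client in range(num_clients):
        if random.random() < p_participate:
            slots[random.randrange(num_slots)].append(client)
    return slots

# the server then runs one iteration per slot, using an update from (at most)
# one of the clients that checked in to that slot
slots = random_check_ins(num_clients=1000, num_slots=200, p_participate=0.5)
```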
arXiv Detail & Related papers (2020-07-13T18:14:09Z) - RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial
Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
arXiv Detail & Related papers (2020-07-04T09:51:02Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)