Have it your way: Individualized Privacy Assignment for DP-SGD
- URL: http://arxiv.org/abs/2303.17046v2
- Date: Fri, 19 Jan 2024 15:33:12 GMT
- Title: Have it your way: Individualized Privacy Assignment for DP-SGD
- Authors: Franziska Boenisch, Christopher Mühl, Adam Dziedzic, Roy Rinberg, Nicolas Papernot
- Abstract summary: We argue that setting a uniform privacy budget across all points may be overly conservative for some users or not sufficiently protective for others.
We capture these preferences through individualized privacy budgets.
We find it empirically improves privacy-utility trade-offs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When training a machine learning model with differential privacy, one sets a
privacy budget. This budget represents a maximal privacy violation that any
user is willing to face by contributing their data to the training set. We
argue that this approach is limited because different users may have different
privacy expectations. Thus, setting a uniform privacy budget across all points
may be overly conservative for some users or, conversely, not sufficiently
protective for others. In this paper, we capture these preferences through
individualized privacy budgets. To demonstrate their practicality, we introduce
a variant of Differentially Private Stochastic Gradient Descent (DP-SGD) which
supports such individualized budgets. DP-SGD is the canonical approach to
training models with differential privacy. We modify its data sampling and
gradient noising mechanisms to arrive at our approach, which we call
Individualized DP-SGD (IDP-SGD). Because IDP-SGD provides privacy guarantees
tailored to the preferences of individual users and their data points, we find
it empirically improves privacy-utility trade-offs.
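To make the abstract's description concrete, here is a minimal NumPy sketch of one IDP-SGD-style training step that individualizes the data sampling mechanism. This is an illustration, not the authors' implementation: the mapping from per-point budgets to sampling probabilities (`base_q * budgets / budgets.mean()`) is a simplifying assumption for this sketch, whereas the paper calibrates sampling and noise through its privacy analysis.

```python
import numpy as np

def idp_sgd_step(params, data, grad_fn, budgets,
                 clip_norm=1.0, sigma=1.0, base_q=0.01, lr=0.1, rng=None):
    """One individualized DP-SGD update (illustrative sketch only).

    `budgets` is an array of per-point privacy budgets aligned with `data`.
    Individualization follows the abstract's "data sampling" route: points
    with larger budgets are sampled more often, while clipping and noising
    stay uniform. The budget-to-probability map below is a toy assumption.
    """
    rng = rng or np.random.default_rng(0)
    # Per-point Poisson sampling probabilities, scaled around base_q in
    # proportion to each point's budget (assumption; capped at 1).
    q = np.minimum(base_q * budgets / budgets.mean(), 1.0)
    mask = rng.random(len(data)) < q
    batch = [x for x, keep in zip(data, mask) if keep]
    if not batch:
        return params  # empty Poisson batch: skip this update

    # Clip every per-example gradient to bound its sensitivity.
    summed = np.zeros_like(params)
    for x in batch:
        g = grad_fn(params, x)
        g_norm = np.linalg.norm(g)
        summed += g * min(1.0, clip_norm / max(g_norm, 1e-12))

    # Add Gaussian noise calibrated to the clipping norm, as in DP-SGD.
    noise = rng.normal(0.0, sigma * clip_norm, size=params.shape)
    return params - lr * (summed + noise) / len(batch)
```

For example, with `budgets = np.array([1.0, 1.0, 10.0])` the third point is sampled roughly ten times as often per step, accepting a weaker privacy guarantee in exchange for more influence on the trained model.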
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- Masked Differential Privacy
We propose an effective approach called masked differential privacy (MDP), which allows controlling the sensitive regions where differential privacy is applied.
Our method operates selectively on the data: non-sensitive spatio-temporal regions can be left without DP application, or differential privacy can be combined with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Personalized Differential Privacy for Ridge Regression
We introduce a novel Personalized-DP Output Perturbation method (PDP-OP) that enables training Ridge regression models with individual per-data-point privacy levels.
We provide rigorous privacy proofs for our PDP-OP as well as accuracy guarantees for the resulting model.
We show that PDP-OP outperforms the personalized privacy techniques of Jorgensen et al.
arXiv Detail & Related papers (2024-01-30T16:00:14Z)
- Personalized DP-SGD using Sampling Mechanisms
We extend Differentially Private Stochastic Gradient Descent (DP-SGD) to support a recent privacy notion called $(\Phi,\Delta)$-Personalized Differential Privacy ($(\Phi,\Delta)$-PDP).
Our algorithm uses a multi-round personalized sampling mechanism and embeds it within the DP-SGD iteration.
Experiments on real datasets show that our algorithm outperforms DP-SGD and simple combinations of DP-SGD with existing PDP mechanisms.
arXiv Detail & Related papers (2023-05-24T13:56:57Z)
- Mean Estimation Under Heterogeneous Privacy: Some Privacy Can Be Free
This work considers the problem of mean estimation under heterogeneous Differential Privacy constraints.
The algorithm we propose is shown to be minimax optimal when there are two groups of users with distinct privacy levels.
arXiv Detail & Related papers (2023-04-27T05:23:06Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Large Scale Transfer Learning for Differentially Private Image Classification
Differential Privacy (DP) provides a formal framework for training machine learning models with individual example level privacy.
Private training using DP-SGD protects against leakage by injecting noise into individual example gradients.
While this guarantee is appealing, the computational cost of training large-scale models with DP-SGD is substantially higher than that of non-private training.
arXiv Detail & Related papers (2022-05-06T01:22:20Z)
- Personalized PATE: Differential Privacy for Machine Learning with Individual Privacy Guarantees
We propose three novel methods to support training an ML model with different personalized privacy guarantees within the training data.
Our experiments show that our personalized privacy methods yield higher accuracy models than the non-personalized baseline.
arXiv Detail & Related papers (2022-02-21T20:16:27Z)
- Private Reinforcement Learning with PAC and Regret Guarantees
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.