Personalized Differential Privacy for Ridge Regression
- URL: http://arxiv.org/abs/2401.17127v1
- Date: Tue, 30 Jan 2024 16:00:14 GMT
- Title: Personalized Differential Privacy for Ridge Regression
- Authors: Krishna Acharya, Franziska Boenisch, Rakshit Naidu, Juba Ziani
- Abstract summary: We introduce our novel Personalized-DP Output Perturbation method (PDP-OP) that enables training Ridge regression models with individual per-data-point privacy levels.
We provide rigorous privacy proofs for our PDP-OP as well as accuracy guarantees for the resulting model.
We show that PDP-OP outperforms the personalized privacy techniques of Jorgensen et al.
- Score: 3.4751583941317166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increased application of machine learning (ML) in sensitive domains
requires protecting the training data through privacy frameworks, such as
differential privacy (DP). DP requires specifying a uniform privacy level
$\varepsilon$ that expresses the maximum privacy loss that each data point in
the entire dataset is willing to tolerate. Yet, in practice, different data
points often have different privacy requirements. Having to set one uniform
privacy level is usually too restrictive, often forcing a learner to guarantee
the most stringent privacy requirement for every data point, at a large cost to
accuracy. To overcome
this limitation, we introduce our novel Personalized-DP Output Perturbation
method (PDP-OP) that enables training Ridge regression models with individual
per-data-point privacy levels. We provide rigorous privacy proofs for our
PDP-OP as well as accuracy guarantees for the resulting model. This work is the
first to provide such theoretical accuracy guarantees for
personalized DP in machine learning, whereas previous work only provided
empirical evaluations. We empirically evaluate PDP-OP on synthetic and real
datasets and with diverse privacy distributions. We show that by enabling each
data point to specify its own privacy requirement, we can significantly
improve the privacy-accuracy trade-offs in DP. We also show that PDP-OP
outperforms the personalized privacy techniques of Jorgensen et al. (2015).
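
For intuition, a minimal sketch of the output-perturbation idea with per-point privacy levels follows. The weighting rule, the sensitivity bound, and the noise calibration below are illustrative assumptions, not the calibration proved in the paper, and `pdp_op_ridge` is a hypothetical name.

```python
import numpy as np

def pdp_op_ridge(X, y, eps, lam=0.1, rng=None):
    """Toy personalized output perturbation for ridge regression.

    Assumes each row of X has L2 norm <= 1 and |y_i| <= 1; `eps` holds one
    privacy level per data point. Weighting and noise scale are illustrative
    placeholders, not the paper's exact mechanism.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape

    # Down-weight points with stricter (smaller) privacy budgets.
    w = eps / eps.max()
    Xw = X * w[:, None]

    # Weighted ridge solution: (X^T W X + n*lam*I)^{-1} X^T W y.
    theta = np.linalg.solve(X.T @ Xw + n * lam * np.eye(d), Xw.T @ y)

    # Output perturbation: noise with uniform direction and Gamma-distributed
    # L2 magnitude, a standard mechanism for an L2-sensitivity bound.
    sensitivity = 2.0 / (n * lam)            # placeholder bound
    magnitude = rng.gamma(shape=d, scale=sensitivity / eps.min())
    direction = rng.standard_normal(d)
    return theta + magnitude * direction / np.linalg.norm(direction)

# Example: three privacy tiers over synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
y = np.clip(X @ rng.standard_normal(5), -1.0, 1.0)
eps = rng.choice([0.1, 0.5, 1.0], size=300)
theta_priv = pdp_op_ridge(X, y, eps)
```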
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
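
The snippet names the framework but not its construction; as a generic, hedged illustration of feature-specific budgets (not BCDP itself), the sketch below adds per-coordinate Laplace noise with a separate epsilon per feature.

```python
import numpy as np

def per_feature_laplace(x, eps_per_feature, rng=None):
    """Generic illustration of feature-specific local privacy budgets.

    Each coordinate of x (assumed bounded in [0, 1], so sensitivity 1) is
    released with Laplace noise calibrated to that feature's own budget.
    This is a plain per-coordinate Laplace mechanism, not BCDP.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = np.asarray(eps_per_feature, dtype=float)
    return x + rng.laplace(scale=1.0 / eps)

# A record with one sensitive feature (eps=0.1) and two lenient ones (eps=2.0).
x = np.array([0.7, 0.2, 0.9])
noisy = per_feature_laplace(x, [0.1, 2.0, 2.0])
```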
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (MaskDP), which allows for controlling the sensitive regions where differential privacy is applied.
Our method operates selectively on the data, allowing non-sensitive spatio-temporal regions to be defined without DP application, or differential privacy to be combined with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
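
As a hedged toy illustration of selective perturbation (not MaskDP's actual mechanism or calibration), the sketch below noises only the pixels flagged by a sensitivity mask.

```python
import numpy as np

def masked_gaussian_noise(frame, mask, sigma=0.5, rng=None):
    """Toy selective perturbation: noise only where the mask marks pixels.

    `frame` is an array in [0, 1]; `mask` is a boolean array of the same
    shape. The noise scale is an arbitrary illustration, not a DP calibration.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = frame.copy()
    noisy[mask] += rng.normal(scale=sigma, size=int(mask.sum()))
    return np.clip(noisy, 0.0, 1.0)

# Perturb only a 16x16 sensitive region of a 64x64 frame.
frame = np.random.default_rng(1).random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[10:26, 30:46] = True
protected = masked_gaussian_noise(frame, mask)
```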
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
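
A hedged sketch of what a user-level privacy unit changes mechanically, assuming a generic DP-SGD-style pipeline rather than the paper's fine-tuning recipe: contributions are bounded per user, not per example.

```python
import numpy as np

def user_level_clipped_sum(per_example_grads, user_ids, clip=1.0):
    """Bound each user's total contribution, the core of user-level DP-SGD.

    per_example_grads: (n_examples, d) array; user_ids: length-n labels.
    Each user's summed gradient is clipped to L2 norm <= clip before adding,
    so adding or removing an entire user changes the sum by at most `clip`.
    Noise addition and privacy accounting are omitted in this sketch.
    """
    total = np.zeros(per_example_grads.shape[1])
    for uid in np.unique(user_ids):
        g = per_example_grads[user_ids == uid].sum(axis=0)
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))
        total += g
    return total

grads = np.random.default_rng(2).standard_normal((8, 3))
users = np.array([0, 0, 0, 1, 1, 2, 2, 2])
bounded = user_level_clipped_sum(grads, users)
```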
- Mean Estimation Under Heterogeneous Privacy: Some Privacy Can Be Free [13.198689566654103]
This work considers the problem of mean estimation under heterogeneous Differential Privacy constraints.
The algorithm we propose is shown to be minimax optimal when there are two groups of users with distinct privacy levels.
arXiv Detail & Related papers (2023-04-27T05:23:06Z)
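
The minimax-optimal estimator itself is in the paper; the sketch below is a simple hedged baseline for the same heterogeneous setting, with each user adding Laplace noise at their own epsilon and the server combining reports by inverse variance.

```python
import numpy as np

def heterogeneous_private_mean(x, eps, rng=None):
    """Baseline mean estimator under per-user privacy levels (illustrative).

    Each x_i in [0, 1] is reported with Laplace(1/eps_i) noise; reports are
    combined with inverse-variance weights (Laplace(1/eps) has variance
    2/eps^2). A simple baseline, not the paper's minimax-optimal scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = np.asarray(eps, dtype=float)
    reports = x + rng.laplace(scale=1.0 / eps)
    weights = eps ** 2 / 2.0            # 1 / Var of each report's noise
    return np.average(reports, weights=weights)

# Two groups: 900 strict users (eps=0.1) and 100 lenient users (eps=2.0).
rng = np.random.default_rng(3)
x = rng.random(1000)
eps = np.concatenate([np.full(900, 0.1), np.full(100, 2.0)])
est = heterogeneous_private_mean(x, eps, rng)
```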
- Have it your way: Individualized Privacy Assignment for DP-SGD [33.758209383275926]
We argue that setting a uniform privacy budget across all points may be overly conservative for some users or not sufficiently protective for others.
We capture these preferences through individualized privacy budgets.
We find that this empirically improves privacy-utility trade-offs.
arXiv Detail & Related papers (2023-03-29T22:18:47Z)
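
One natural way to realize individualized budgets in DP-SGD is to scale per-example clipping norms with each point's budget; the sketch below shows only that step, as an assumption-laden toy, omitting noise addition and the paper's actual mechanisms and accounting.

```python
import numpy as np

def budget_scaled_clipping(per_example_grads, eps, base_clip=1.0):
    """Toy individualized clipping: stricter budgets get smaller clip norms.

    Scaling clip norms proportionally to each point's budget is one plausible
    way to individualize DP-SGD; the paper's exact rule may differ. Noise
    addition is omitted here.
    """
    eps = np.asarray(eps, dtype=float)
    clips = base_clip * eps / eps.max()      # per-example clip norms
    norms = np.linalg.norm(per_example_grads, axis=1)
    factors = np.minimum(1.0, clips / (norms + 1e-12))
    return (per_example_grads * factors[:, None]).sum(axis=0)

grads = np.random.default_rng(4).standard_normal((5, 3))
eps = np.array([0.1, 0.1, 0.5, 1.0, 1.0])
clipped_sum = budget_scaled_clipping(grads, eps)
```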
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Production of Categorical Data Verifying Differential Privacy: Conception and Applications to Machine Learning [0.0]
Differential privacy is a formal definition of privacy that allows quantifying the privacy-utility trade-off.
With the local DP (LDP) model, users can sanitize their data locally before transmitting it to the server.
In all cases, we concluded that differentially private ML models achieve nearly the same utility metrics as non-private ones.
arXiv Detail & Related papers (2022-04-02T12:50:14Z)
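
A standard LDP primitive for categorical data, k-ary randomized response, makes the local sanitization step concrete; this is textbook machinery rather than anything specific to the paper.

```python
import numpy as np

def k_randomized_response(value, k, eps, rng=None):
    """k-ary randomized response: a standard eps-LDP mechanism.

    Reports the true category (in 0..k-1) with probability
    e^eps / (e^eps + k - 1), and a uniformly random other category otherwise.
    """
    rng = np.random.default_rng() if rng is None else rng
    p_true = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_true:
        return value
    others = [v for v in range(k) if v != value]
    return int(rng.choice(others))

# Each user sanitizes locally before transmitting to the server.
reports = [k_randomized_response(v, k=4, eps=1.0) for v in [0, 1, 2, 3, 0]]
```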
- Personalized PATE: Differential Privacy for Machine Learning with Individual Privacy Guarantees [1.2691047660244335]
We propose three novel methods to support training an ML model with different personalized privacy guarantees within the training data.
Our experiments show that our personalized privacy methods yield higher-accuracy models than the non-personalized baseline.
arXiv Detail & Related papers (2022-02-21T20:16:27Z)
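
For context, PATE's core aggregation is a noisy majority vote over teacher predictions; the sketch below shows that vanilla primitive only, with the noise scale as an illustrative assumption, and does not reproduce the paper's personalization strategies.

```python
import numpy as np

def noisy_teacher_vote(teacher_preds, num_classes, eps, rng=None):
    """Vanilla PATE-style aggregation: Laplace-noised argmax of vote counts.

    teacher_preds holds one predicted label per teacher. The noise scale
    1/eps is illustrative; the personalized variants in the paper modify how
    training data is assigned to teachers, which is not shown here.
    """
    rng = np.random.default_rng() if rng is None else rng
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    votes += rng.laplace(scale=1.0 / eps, size=num_classes)
    return int(np.argmax(votes))

preds = np.array([2, 2, 1, 2, 0, 2, 1, 2, 2, 1])
label = noisy_teacher_vote(preds, num_classes=3, eps=1.0)
```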
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)