Privately Publishable Per-instance Privacy
- URL: http://arxiv.org/abs/2111.02281v1
- Date: Wed, 3 Nov 2021 15:17:29 GMT
- Title: Privately Publishable Per-instance Privacy
- Authors: Rachel Redberg, Yu-Xiang Wang
- Abstract summary: We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP).
We analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
- Score: 21.775752827149383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider how to privately share the personalized privacy losses incurred
by objective perturbation, using per-instance differential privacy (pDP).
Standard differential privacy (DP) gives us a worst-case bound that might be
orders of magnitude larger than the privacy loss to a particular individual
relative to a fixed dataset. The pDP framework provides a more fine-grained
analysis of the privacy guarantee to a target individual, but the per-instance
privacy loss itself might be a function of sensitive data. In this paper, we
analyze the per-instance privacy loss of releasing a private empirical risk
minimizer learned via objective perturbation, and propose a group of methods to
privately and accurately publish the pDP losses at little to no additional
privacy cost.
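For reference, a minimal sketch of the per-instance DP guarantee being discussed, following the standard pDP formulation; the notation below is our own and not quoted from the paper:

```latex
% A mechanism M satisfies (epsilon, delta)-pDP for a fixed dataset Z and a
% target individual z if, for all measurable output sets S,
\Pr[\mathcal{M}(Z) \in S] \le e^{\varepsilon}\,\Pr[\mathcal{M}(Z \setminus \{z\}) \in S] + \delta
\quad\text{and}\quad
\Pr[\mathcal{M}(Z \setminus \{z\}) \in S] \le e^{\varepsilon}\,\Pr[\mathcal{M}(Z) \in S] + \delta .
```

Standard DP requires such a bound to hold uniformly over all datasets and individuals; pDP fixes both Z and z, which is why the per-instance loss can be far below the worst-case bound but is itself a function of the sensitive data.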
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
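For context, a minimal statement of the local DP guarantee referenced in the summary above (standard formulation; the notation is an assumption):

```latex
% A local randomizer R is epsilon-LDP if, for all inputs x, x' and all outputs y,
\Pr[R(x) = y] \le e^{\varepsilon}\,\Pr[R(x') = y] .
```

As summarized, BCDP refines this by allowing the guarantee to be quantified per feature (coordinate) rather than only for the whole record at once.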
- Confounding Privacy and Inverse Composition [32.85314813605347]
In differential privacy, the sensitive information is contained in the dataset, while in Pufferfish privacy it determines the data distribution.
We introduce a novel privacy notion of $(\epsilon, \delta)$-confounding privacy that generalizes both differential privacy and Pufferfish privacy.
arXiv Detail & Related papers (2024-08-21T21:45:13Z)
- Privately Answering Queries on Skewed Data via Per Record Differential Privacy [8.376475518184883]
We propose a privacy formalism, per-record zero-concentrated differential privacy (PzCDP).
Unlike other formalisms that provide different privacy losses to different records, PzCDP's privacy loss depends explicitly on the confidential data.
arXiv Detail & Related papers (2023-10-19T15:24:49Z)
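For reference, a minimal sketch of the zero-concentrated DP condition that the per-record variant above builds on (standard zCDP; the notation is an assumption):

```latex
% rho-zCDP: for all neighboring datasets Z, Z' and every order alpha > 1,
D_{\alpha}\!\left(\mathcal{M}(Z) \,\|\, \mathcal{M}(Z')\right) \le \rho\,\alpha ,
% where D_alpha denotes the Renyi divergence of order alpha.
```

In the per-record formulation described above, the budget rho is not a single worst-case constant but depends explicitly on the confidential record in question.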
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows that the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
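A minimal sketch of the estimate-verify-release control flow as described in the summary; the function names and the trivial estimator/verifier below are hypothetical placeholders, not the paper's API:

```python
import numpy as np

def estimate_verify_release(mechanism, data, estimate_eps, verify, rng):
    """Hypothetical EVR loop: estimate a privacy parameter, verify that the
    mechanism meets it, and only then release the query output."""
    eps_hat = estimate_eps(mechanism)          # 1) estimate the privacy parameter
    if not verify(mechanism, eps_hat, rng):    # 2) verify the estimated guarantee
        return None                            # refuse to release if verification fails
    return mechanism(data)                     # 3) release the query output

# Toy usage: a Laplace mechanism for a counting query with sensitivity 1.
rng = np.random.default_rng(0)
target_eps = 1.0
mech = lambda d: float(np.sum(d)) + rng.laplace(scale=1.0 / target_eps)
out = estimate_verify_release(
    mechanism=mech,
    data=np.ones(100),
    estimate_eps=lambda m: target_eps,        # trivial estimator for the toy example
    verify=lambda m, e, r: e <= target_eps,   # trivial verifier for the toy example
    rng=rng,
)
print(out)
```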
- Echo of Neighbors: Privacy Amplification for Personalized Private Federated Learning with Shuffle Model [21.077469463027306]
Federated Learning, as a popular paradigm for collaborative training, is vulnerable to privacy attacks.
This work strengthens model privacy under personalized local privacy by leveraging the privacy amplification effect of the shuffle model.
To the best of our knowledge, the impact of shuffling on personalized local privacy is considered for the first time.
arXiv Detail & Related papers (2023-04-11T21:48:42Z)
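A minimal sketch of the shuffle-model setup referenced above, assuming the textbook formulation (each user applies a local randomizer, possibly with a personalized budget, and a shuffler permutes the reports before the server sees them); this is illustrative, not the paper's protocol:

```python
import numpy as np

def randomized_response(bit, eps, rng):
    """epsilon-LDP randomized response for a single private bit."""
    p_truth = np.exp(eps) / (np.exp(eps) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def shuffle_and_report(bits, eps_per_user, rng):
    """Users randomize locally with personalized budgets; the shuffler then
    discards user identities by applying a uniformly random permutation."""
    reports = [randomized_response(b, e, rng) for b, e in zip(bits, eps_per_user)]
    rng.shuffle(reports)  # privacy amplification comes from this anonymizing shuffle
    return reports

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=1000)              # each user's private bit
eps_per_user = rng.uniform(0.5, 2.0, size=1000)   # personalized local privacy budgets
print(sum(shuffle_and_report(bits, eps_per_user, rng)))
```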
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
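For reference, a minimal sketch of the $f$-DP definition via trade-off functions (standard formulation; the notation is an assumption):

```latex
% Trade-off function between distributions P and Q: the smallest type-II error
% achievable by any test phi of H_0: P vs. H_1: Q with type-I error at most alpha.
T(P, Q)(\alpha) = \inf_{\phi}\left\{\, 1 - \mathbb{E}_{Q}[\phi] \;:\; \mathbb{E}_{P}[\phi] \le \alpha \,\right\}.
% A mechanism M is f-DP if, for all neighboring datasets Z and Z',
T\!\left(\mathcal{M}(Z), \mathcal{M}(Z')\right) \ge f .
```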
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
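The "per-subject norm" above presumably refers to per-example gradient quantities in DP training; as background, a minimal sketch of computing per-example gradient norms (the quantity clipped in DP-SGD) for logistic regression is shown below — an illustrative assumption, not the paper's method:

```python
import numpy as np

def per_example_grad_norms(X, y, w):
    """Per-example gradient L2 norms for the logistic loss
    l_i(w) = log(1 + exp(-y_i * x_i^T w)), with labels y_i in {-1, +1}."""
    margins = y * (X @ w)
    coeffs = -y / (1.0 + np.exp(margins))   # d l_i / d(x_i^T w)
    grads = coeffs[:, None] * X             # one gradient row per example
    return np.linalg.norm(grads, axis=1)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = rng.choice([-1.0, 1.0], size=200)
w = np.zeros(5)
norms = per_example_grad_norms(X, y, w)
print(norms.max(), norms.mean())  # examples with larger norms are the ones a per-subject analysis would flag
```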
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
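A minimal sketch of the kind of per-attribute guarantee described above, assuming the usual attribute-level neighboring relation (the notation is ours, not the paper's):

```latex
% Records have d attributes. Z and Z' are attribute-level neighbors, Z \sim_j Z',
% if they differ only in the j-th attribute of a single record. A mechanism M has
% per-attribute parameter epsilon_j if, for all Z \sim_j Z' and all output sets S,
\Pr[\mathcal{M}(Z) \in S] \le e^{\varepsilon_j}\,\Pr[\mathcal{M}(Z') \in S] ,
% with the goal that epsilon_j be smaller than the parameter for changing the whole record.
```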
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
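For reference, a minimal statement of joint differential privacy as it is commonly defined (the notation is an assumption, not quoted from the paper):

```latex
% epsilon-JDP: for every user i, for all datasets Z, Z' differing only in user i's data,
% and for every set S of joint outputs delivered to the other users,
\Pr[\mathcal{M}(Z)_{-i} \in S] \le e^{\varepsilon}\,\Pr[\mathcal{M}(Z')_{-i} \in S] ,
% where M(Z)_{-i} denotes the outputs given to all users other than i.
```

Intuitively, the output each user receives may depend freely on their own data, but the joint output shown to everyone else must be differentially private in that user's data.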
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.