On Differentially Private Online Predictions
- URL: http://arxiv.org/abs/2302.14099v1
- Date: Mon, 27 Feb 2023 19:18:01 GMT
- Title: On Differentially Private Online Predictions
- Authors: Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer
- Abstract summary: We introduce an interactive variant of joint differential privacy towards handling online processes.
We demonstrate that it satisfies (suitable variants of) group privacy, composition, and post-processing.
We then study the cost of interactive joint privacy in the basic setting of online classification.
- Score: 74.01773626153098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we introduce an interactive variant of joint differential
privacy towards handling online processes in which existing privacy definitions
seem too restrictive. We study basic properties of this definition and
demonstrate that it satisfies (suitable variants of) group privacy,
composition, and post-processing. We then study the cost of interactive joint
privacy in the basic setting of online classification. We show that any
(possibly non-private) learning rule can be effectively transformed to a
private learning rule with only a polynomial overhead in the mistake bound.
This demonstrates a stark contrast with more restrictive notions of privacy,
such as the one studied by Golowich and Livni (2021), where only a doubly
exponential overhead on the mistake bound is known (via an
information-theoretic upper bound).
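For context, the mistake-bound model of online classification referenced above can be illustrated with the classic Halving algorithm over a finite hypothesis class. This is textbook background only, not the paper's construction (which privately wraps an arbitrary, possibly non-private learning rule); the hypothesis class and data stream below are illustrative assumptions:

```python
# Illustrative sketch of the mistake-bound online classification setting:
# the learner predicts, the true label is revealed, and quality is the
# total number of mistakes. The finite hypothesis class and the Halving
# learner are standard textbook choices, NOT the paper's construction.

def halving_learner(hypotheses, stream):
    """Halving algorithm: at most log2(|H|) mistakes on a stream that is
    perfectly labeled by some hypothesis in `hypotheses`."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = sum(h(x) for h in version_space)       # majority vote
        prediction = 1 if 2 * votes >= len(version_space) else 0
        if prediction != y:
            mistakes += 1
        # Keep only hypotheses consistent with the revealed label.
        version_space = [h for h in version_space if h(x) == y]
    return mistakes

# Example: threshold classifiers over 0..9; the stream is labeled by t = 4.
hypotheses = [lambda x, t=t: int(x >= t) for t in range(10)]
stream = [(x, int(x >= 4)) for x in [7, 1, 3, 9, 4, 2]]
print(halving_learner(hypotheses, stream))  # <= log2(10) ~ 3.3 mistakes
```

The paper's result states that any such learner can be made interactively jointly private while inflating its mistake bound only polynomially, rather than doubly exponentially.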
Related papers
- Differential Privacy Overview and Fundamental Techniques [63.0409690498569]
This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice"
It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed.
It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis.
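For reference, the central definition such a chapter builds on is standard (ε, δ)-differential privacy; the notation below is the conventional one (Dwork et al.) and is not quoted from the chapter itself:

```latex
% Standard (epsilon, delta)-differential privacy: for every pair of
% neighboring datasets x, x' and every measurable set of outputs S,
\[
  \Pr[M(x) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(x') \in S] + \delta .
\]
```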
arXiv Detail & Related papers (2024-11-07T13:52:11Z) - Masked Differential Privacy [64.32494202656801]
We propose an approach called masked differential privacy, which allows for controlling the sensitive regions to which differential privacy (DP) is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
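A minimal sketch of the idea of region-selective noising, assuming a Laplace mechanism and a precomputed boolean mask; the mask source, sensitivity, and mechanism choice are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

# Region-selective noising in the spirit of masked DP: Laplace noise is
# applied only where `mask` marks the data as sensitive, leaving
# non-sensitive regions untouched. All parameters are illustrative.

def masked_laplace(data, mask, sensitivity, epsilon, rng):
    noise = rng.laplace(scale=sensitivity / epsilon, size=data.shape)
    return np.where(mask, data + noise, data)  # noise only inside the mask

rng = np.random.default_rng(0)
frame = rng.random((4, 4))
sensitive = np.zeros((4, 4), dtype=bool)
sensitive[1:3, 1:3] = True  # e.g., a detected face region in a video frame
print(masked_laplace(frame, sensitive, sensitivity=1.0, epsilon=1.0, rng=rng))
```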
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Formalization of Differential Privacy in Isabelle/HOL [0.16574413179773761]
We propose an Isabelle/HOL library for formalizing differential privacy in a general setting.
To our knowledge, it is the first formalization of differential privacy that supports continuous probability distributions.
arXiv Detail & Related papers (2024-10-20T13:06:13Z) - Models Matter: Setting Accurate Privacy Expectations for Local and Central Differential Privacy [14.40391109414476]
We design and evaluate new explanations of differential privacy for the local and central models.
We find that consequences-focused explanations in the style of privacy nutrition labels are a promising approach for setting accurate privacy expectations.
arXiv Detail & Related papers (2024-08-16T01:21:57Z) - Optimal Private Discrete Distribution Estimation with One-bit Communication [63.413106413939836]
We consider a private discrete distribution estimation problem with one-bit communication constraint.
We characterize the first-order asymptotics of the worst-case trade-off under the one-bit communication constraint.
These results demonstrate the optimal dependence of the privacy-utility trade-off under the one-bit communication constraint.
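To make the setting concrete, here is a generic one-bit locally private frequency estimator: each user answers a binary randomized-response query about a single symbol chosen by public randomness, so the data-dependent communication is exactly one bit. This is a standard baseline scheme for illustration, not the paper's optimal construction:

```python
import numpy as np

# One-bit LDP frequency estimation via binary randomized response.
# `queries` plays the role of public randomness (e.g., derived from user
# IDs), so only the single bit `bits[i]` depends on user i's value.

def estimate_distribution(values, k, epsilon, rng):
    p = np.exp(epsilon) / (1.0 + np.exp(epsilon))   # RR truth probability
    queries = rng.integers(0, k, size=len(values))  # public randomness
    truth = (values == queries).astype(float)       # "is your value j?"
    keep = rng.random(len(values)) < p
    bits = np.where(keep, truth, 1.0 - truth)       # the transmitted bit
    estimate = np.empty(k)
    for j in range(k):
        mean_bit = bits[queries == j].mean()
        estimate[j] = (mean_bit - (1.0 - p)) / (2.0 * p - 1.0)  # debias RR
    return estimate

rng = np.random.default_rng(0)
values = rng.choice(4, size=200_000, p=[0.5, 0.3, 0.15, 0.05])
print(np.round(estimate_distribution(values, k=4, epsilon=1.0, rng=rng), 3))
```

Since E[bit | query j] = (2p - 1) q_j + (1 - p) for true frequency q_j, the final division yields an unbiased estimate of each q_j.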
arXiv Detail & Related papers (2023-10-17T05:21:19Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject gradient norm in differentially private neural network training and individual privacy loss.
We introduce a novel metric, the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion a subject's privacy loss to their input attributes.
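The per-subject gradient norm the abstract refers to is the quantity that DP-SGD clips; a minimal sketch of computing it for a logistic-regression model (the model, loss, and data here are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

# Per-subject gradient norms: one cross-entropy gradient per subject,
# whose norm DP-SGD clips and which the abstract links to individual
# privacy loss. Everything below is an illustrative toy setup.

def per_subject_grad_norms(X, y, w):
    p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
    grads = (p - y)[:, None] * X          # per-sample gradients w.r.t. w
    return np.linalg.norm(grads, axis=1)  # one norm per subject

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))               # 8 subjects, 3 input attributes
y = rng.integers(0, 2, size=8).astype(float)
print(per_subject_grad_norms(X, y, np.zeros(3)))  # larger norm -> more influence
```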
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - Differentially Private Supervised Manifold Learning with Applications like Private Image Retrieval [14.93584434176082]
We present a novel differentially private method, PrivateMail, for supervised manifold learning.
We show extensive privacy-utility tradeoff results, as well as the computational efficiency and practicality of our methods.
arXiv Detail & Related papers (2021-02-22T06:58:46Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
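The JDP guarantee referenced here has a standard form (Kearns et al.); the notation below is the conventional one and is not quoted from this paper:

```latex
% Standard joint differential privacy: the joint output sent to everyone
% *except* user i must be differentially private in user i's own data.
\[
  \Pr\big[ M(x_i, x_{-i})_{-i} \in S \big]
  \;\le\; e^{\varepsilon} \,
  \Pr\big[ M(x_i', x_{-i})_{-i} \in S \big]
  \qquad \text{for all } i,\ x_i,\ x_i',\ x_{-i},\ S .
\]
```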
arXiv Detail & Related papers (2020-09-18T20:18:35Z) - Differentially private cross-silo federated learning [16.38610531397378]
Strict privacy is of paramount importance in distributed machine learning.
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
We demonstrate that our proposed solutions give prediction accuracy comparable to that of the non-distributed setting.
arXiv Detail & Related papers (2020-07-10T18:15:10Z)
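A minimal sketch of how DP noise can be combined with secure summation in the cross-silo setting: each of k silos adds Gaussian noise of variance σ²/k, so the revealed sum carries total variance σ² while no individual clear-text update is exposed. The plain `sum` stands in for the additively homomorphic protocol, and all parameters are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

# Distributed DP noise under secure summation: each silo contributes a
# noise share of variance sigma^2 / k, so the (homomorphically) summed
# aggregate carries the full variance sigma^2.

def noisy_update(update, sigma, num_silos, rng):
    return update + rng.normal(scale=sigma / np.sqrt(num_silos),
                               size=update.shape)

rng = np.random.default_rng(0)
k, dim, sigma = 5, 3, 1.0
updates = [rng.normal(size=dim) for _ in range(k)]
shares = [noisy_update(u, sigma, k, rng) for u in updates]  # sent encrypted
aggregate = sum(shares)  # stands in for additively homomorphic summation
print(aggregate)         # DP aggregate with total noise variance sigma^2
```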