Learning Numeric Optimal Differentially Private Truncated Additive
Mechanisms
- URL: http://arxiv.org/abs/2107.12957v1
- Date: Tue, 27 Jul 2021 17:22:57 GMT
- Title: Learning Numeric Optimal Differentially Private Truncated Additive
Mechanisms
- Authors: David M. Sommer, Lukas Abfalterer, Sheila Zingg and Esfandiar
Mohammadi
- Abstract summary: We introduce a tool to learn truncated noise for additive mechanisms with strong utility bounds.
For sensitivity-bounded mechanisms, we show that it is sufficient to consider symmetric noise that falls monotonically away from the mean.
- Score: 5.079561894598125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentially private (DP) mechanisms face the challenge of providing
accurate results while protecting their inputs: the privacy-utility trade-off.
A simple but powerful technique for DP adds noise to sensitivity-bounded query
outputs to blur the exact query output: additive mechanisms. While a vast body
of work considers infinitely wide noise distributions, some applications (e.g.,
real-time operating systems) require hard bounds on the deviations from the
real query, and only limited work on such mechanisms exists. An additive
mechanism with truncated noise (i.e., with bounded range) can offer such hard
bounds. We introduce a gradient-descent-based tool to learn truncated noise for
additive mechanisms with strong utility bounds while simultaneously optimizing
for differential privacy under sequential composition, i.e., scenarios where
multiple noisy queries on the same data are revealed. Our method can learn
discrete noise patterns and not only hyper-parameters of a predefined
probability distribution. For sensitivity-bounded mechanisms, we show that it
is sufficient to consider symmetric noise and that, for noise falling
monotonically away from the mean, ensuring privacy for a pair of representative
query outputs guarantees privacy for all pairs of inputs (that differ in one
element). We find that the utility-privacy trade-off curves of our generated
noise are remarkably close to truncated Gaussians and even replicate their
shape for $l_2$ utility-loss. For a low number of compositions, we also
improve DP-SGD (sub-sampling). Moreover, we extend the Moments Accountant to
truncated distributions, allowing us to incorporate mechanism output events with
varying input-dependent zero occurrence probability.
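To make the learned-noise idea concrete, below is a minimal sketch (not the authors' released tool) that learns a discrete, symmetric noise distribution with a hard truncation bound by gradient descent, trading off expected $l_2$ utility loss against a hockey-stick ($\delta$-style) privacy penalty evaluated on one representative pair of neighboring query outputs. The support size, target $\epsilon$, penalty weight, and the finite-difference optimizer are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: learn a discrete, symmetric, hard-truncated noise
# distribution by gradient descent. All constants and the optimizer are
# assumptions, not the paper's implementation.

SUPPORT = np.arange(-10, 11)   # hard-bounded noise range {-10, ..., +10}
SENS = 1                       # neighboring query outputs differ by 1
EPS = 1.0                      # target epsilon
LAMBDA = 100.0                 # weight of the privacy penalty in the loss

def pmf(logits):
    """Symmetric probability mass function over SUPPORT."""
    sym = 0.5 * (logits + logits[::-1])        # enforce p(k) = p(-k)
    e = np.exp(sym - sym.max())
    return e / e.sum()

def hockey_stick_delta(p, eps, shift):
    """delta(eps) = sum_k max(P[out=k | x] - e^eps * P[out=k | x'], 0) for the
    representative query outputs 0 and `shift`; the shifted distribution has
    zero mass at one boundary of the truncated range."""
    q = np.zeros_like(p)
    q[shift:] = p[:len(p) - shift]             # noise centered at the neighbor
    return np.maximum(p - np.exp(eps) * q, 0.0).sum()

def loss(logits):
    p = pmf(logits)
    utility = np.sum(p * SUPPORT.astype(float) ** 2)   # expected l2 loss
    return utility + LAMBDA * hockey_stick_delta(p, EPS, SENS)

def numerical_grad(f, x, h=1e-5):
    """Central finite differences; stands in for autodiff in this sketch."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = h
        g[i] = (f(x + d) - f(x - d)) / (2.0 * h)
    return g

logits = np.zeros(len(SUPPORT))
for _ in range(2000):
    logits -= 0.2 * numerical_grad(loss, logits)

p = pmf(logits)
print("learned noise pmf:", np.round(p, 4))
print("expected l2 loss :", np.sum(p * SUPPORT ** 2))
print("delta at epsilon :", hockey_stick_delta(p, EPS, SENS))
```

In the sketch, the shifted distribution has zero mass at one end of the truncated range; this is the input-dependent zero-occurrence-probability case that the extended Moments Accountant described in the abstract is meant to handle.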
Related papers
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
- On the Privacy of Selection Mechanisms with Gaussian Noise [44.577599546904736]
We revisit the analysis of Report Noisy Max and Above Threshold with Gaussian noise.
We find that it is possible to provide pure ex-ante DP bounds for Report Noisy Max and pure ex-post DP bounds for Above Threshold.
arXiv Detail & Related papers (2024-02-09T02:11:25Z)
- Bounded and Unbiased Composite Differential Privacy [25.427802467876248]
The objective of differential privacy (DP) is to protect privacy by producing an output distribution that is indistinguishable between two neighboring databases.
Existing solutions attempt to produce bounded outputs by employing post-processing or truncation techniques.
We propose a novel differentially private mechanism which uses a composite probability density function to generate bounded and unbiased outputs.
arXiv Detail & Related papers (2023-11-04T04:43:47Z)
- Robust Control for Dynamical Systems With Non-Gaussian Noise via Formal Abstractions [59.605246463200736]
We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states.
We use state-of-the-art verification techniques to provide guarantees on the interval Markov decision process and compute a controller for which these guarantees carry over to the original control system.
arXiv Detail & Related papers (2023-01-04T10:40:30Z)
- Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints [53.01656650117495]
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs.
The Brownian mechanism works by first adding Gaussian noise of high variance, corresponding to the final point of a simulated Brownian motion.
We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm.
arXiv Detail & Related papers (2022-06-15T01:43:37Z)
- Optimum Noise Mechanism for Differentially Private Queries in Discrete Finite Sets [3.5379819043314176]
We introduce a novel framework for designing an optimal noise probability mass function tailored to discrete and finite query sets.
Our framework seeks to optimize the noise distribution under an arbitrary $(\epsilon, \delta)$ constraint, thereby enhancing the accuracy and utility of the response.
Numerical experiments highlight the superior performance of our proposed optimal mechanisms compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-11-23T05:24:34Z)
- Sampling-Based Robust Control of Autonomous Systems with Non-Gaussian Noise [59.47042225257565]
We present a novel planning method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous system into a discrete-state model that captures noise by probabilistic transitions between states.
We capture these bounds in the transition probability intervals of a so-called interval Markov decision process (iMDP).
arXiv Detail & Related papers (2021-10-25T06:18:55Z)
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- The Discrete Gaussian for Differential Privacy [23.977143445822897]
A key tool for building differentially private systems is adding Gaussian noise to the output of a function evaluated on a sensitive dataset.
Previous work has demonstrated that seemingly innocuous numerical errors can entirely destroy privacy.
We introduce and analyze the discrete Gaussian in the context of differential privacy.
arXiv Detail & Related papers (2020-03-31T18:00:00Z)