Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints
- URL: http://arxiv.org/abs/2206.07234v4
- Date: Fri, 10 Nov 2023 22:23:49 GMT
- Title: Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints
- Authors: Justin Whitehouse, Zhiwei Steven Wu, Aaditya Ramdas, Ryan Rogers
- Abstract summary: There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs.
The Brownian mechanism works by first adding Gaussian noise of high variance corresponding to the final point of a simulated Brownian motion.
We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm.
- Score: 53.01656650117495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a disconnect between how researchers and practitioners handle
privacy-utility tradeoffs. Researchers primarily operate from a privacy-first
perspective, setting strict privacy requirements and minimizing risk subject to
these constraints. Practitioners often desire an accuracy-first perspective,
possibly satisfied with the greatest privacy they can get subject to obtaining
sufficiently small error. Ligett et al. have introduced a "noise reduction"
algorithm to address the latter perspective. The authors show that by adding
correlated Laplace noise and progressively reducing it on demand, it is
possible to produce a sequence of increasingly accurate estimates of a private
parameter while only paying a privacy cost for the least noisy iterate
released. In this work, we generalize noise reduction to the setting of
Gaussian noise, introducing the Brownian mechanism. The Brownian mechanism
works by first adding Gaussian noise of high variance corresponding to the
final point of a simulated Brownian motion. Then, at the practitioner's
discretion, noise is gradually decreased by tracing back along the Brownian
path to an earlier time. Our mechanism is more naturally applicable to the
common setting of bounded $\ell_2$-sensitivity, empirically outperforms
existing work on common statistical tasks, and provides customizable control of
privacy loss over the entire interaction with the practitioner. We complement
our Brownian mechanism with ReducedAboveThreshold, a generalization of the
classical AboveThreshold algorithm that provides adaptive privacy guarantees.
Overall, our results demonstrate that one can meet utility constraints while
still maintaining strong levels of privacy.
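To make the traceback step concrete, here is a minimal sketch of Gaussian noise reduction along a Brownian path, following the description in the abstract. The function name, the decreasing time schedule, and the example statistic are illustrative assumptions rather than the authors' reference code; only the Brownian-bridge conditioning reflects the mechanism described above.

```python
import numpy as np

def brownian_noise_reduction(true_value, times, rng):
    """Yield (t, estimate) pairs whose noise variance is t per coordinate, decreasing in t."""
    times = sorted(times, reverse=True)
    shape = np.shape(true_value)
    t_prev = times[0]
    # Start at the largest time: marginally, B(t_prev) ~ N(0, t_prev) per coordinate.
    b = rng.normal(0.0, np.sqrt(t_prev), size=shape)
    yield t_prev, true_value + b
    for t in times[1:]:
        # Trace back along the Brownian path: given B(t_prev) = b, the earlier point B(t)
        # is a Brownian-bridge sample, N((t / t_prev) * b, t * (t_prev - t) / t_prev).
        b = rng.normal((t / t_prev) * b, np.sqrt(t * (t_prev - t) / t_prev))
        t_prev = t
        yield t, true_value + b

rng = np.random.default_rng(0)
statistic = np.array([0.7, -0.2])   # hypothetical private statistic
for t, est in brownian_noise_reduction(statistic, [1.0, 0.5, 0.1, 0.01], rng):
    print(f"t = {t:5.2f}   estimate = {est}")
```

Each earlier time is sampled conditionally on the later, already-revealed point, so every refinement is consistent with what the practitioner has already seen and strictly less noisy than it.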
Related papers
- Certification for Differentially Private Prediction in Gradient-Based Training [36.686002369773014] (arXiv: 2024-06-19)
We use convex relaxation and bound propagation to compute a provable upper-bound for the local and smooth sensitivity of a prediction.
This bound allows us to reduce the magnitude of noise added or improve privacy accounting in the private prediction setting.
- On the Privacy of Selection Mechanisms with Gaussian Noise [44.577599546904736] (arXiv: 2024-02-09)
We revisit the analysis of Report Noisy Max and Above Threshold with Gaussian noise.
We find that it is possible to provide pure ex-ante DP bounds for Report Noisy Max and pure ex-post DP bounds for Above Threshold.
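For reference, below are minimal sketches of the two selection mechanisms discussed in that entry, instantiated with Gaussian noise. The noise scales are left as free parameters; calibrating them to concrete ex-ante or ex-post DP guarantees is precisely what the cited analysis provides and is not reproduced here.

```python
import numpy as np

def report_noisy_max(scores, sigma, rng):
    """Return the index of the largest score after adding N(0, sigma^2) noise to each."""
    noisy = np.asarray(scores, dtype=float) + rng.normal(0.0, sigma, size=len(scores))
    return int(np.argmax(noisy))

def above_threshold(queries, threshold, sigma_t, sigma_q, rng):
    """Return the index of the first query whose noisy value clears the noisy threshold."""
    noisy_threshold = threshold + rng.normal(0.0, sigma_t)
    for i, q in enumerate(queries):
        if q + rng.normal(0.0, sigma_q) >= noisy_threshold:
            return i   # halt at the first "above" answer
    return None

rng = np.random.default_rng(1)
print(report_noisy_max([3.0, 2.5, 2.9], sigma=1.0, rng=rng))
print(above_threshold([0.2, 0.4, 0.9, 0.3], threshold=0.8, sigma_t=0.1, sigma_q=0.1, rng=rng))
```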
- Dynamic Privacy Allocation for Locally Differentially Private Federated Learning with Composite Objectives [10.528569272279999] (arXiv: 2023-08-02)
This paper proposes a differentially private federated learning algorithm for strongly convex but possibly nonsmooth problems.
The proposed algorithm adds artificial noise to the shared information to ensure privacy and dynamically allocates the time-varying noise variance to minimize an upper bound of the optimization error.
Numerical results show the superiority of the proposed algorithm over state-of-the-art methods.
- Adaptive Privacy Composition for Accuracy-first Mechanisms [55.53725113597539] (arXiv: 2023-06-24)
Noise reduction mechanisms produce increasingly accurate answers.
Analysts only pay the privacy cost of the least noisy or most accurate answer released.
There has yet to be any study on how ex-post private mechanisms compose.
We develop privacy filters that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms.
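As a rough illustration of the filter idea only (not the paper's construction, which also handles mechanisms whose privacy cost is known only ex post), a privacy filter is a running accountant that admits each adaptively chosen privacy cost only while a global budget still holds:

```python
class BasicPrivacyFilter:
    """Toy privacy filter: admit adaptively chosen epsilon costs until a fixed budget is hit."""

    def __init__(self, epsilon_budget):
        self.budget = epsilon_budget
        self.spent = 0.0

    def try_spend(self, epsilon):
        """Record the cost and return True if it fits the remaining budget; else refuse."""
        if self.spent + epsilon > self.budget:
            return False
        self.spent += epsilon
        return True

filt = BasicPrivacyFilter(epsilon_budget=1.0)
print(filt.try_spend(0.4), filt.try_spend(0.4), filt.try_spend(0.4))  # True True False
```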
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791] (arXiv: 2023-04-17)
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035] (arXiv: 2022-09-09)
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
- Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss [34.66940399825547] (arXiv: 2022-05-27)
Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (a.k.a. Stochastic Gradient Langevin Dynamics).
Questions about this algorithm's privacy loss remain open -- even in the seemingly simple setting of smooth convex losses over a bounded domain.
We characterize the differential privacy loss up to a constant factor and show that, after a small burn-in period, running SGD longer leaks no further privacy.
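A minimal sketch of the algorithm named there (Gaussian noise added to each gradient step, with iterates projected back into a bounded domain); the loss, step size, noise scale, and projection radius are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def noisy_gd(grad_fn, w0, steps, lr, sigma, radius, rng):
    """Gradient descent with Gaussian noise, projected onto an L2 ball of the given radius."""
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        w = w - lr * grad_fn(w) + rng.normal(0.0, sigma, size=w.shape)
        norm = np.linalg.norm(w)
        if norm > radius:          # projection keeps the iterates in a bounded domain
            w *= radius / norm
    return w

def grad(w):
    return 2.0 * (w - 1.0)         # gradient of the smooth convex loss ||w - 1||^2

rng = np.random.default_rng(0)
print(noisy_gd(grad, w0=[0.0, 0.0], steps=200, lr=0.1, sigma=0.05, radius=5.0, rng=rng))
```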
- A Differentially Private Framework for Deep Learning with Convexified Loss Functions [4.059849656394191] (arXiv: 2022-04-03)
Differential privacy (DP) has been applied in deep learning for preserving privacy of the underlying training sets.
Existing DP practice falls into three categories - objective perturbation, gradient perturbation and output perturbation.
We propose a novel output perturbation framework by injecting DP noise into a randomly sampled neuron.
- Smoothed Differential Privacy [55.415581832037084] (arXiv: 2021-07-04)
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
- On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization [55.937424268654645] (arXiv: 2020-08-20)
We consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm.
We provide a novel proof technique for deriving lower bounds on the regret, with benefits including simplicity, versatility, and an improved dependence on the error probability.
This list is automatically generated from the titles and abstracts of the papers on this site.