Adaptive Privacy Composition for Accuracy-first Mechanisms
- URL: http://arxiv.org/abs/2306.13824v2
- Date: Tue, 5 Dec 2023 19:21:20 GMT
- Title: Adaptive Privacy Composition for Accuracy-first Mechanisms
- Authors: Ryan Rogers and Gennady Samorodnitsky and Zhiwei Steven Wu and Aaditya
Ramdas
- Abstract summary: Noise reduction mechanisms produce increasingly accurate answers.
Analysts only pay the privacy cost of the least noisy or most accurate answer released.
There has yet to be any study on how ex-post private mechanisms compose.
We develop privacy filters that allow an analyst to adaptively switch between differentially private and ex-post private mechanisms.
- Score: 55.53725113597539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many practical applications of differential privacy, practitioners seek to
provide the best privacy guarantees subject to a target level of accuracy. A
recent line of work by Ligett et al. '17 and Whitehouse et al. '22 has
developed such accuracy-first mechanisms by leveraging the idea of noise
reduction that adds correlated noise to the sufficient statistic in a private
computation and produces a sequence of increasingly accurate answers. A major
advantage of noise reduction mechanisms is that the analysts only pay the
privacy cost of the least noisy or most accurate answer released. Despite this
appealing property in isolation, there has not been a systematic study on how
to use them in conjunction with other differentially private mechanisms. A
fundamental challenge is that the privacy guarantee for noise reduction
mechanisms is (necessarily) formulated as ex-post privacy that bounds the
privacy loss as a function of the released outcome. Furthermore, there has yet
to be any study on how ex-post private mechanisms compose, which would allow us to
track the accumulated privacy loss over several mechanisms. We develop privacy
filters [Rogers et al. '16, Feldman and Zrnic '21, and Whitehouse et al. '22]
that allow an analyst to adaptively switch between differentially private and
ex-post private mechanisms subject to an overall differential privacy
guarantee.
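To make the accounting concrete, here is a minimal Python sketch (not the paper's actual construction): a privacy filter tracks a global epsilon budget under simple additive composition while the analyst adaptively interleaves ordinary eps-DP queries with an accuracy-first routine that is charged only for the answer it finally releases. The class and function names, the epsilon grid, and the stopping rule are all illustrative assumptions.

```python
import numpy as np

class PrivacyFilter:
    """Tracks a global epsilon budget under simple additive composition."""
    def __init__(self, eps_budget: float):
        self.eps_budget = eps_budget
        self.eps_spent = 0.0

    def can_afford(self, eps: float) -> bool:
        return self.eps_spent + eps <= self.eps_budget

    def charge(self, eps: float) -> None:
        assert self.can_afford(eps), "filter budget would be exceeded"
        self.eps_spent += eps


def laplace_mech(value: float, sensitivity: float, eps: float) -> float:
    """Standard eps-DP Laplace mechanism."""
    return value + np.random.laplace(scale=sensitivity / eps)


def accuracy_first(value, sensitivity, eps_grid, good_enough, filt):
    """Release increasingly accurate answers and stop at the first one the
    analyst accepts; only that answer's epsilon is charged to the filter.

    Real noise-reduction mechanisms (Ligett et al. '17, Whitehouse et al. '22)
    achieve this by adding *correlated* noise so that the coarser, unreleased
    answers cost nothing; this sketch only mimics the ex-post accounting.
    """
    answer, eps_paid = None, 0.0
    for eps in sorted(eps_grid):              # coarse (small eps) to fine
        if not filt.can_afford(eps):
            break                             # cannot afford a finer answer
        answer, eps_paid = laplace_mech(value, sensitivity, eps), eps
        if good_enough(answer):
            break
    if answer is not None:
        filt.charge(eps_paid)                 # pay only for the answer kept
    return answer


# Adaptive switching: an ordinary DP query, then an accuracy-first query.
filt = PrivacyFilter(eps_budget=1.0)
count = laplace_mech(137.0, sensitivity=1.0, eps=0.2)
filt.charge(0.2)
mean_est = accuracy_first(value=42.0, sensitivity=1.0,
                          eps_grid=[0.1, 0.2, 0.4],
                          good_enough=lambda a: abs(a) < 100,  # analyst's own rule
                          filt=filt)
```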
Related papers
- Models Matter: Setting Accurate Privacy Expectations for Local and Central Differential Privacy [14.40391109414476]
We design and evaluate new explanations of differential privacy for the local and central models.
We find that consequences-focused explanations in the style of privacy nutrition labels are a promising approach for setting accurate privacy expectations.
arXiv Detail & Related papers (2024-08-16T01:21:57Z)
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
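For context, a small sketch of the classical mechanism-agnostic amplification bound that mechanism-specific frameworks of this kind aim to tighten; the numbers below are purely illustrative.

```python
import math

# Classical amplification by Poisson subsampling with rate q: an eps-DP base
# mechanism becomes roughly log(1 + q * (exp(eps) - 1))-DP (delta is scaled
# by q in the approximate-DP case). Mechanism-specific analyses can improve
# on this generic bound.
def amplified_eps(eps: float, q: float) -> float:
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

print(amplified_eps(eps=1.0, q=0.01))  # ~0.017, far below the base eps of 1.0
```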
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
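As a point of reference (an illustrative sketch, not the paper's derivation), the generic $f$-DP trade-off curve implied by pure eps-DP, attained by binary randomized response:

```python
import math

# f(alpha) lower-bounds the type-II error of any test distinguishing two
# neighboring datasets at type-I error alpha under a pure eps-DP mechanism.
# Tight, mechanism-specific curves sit on or above this generic baseline.
def tradeoff_pure_dp(alpha: float, eps: float) -> float:
    return max(0.0, 1.0 - math.exp(eps) * alpha, math.exp(-eps) * (1.0 - alpha))

print(tradeoff_pure_dp(alpha=0.05, eps=1.0))  # ~0.86
```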
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints [53.01656650117495]
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs.
The Brownian mechanism works by first adding Gaussian noise of high variance, corresponding to the final point of a simulated Brownian motion; the noise is then gradually reduced by tracing back along the Brownian path to earlier times.
We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm.
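A simplified sketch of the noise-reduction idea (the time grid and stopping rule below are placeholder assumptions, and the ex-post privacy accounting is omitted): answers are refined by stepping back along a simulated Brownian path via Brownian-bridge sampling, so each release is less noisy and consistent with the earlier ones.

```python
import numpy as np

def brownian_noise_reduction(x, times, stop):
    """`times` is a decreasing list of noise variances t_1 > t_2 > ...;
    `stop(answer, t)` is the analyst's (possibly data-dependent) rule."""
    t_prev = times[0]
    b_prev = np.random.normal(0.0, np.sqrt(t_prev))    # B_T ~ N(0, T)
    answers = [(t_prev, x + b_prev)]
    for t in times[1:]:
        if stop(x + b_prev, t_prev):                   # analyst is satisfied
            break
        # Brownian bridge: B_t | B_{t_prev} ~ N(t/t_prev * B_{t_prev},
        #                                       t * (t_prev - t) / t_prev)
        mean = (t / t_prev) * b_prev
        var = t * (t_prev - t) / t_prev
        b_prev = np.random.normal(mean, np.sqrt(var))
        t_prev = t
        answers.append((t, x + b_prev))
    return answers

# Example: refine until the noise standard deviation drops below 1.
out = brownian_noise_reduction(x=42.0, times=[100.0, 25.0, 4.0, 1.0, 0.25],
                               stop=lambda ans, t: np.sqrt(t) < 1.0)
```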
arXiv Detail & Related papers (2022-06-15T01:43:37Z)
- Private measures, random walks, and synthetic data [7.5764890276775665]
Differential privacy is a mathematical concept that provides an information-theoretic security guarantee.
We develop a private measure from a data set that allows us to efficiently construct private synthetic data.
A key ingredient in our construction is a new superregular random walk, whose joint distribution of steps is as regular as that of independent random variables.
arXiv Detail & Related papers (2022-04-20T00:06:52Z)
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- Auditing Differentially Private Machine Learning: How Private is Private SGD? [16.812900569416062]
We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis.
We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks.
arXiv Detail & Related papers (2020-06-13T20:00:18Z)
- Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics [2.614355818010333]
We provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g. exponential mechanisms) or concentrated DP mechanisms in any order.
We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch.
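For orientation, the classical composition baselines that such bounds improve on, sketched in Python with purely illustrative numbers:

```python
import math

# Basic composition simply adds the epsilons; advanced composition (Dwork,
# Rothblum, Vadhan) gives, for k mechanisms that are each (eps, delta)-DP,
# an overall (eps_adv, k*delta + delta_prime)-DP guarantee.
def basic_composition(eps: float, k: int) -> float:
    return k * eps

def advanced_composition(eps: float, k: int, delta_prime: float) -> float:
    return eps * math.sqrt(2 * k * math.log(1 / delta_prime)) \
           + k * eps * (math.exp(eps) - 1)

print(basic_composition(0.1, 100))           # 10.0
print(advanced_composition(0.1, 100, 1e-6))  # ~6.3
```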
arXiv Detail & Related papers (2020-04-15T17:33:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.