Individual Privacy Accounting with Gaussian Differential Privacy
- URL: http://arxiv.org/abs/2209.15596v2
- Date: Thu, 24 Aug 2023 14:00:18 GMT
- Title: Individual Privacy Accounting with Gaussian Differential Privacy
- Authors: Antti Koskela, Marlon Tobaben and Antti Honkela
- Abstract summary: Individual privacy accounting enables bounding differential privacy (DP) loss individually for each participant involved in the analysis.
In order to account for the individual privacy losses in a principled manner, we need a privacy accountant for adaptive compositions of randomised mechanisms.
- Score: 8.81666701090743
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Individual privacy accounting enables bounding differential privacy (DP) loss
individually for each participant involved in the analysis. This can be
informative, as the individual privacy losses are often considerably smaller
than those indicated by DP bounds based on the worst case at each data
access. In order to account for the individual privacy
losses in a principled manner, we need a privacy accountant for adaptive
compositions of randomised mechanisms, where the loss incurred at a given data
access is allowed to be smaller than the worst-case loss. This kind of analysis
has been carried out for Rényi differential privacy (RDP) by Feldman and
Zrnic (2021), but not yet for the so-called optimal privacy accountants. We
take first steps in this direction by providing a careful analysis using
Gaussian differential privacy, which gives optimal bounds for the Gaussian
mechanism, one of the most versatile DP mechanisms. This approach is based on
determining a certain supermartingale for the hockey-stick divergence and on
extending the Rényi divergence-based fully adaptive composition results by
Feldman and Zrnic. We also consider measuring the individual
$(\varepsilon,\delta)$-privacy losses using the so-called privacy loss
distributions. With the help of the Blackwell theorem, we can then make use of
the RDP analysis to construct an approximate individual
$(\varepsilon,\delta)$-accountant.
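As a brief illustration of the Gaussian-mechanism bounds discussed in the abstract (a minimal sketch, not the paper's individual accountant; the sensitivity and noise values below are hypothetical), the snippet computes the GDP parameter $\mu = \sqrt{k}\,\Delta/\sigma$ for $k$ adaptive compositions of the Gaussian mechanism and converts it to a tight $(\varepsilon,\delta)$ guarantee using the standard GDP duality of Dong, Roth and Su:

```python
from math import exp, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF


def gdp_mu(sensitivity: float, sigma: float, steps: int = 1) -> float:
    """GDP parameter of the Gaussian mechanism with L2 sensitivity
    `sensitivity` and noise scale `sigma`; k adaptive compositions
    of mu-GDP compose to sqrt(k) * mu."""
    return sqrt(steps) * sensitivity / sigma


def gdp_to_delta(mu: float, eps: float) -> float:
    """Tight (eps, delta) curve of mu-GDP:
    delta(eps) = Phi(-eps/mu + mu/2) - e^eps * Phi(-eps/mu - mu/2)."""
    return Phi(-eps / mu + mu / 2) - exp(eps) * Phi(-eps / mu - mu / 2)


# Hypothetical setting: unit sensitivity, sigma = 5, 100 adaptive steps.
mu = gdp_mu(sensitivity=1.0, sigma=5.0, steps=100)  # mu = sqrt(100)/5 = 2.0
print(mu, gdp_to_delta(mu, eps=4.0))
```

In an individual accountant, the per-step sensitivity (and hence $\mu$) would be data-dependent and typically smaller than the worst case, which is exactly where the gain over standard accounting comes from.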
Related papers
- Avoiding Pitfalls for Privacy Accounting of Subsampled Mechanisms under Composition [13.192083588571384]
We consider the problem of computing tight privacy guarantees for the composition of subsampled differentially private mechanisms.
Recent algorithms can numerically compute the privacy parameters to arbitrary precision but must be carefully applied.
arXiv Detail & Related papers (2024-05-27T20:30:12Z) - How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analyses obtained under the two types of batch sampling.
arXiv Detail & Related papers (2024-03-26T13:02:43Z) - Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
arXiv Detail & Related papers (2024-03-07T21:22:07Z) - Fixed-Budget Differentially Private Best Arm Identification [62.36929749450298]
We study best arm identification (BAI) in linear bandits in the fixed-budget regime under differential privacy constraints.
We derive a minimax lower bound on the error probability, and demonstrate that the lower and the upper bounds decay exponentially in $T$.
arXiv Detail & Related papers (2024-01-17T09:23:25Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Optimal Accounting of Differential Privacy via Characteristic Function [25.78065563380023]
We propose a unification of recent advances (Rényi DP, privacy profiles, $f$-DP and the PLD formalism) via the characteristic function ($\phi$-function) of a certain "worst-case" privacy loss random variable.
We show that our approach allows natural adaptive composition like Rényi DP, provides exactly tight privacy accounting like PLD, and can be (often losslessly) converted to privacy profile and $f$-DP.
arXiv Detail & Related papers (2021-06-16T06:13:23Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Individual Privacy Accounting via a Renyi Filter [33.65665839496798]
We give a method for tighter privacy loss accounting based on the value of a personalized privacy loss estimate for each individual.
Our filter is simpler and tighter than the known filter for $(\varepsilon,\delta)$-differential privacy by Rogers et al.
arXiv Detail & Related papers (2020-08-25T17:49:48Z) - Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics [2.614355818010333]
We provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g. exponential mechanisms) or concentrated DP mechanisms in any order.
We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch.
arXiv Detail & Related papers (2020-04-15T17:33:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.