Optimal Accounting of Differential Privacy via Characteristic Function
- URL: http://arxiv.org/abs/2106.08567v1
- Date: Wed, 16 Jun 2021 06:13:23 GMT
- Title: Optimal Accounting of Differential Privacy via Characteristic Function
- Authors: Yuqing Zhu, Jinshuo Dong and Yu-Xiang Wang
- Abstract summary: We propose a unification of recent advances (Renyi DP, privacy profiles, $f$-DP and the PLD formalism) via the characteristic function ($\phi$-function) of a certain ``worst-case'' privacy loss random variable.
We show that our approach allows natural adaptive composition like Renyi DP, provides exactly tight privacy accounting like PLD, and can be (often losslessly) converted to privacy profile and $f$-DP.
- Score: 25.78065563380023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Characterizing the privacy degradation over compositions, i.e., privacy
accounting, is a fundamental topic in differential privacy (DP) with many
applications to differentially private machine learning and federated learning.
We propose a unification of recent advances (Renyi DP, privacy profiles,
$f$-DP and the PLD formalism) via the characteristic function ($\phi$-function)
of a certain ``worst-case'' privacy loss random variable.
We show that our approach allows natural adaptive composition like Renyi DP,
provides exactly tight privacy accounting like PLD, and can be (often
losslessly) converted to privacy profile and $f$-DP, thus providing
$(\epsilon,\delta)$-DP guarantees and interpretable tradeoff functions.
Algorithmically, we propose an analytical Fourier accountant that represents
the complex logarithm of $\phi$-functions symbolically and uses Gaussian
quadrature for numerical computation. On several popular DP mechanisms and
their subsampled counterparts, we demonstrate the flexibility and tightness of
our approach in theory and experiments.
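The accountant described above keeps the complex logarithm of the $\phi$-function in symbolic form, composes mechanisms by adding these logarithms, and recovers $(\epsilon,\delta)$ numerically. Below is a minimal, hypothetical sketch of that workflow, specialized to the Gaussian mechanism (whose privacy loss random variable is itself Gaussian) and using a generic Gil-Pelaez characteristic-function inversion with Gauss-Legendre quadrature in place of the paper's exact conversion formulas; all function names, parameters, and the inversion route are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical sketch of a phi-function ("Fourier") accountant, specialized
# to the Gaussian mechanism: after k compositions the privacy loss random
# variable L is N(k/(2*sigma^2), k/sigma^2). Composition adds log-phi;
# (eps, delta) is recovered by Gil-Pelaez inversion of the characteristic
# function with Gauss-Legendre quadrature.

def log_phi_gaussian(t, sigma, k=1):
    """log characteristic function of the privacy loss of k composed
    Gaussian mechanisms with sensitivity 1 and noise scale sigma."""
    mean, var = k / (2.0 * sigma**2), k / sigma**2
    return 1j * mean * t - 0.5 * var * t**2

def upper_tail(log_phi, x, t_max=50.0, nodes=2000):
    """Pr[L > x] via Gil-Pelaez inversion, integrating over (0, t_max]
    with Gauss-Legendre quadrature; t_max and nodes are accuracy knobs."""
    t, w = np.polynomial.legendre.leggauss(nodes)
    t = 0.5 * t_max * (t + 1.0)      # map [-1, 1] onto (0, t_max]
    w = 0.5 * t_max * w
    integrand = np.imag(np.exp(-1j * t * x + log_phi(t))) / t
    return 0.5 + np.sum(w * integrand) / np.pi

def delta_from_phi(eps, sigma, k=1):
    """Tight delta(eps) = Pr[L > eps] - e^eps * Pr[L' <= -eps]; for the
    Gaussian mechanism the two worst-case loss variables share one law."""
    lp = lambda t: log_phi_gaussian(t, sigma, k)
    return upper_tail(lp, eps) - np.exp(eps) * (1.0 - upper_tail(lp, -eps))

if __name__ == "__main__":
    sigma, k, eps = 5.0, 10, 1.0
    approx = delta_from_phi(eps, sigma, k)
    # Closed-form check for the composed Gaussian loss (Balle & Wang, 2018).
    m, s = k / (2.0 * sigma**2), np.sqrt(k) / sigma
    exact = norm.cdf((m - eps) / s) - np.exp(eps) * norm.cdf((-m - eps) / s)
    print(f"delta via phi inversion: {approx:.6e}   closed form: {exact:.6e}")
```

Working in the log-$\phi$ domain makes composition a simple addition here; per the abstract, the paper's accountant carries out the same bookkeeping symbolically for general (including subsampled) mechanisms.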
Related papers
- How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analyses obtained under the two types of batch sampling.
arXiv Detail & Related papers (2024-03-26T13:02:43Z) - Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
arXiv Detail & Related papers (2024-03-07T21:22:07Z) - Shifted Interpolation for Differential Privacy [6.1836947007564085]
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning.
This paper establishes the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy.
Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization.
arXiv Detail & Related papers (2024-03-01T04:50:04Z) - Privacy Profiles for Private Selection [21.162924003105484]
We work out an easy-to-use recipe that bounds privacy profiles of ReportNoisyMax and PrivateTuning using the privacy profiles of the base algorithms they corral.
Our approach improves over all regimes of interest and leads to substantial benefits in end-to-end private learning experiments.
arXiv Detail & Related papers (2024-02-09T08:31:46Z) - A Generalized Shuffle Framework for Privacy Amplification: Strengthening Privacy Guarantees and Enhancing Utility [4.7712438974100255]
We show how to shuffle mechanisms in the $(\epsilon_i,\delta_i)$-PLDP setting with personalized privacy parameters.
We prove that the shuffled $(\epsilon_i,\delta_i)$-PLDP process approximately preserves $\mu$-Gaussian Differential Privacy with $\mu = \sqrt{\frac{2}{\sum_{i=1}^{n}\frac{1-\delta_i}{1+e^{\epsilon_i}} - \max_{i}\frac{1-\delta_i}{1+e^{\epsilon_i}}}}$ (a numeric check of this expression appears after the related-papers list below).
arXiv Detail & Related papers (2023-12-22T02:31:46Z) - Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano [83.5933307263932]
We study data reconstruction attacks for discrete data and analyze them under the framework of hypothesis testing.
We show that if the underlying private data takes values from a set of size $M$, then the target privacy parameter $\epsilon$ can be $O(\log M)$ before the adversary gains significant inferential power.
arXiv Detail & Related papers (2022-10-24T23:50:12Z) - Individual Privacy Accounting with Gaussian Differential Privacy [8.81666701090743]
Individual privacy accounting enables bounding differential privacy (DP) loss individually for each participant involved in the analysis.
In order to account for the individual privacy losses in a principled manner, we need a privacy accountant for adaptive compositions of randomised mechanisms.
arXiv Detail & Related papers (2022-09-30T17:19:40Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups which are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely accepted and widely applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
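As a small numeric illustration of the $\mu$-Gaussian DP expression quoted in the shuffle-framework entry above: that expression was reconstructed from a garbled abstract, so both it and this snippet should be read as a sketch under that assumption rather than as the authors' statement.

```python
import numpy as np

def shuffle_gdp_mu(eps, delta):
    """mu-GDP parameter for shuffling n users with personalized
    (eps_i, delta_i)-PLDP guarantees, per the reconstructed formula:
    mu = sqrt(2 / (sum_i (1-delta_i)/(1+e^{eps_i})
                   - max_i (1-delta_i)/(1+e^{eps_i})))."""
    eps, delta = np.asarray(eps, float), np.asarray(delta, float)
    terms = (1.0 - delta) / (1.0 + np.exp(eps))
    return np.sqrt(2.0 / (terms.sum() - terms.max()))

# Example: 10,000 users, each with local eps = 2 and delta = 1e-6.
mu = shuffle_gdp_mu(np.full(10_000, 2.0), np.full(10_000, 1e-6))
print(f"mu after shuffling: {mu:.4f}")   # smaller mu means stronger privacy
```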