Auditing Differential Privacy in High Dimensions with the Kernel Quantum Rényi Divergence
- URL: http://arxiv.org/abs/2205.13941v1
- Date: Fri, 27 May 2022 12:34:17 GMT
- Title: Auditing Differential Privacy in High Dimensions with the Kernel Quantum Rényi Divergence
- Authors: Carles Domingo-Enrich, Youssef Mroueh
- Abstract summary: We propose relaxations of differential privacy based on new divergences on probability distributions.
We show that the regularized kernel Rényi divergence can be estimated from samples even in high dimensions.
- Score: 29.796646032324514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential privacy (DP) is the de facto standard for private data release
and private machine learning. Auditing black-box DP algorithms and mechanisms
to certify whether they satisfy a certain DP guarantee is challenging,
especially in high dimension. We propose relaxations of differential privacy
based on new divergences on probability distributions: the kernel Rényi
divergence and its regularized version. We show that the regularized kernel
Rényi divergence can be estimated from samples even in high dimensions,
giving rise to auditing procedures for $\varepsilon$-DP,
$(\varepsilon,\delta)$-DP and $(\alpha,\varepsilon)$-Rényi DP.
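The estimation idea can be sketched numerically. The code below is an illustrative sketch, not the paper's estimator: it uses random Fourier features as a finite-dimensional stand-in for the kernel embedding, forms trace-one regularized covariance matrices from samples of a mechanism's output on two neighboring datasets, and evaluates a Petz-type quantum Rényi divergence $\frac{1}{\alpha-1}\log\operatorname{Tr}(\rho^\alpha\sigma^{1-\alpha})$ between them. All function names and parameter choices here are assumptions for illustration.

```python
import numpy as np

def random_fourier_features(X, W, b):
    # Random Fourier feature map approximating an RBF kernel.
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def density_operator(X, W, b, reg=1e-3):
    # Trace-one, regularized covariance of the feature embeddings:
    # a finite-dimensional proxy for the (regularized) kernel covariance operator.
    Phi = random_fourier_features(X, W, b)
    C = Phi.T @ Phi / len(X)
    C = C + reg * np.eye(C.shape[0])
    return C / np.trace(C)

def quantum_renyi(rho, sig, alpha=2.0):
    # D_alpha(rho || sig) = log Tr(rho^alpha sig^(1-alpha)) / (alpha - 1),
    # computed via eigendecompositions of the two PSD matrices.
    er, Vr = np.linalg.eigh(rho)
    es, Vs = np.linalg.eigh(sig)
    rho_a = Vr @ np.diag(np.clip(er, 0.0, None) ** alpha) @ Vr.T
    sig_b = Vs @ np.diag(np.clip(es, 1e-12, None) ** (1.0 - alpha)) @ Vs.T
    return float(np.log(np.trace(rho_a @ sig_b)) / (alpha - 1.0))

rng = np.random.default_rng(0)
d, D = 3, 64
W = rng.normal(size=(d, D))            # shared feature map for both samples
b = rng.uniform(0.0, 2.0 * np.pi, D)
X = rng.normal(size=(500, d))          # mechanism outputs on dataset S
Y = rng.normal(size=(500, d)) + 1.0    # mechanism outputs on a neighbor S'
div = quantum_renyi(density_operator(X, W, b), density_operator(Y, W, b))
```

An auditing procedure would compare such an estimate (with a statistical error margin) against the divergence level implied by the claimed DP guarantee.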
Related papers
- Differential Privacy in Kernelized Contextual Bandits via Random Projections [8.658538065693206]
We consider the problem of contextual kernel bandits, where the underlying reward function belongs to a known Reproducing Kernel Hilbert Space. We propose a novel algorithm that achieves the state-of-the-art cumulative regret of $\widetilde{\mathcal{O}}\left(\sqrt{\gamma_T T}+\frac{\gamma_T}{\varepsilon_{\mathrm{DP}}}\right)$.
arXiv Detail & Related papers (2025-07-18T03:54:49Z) - Beyond Laplace and Gaussian: Exploring the Generalized Gaussian Mechanism for Private Machine Learning [49.66162382667325]
We investigate the Generalized Gaussian mechanism, which samples the additive noise term $x$ with probability proportional to $e^{-\frac{|x|^{\beta}}{\sigma^{\beta}}}$ for some $\beta \geq 1$. We show that privacy accounting for the GG Mechanism and its variants is dimension-independent, which substantially improves the computational cost of privacy accounting.
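Sampling from a density proportional to $e^{-(|x|/\sigma)^\beta}$ can be sketched with the standard gamma-transform trick: if $G \sim \mathrm{Gamma}(1/\beta, 1)$ then $\sigma G^{1/\beta}$ with a random sign has the desired law ($\beta=1$ recovers Laplace noise, $\beta=2$ Gaussian). A minimal sketch, not the paper's implementation:

```python
import numpy as np

def generalized_gaussian_noise(sigma, beta, size, rng):
    # Density proportional to exp(-(|x|/sigma)**beta).
    # If Z = (|X|/sigma)**beta, then Z ~ Gamma(1/beta, 1), so draw the
    # magnitude as sigma * G**(1/beta) and attach a uniform random sign.
    g = rng.gamma(shape=1.0 / beta, scale=1.0, size=size)
    sign = rng.choice([-1.0, 1.0], size=size)
    return sign * sigma * g ** (1.0 / beta)

rng = np.random.default_rng(1)
laplace_like = generalized_gaussian_noise(1.0, 1.0, 200_000, rng)   # beta = 1
gaussian_like = generalized_gaussian_noise(1.0, 2.0, 200_000, rng)  # beta = 2
```

With $\sigma=1$, the $\beta=1$ draws should have standard deviation $\sqrt{2}$ (Laplace with scale 1) and the $\beta=2$ draws standard deviation $\sqrt{1/2}$ (the exponent $x^2$ corresponds to variance $1/2$).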
arXiv Detail & Related papers (2025-06-14T15:49:25Z) - Comparing privacy notions for protection against reconstruction attacks in machine learning [10.466570297146953]
In the machine learning community, reconstruction attacks are a principal concern and have been identified even in federated learning (FL). In response to these threats, the privacy community recommends the use of differential privacy (DP) in the gradient descent algorithm, termed DP-SGD. In this paper, we lay a foundational framework for comparing mechanisms with differing notions of privacy guarantees.
arXiv Detail & Related papers (2025-02-06T13:04:25Z) - Differentially Private Stochastic Gradient Descent with Fixed-Size Minibatches: Tighter RDP Guarantees with or without Replacement [6.494759487261151]
DP-SGD gradients in a fixed-size subsampling regime exhibit lower variance in practice in addition to memory usage benefits.
We show for the first time that the widely-used Poisson subsampling and FSwoR with replace-one adjacency have the same privacy to leading order in the sampling probability.
arXiv Detail & Related papers (2024-08-19T23:57:31Z) - Private Mean Estimation with Person-Level Differential Privacy [6.621676316292624]
We study person-level differentially private mean estimation in the case where each person holds multiple samples.
We give computationally efficient algorithms under approximate-DP and computationally inefficient algorithms under pure DP, and our nearly matching lower bounds hold for the most permissive case of approximate DP.
arXiv Detail & Related papers (2024-05-30T18:20:35Z) - How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analysis when using the two types of batch sampling.
arXiv Detail & Related papers (2024-03-26T13:02:43Z) - Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
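One simple bounded-support modification is a rectified Gaussian: add Gaussian noise, then clamp the output to a fixed interval so the mechanism's support is bounded. The sketch below is an illustrative assumption about the construction, not the paper's exact mechanism or its accounting:

```python
import numpy as np

def rectified_gaussian_mechanism(value, sigma, lo, hi, rng):
    # Gaussian mechanism whose output is projected onto [lo, hi];
    # mass outside the interval piles up on the two endpoints.
    noisy = value + rng.normal(0.0, sigma)
    return min(max(noisy, lo), hi)

rng = np.random.default_rng(2)
outputs = [rectified_gaussian_mechanism(0.3, 1.0, 0.0, 1.0, rng)
           for _ in range(1000)]
```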
arXiv Detail & Related papers (2024-03-07T21:22:07Z) - Privacy Profiles for Private Selection [21.162924003105484]
We work out an easy-to-use recipe that bounds privacy profiles of ReportNoisyMax and PrivateTuning using the privacy profiles of the base algorithms they corral.
Our approach improves over all regimes of interest and leads to substantial benefits in end-to-end private learning experiments.
arXiv Detail & Related papers (2024-02-09T08:31:46Z) - Differentially-Private Bayes Consistency [70.92545332158217]
We construct a Bayes consistent learning rule that satisfies differential privacy (DP).
We prove that any VC class can be privately learned in a semi-supervised setting with a near-optimal sample complexity.
arXiv Detail & Related papers (2022-12-08T11:57:30Z) - Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis
Testing: A Lesson From Fano [83.5933307263932]
We study data reconstruction attacks for discrete data and analyze it under the framework of hypothesis testing.
We show that if the underlying private data takes values from a set of size $M$, then the target privacy parameter $\epsilon$ can be $O(\log M)$ before the adversary gains significant inferential power.
arXiv Detail & Related papers (2022-10-24T23:50:12Z) - Normalized/Clipped SGD with Perturbation for Differentially Private
Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
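The common structure of both algorithms is a per-example gradient transformation before averaging and noising. A generic numpy sketch of one DP-SGD-style update with clipping (DP-NSGD would instead normalize each gradient by its norm); names and hyperparameters are illustrative, not the paper's exact algorithms:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_mult, lr, rng):
    # Clip each example's gradient to L2 norm <= clip_norm, average,
    # add Gaussian noise scaled to the clipping threshold, then step.
    n = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / n, size=params.shape)
    return params - lr * (avg + noise)

rng = np.random.default_rng(3)
params = np.zeros(4)
grads = [rng.normal(size=4) * 5 for _ in range(32)]  # toy per-example gradients
new_params = dp_sgd_step(params, grads, clip_norm=1.0, noise_mult=1.0,
                         lr=0.1, rng=rng)
```

Clipping bounds each example's contribution, so with zero noise the update norm can never exceed `lr * clip_norm`; the noise multiplier then trades accuracy for the privacy guarantee.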
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Shuffle Gaussian Mechanism for Differential Privacy [2.7564955518050693]
We study the mechanism's Rényi differential privacy (RDP), showing that it is of the form: $\epsilon(\lambda) \leq \frac{1}{\lambda-1}\log\left(\frac{e^{-d\lambda/2\sigma^2}}{n^{d\lambda}}\sum_{k_1+\dotsb+k_n=\lambda}\frac{\lambda!}{k_1!\dotsm k_n!}\cdots\right)$
arXiv Detail & Related papers (2022-06-20T04:54:16Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Differentially Private Exploration in Reinforcement Learning with Linear
Representation [102.17246636801649]
We first consider the setting of linear-mixture MDPs (Ayoub et al., 2020) (a.k.a. the model-based setting) and provide a unified framework for analyzing joint and local differentially private (DP) exploration.
We further study privacy-preserving exploration in linear MDPs (Jin et al., 2020) (a.k.a. the model-free setting), where we provide a $\widetilde{O}(\sqrt{K/\epsilon})$ regret bound for $(\epsilon,\delta)$-DP.
arXiv Detail & Related papers (2021-12-02T19:59:50Z) - A unified interpretation of the Gaussian mechanism for differential
privacy through the sensitivity index [61.675604648670095]
We argue that the three prevailing interpretations of the GM, namely $(\varepsilon, \delta)$-DP, $f$-DP and Rényi DP, can be expressed by using a single parameter $\psi$, which we term the sensitivity index.
$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
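Reading $\psi$ as the ratio of query sensitivity to noise scale, $\psi = \Delta_2 f / \sigma$ (an assumption consistent with "sensitivity of the query and magnitude of the noise"), the exact $(\varepsilon, \delta)$ curve of the Gaussian mechanism from Balle & Wang (2018) indeed depends on the mechanism only through that single ratio:

```python
from math import erf, exp, sqrt

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gaussian_delta(psi, eps):
    # Tight delta(eps) for the Gaussian mechanism (Balle & Wang, 2018),
    # written purely in terms of psi = sensitivity / noise scale:
    # delta = Phi(psi/2 - eps/psi) - e^eps * Phi(-psi/2 - eps/psi)
    return phi(psi / 2.0 - eps / psi) - exp(eps) * phi(-psi / 2.0 - eps / psi)

delta = gaussian_delta(psi=1.0, eps=1.0)  # roughly 0.127
```

Increasing $\varepsilon$ or decreasing $\psi$ (less sensitive query, or more noise) both drive $\delta$ down, which is the single-parameter behavior the sensitivity index captures.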
arXiv Detail & Related papers (2021-09-22T06:20:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.