Local Differential Privacy Is Equivalent to Contraction of
$E_\gamma$-Divergence
- URL: http://arxiv.org/abs/2102.01258v1
- Date: Tue, 2 Feb 2021 02:18:12 GMT
- Title: Local Differential Privacy Is Equivalent to Contraction of
$E_\gamma$-Divergence
- Authors: Shahab Asoodeh, Maryam Aliakbarpour, and Flavio P. Calmon
- Abstract summary: We show that LDP constraints can be equivalently cast in terms of the contraction coefficient of the $E_\gamma$-divergence.
We then use this equivalent formula to express LDP guarantees of privacy mechanisms in terms of contraction coefficients of arbitrary $f$-divergences.
- Score: 7.807294944710216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the local differential privacy (LDP) guarantees of a
randomized privacy mechanism via its contraction properties. We first show that
LDP constraints can be equivalently cast in terms of the contraction
coefficient of the $E_\gamma$-divergence. We then use this equivalent formula
to express LDP guarantees of privacy mechanisms in terms of contraction
coefficients of arbitrary $f$-divergences. When combined with standard
estimation-theoretic tools (such as Le Cam's and Fano's converse methods), this
result allows us to study the trade-off between privacy and utility in several
hypothesis testing problems and in minimax and Bayesian estimation problems.
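To make the central object concrete, below is a minimal numerical sketch (not the paper's contraction-coefficient machinery): the $E_\gamma$-divergence, i.e. the hockey-stick divergence $E_\gamma(P\|Q)=\sum_y \max\{P(y)-\gamma Q(y),\,0\}$ over a finite alphabet, together with the standard fact that a mechanism $K$ is $\varepsilon$-LDP if and only if $E_{e^\varepsilon}(K(\cdot|x)\,\|\,K(\cdot|x'))=0$ for all input pairs. Binary randomized response serves as the example mechanism; all names are illustrative.

```python
import numpy as np

def e_gamma(p, q, gamma):
    """Hockey-stick divergence over a finite alphabet:
    E_gamma(P||Q) = sum_y max(P(y) - gamma * Q(y), 0)."""
    return np.maximum(p - gamma * q, 0.0).sum()

eps = 1.0
# Binary randomized response: report the true bit w.p. e^eps / (1 + e^eps).
t = np.exp(eps) / (1 + np.exp(eps))
K0 = np.array([t, 1 - t])  # output distribution given input 0
K1 = np.array([1 - t, t])  # output distribution given input 1

# K is eps-LDP iff E_{e^eps}(K(.|x) || K(.|x')) = 0 for every input pair.
print(e_gamma(K0, K1, np.exp(eps)))  # 0.0 -> the mechanism is 1.0-LDP
print(e_gamma(K0, K1, np.exp(0.5)))  # > 0 -> it is not 0.5-LDP
```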
Related papers
- Private Language Models via Truncated Laplacian Mechanism [18.77713904999236]
We propose a novel private embedding method called the high dimensional truncated Laplacian mechanism.
We show that our method has a lower variance compared to the previous private word embedding methods.
Remarkably, even in the high privacy regime, our approach only incurs a slight decrease in utility compared to the non-private scenario.
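As a rough illustration of the building block named in the title (and not the paper's high-dimensional mechanism or its noise calibration), here is one standard way to draw truncated Laplacian noise by rejection sampling; the function name and parameters are hypothetical. Restricting the noise to a bounded interval is what yields lower variance than plain Laplacian noise.

```python
import numpy as np

def truncated_laplace(scale, bound, size, rng=None):
    """Draw Laplace(0, scale) noise conditioned on |noise| <= bound
    via rejection sampling. Illustrative sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(0)
    while out.size < size:
        draw = rng.laplace(0.0, scale, size)
        out = np.concatenate([out, draw[np.abs(draw) <= bound]])
    return out[:size]

noise = truncated_laplace(scale=1.0, bound=2.0, size=5)
print(noise)  # all entries lie in [-2, 2]
```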
arXiv Detail & Related papers (2024-10-10T15:25:02Z)
- Beyond the Calibration Point: Mechanism Comparison in Differential Privacy [29.635987854560828]
In differentially private (DP) machine learning, the privacy guarantees of DP mechanisms are often reported and compared on the basis of a single $(\varepsilon, \delta)$-pair.
This practice overlooks that DP guarantees can vary substantially even between mechanisms sharing a given $(\varepsilon, \delta)$.
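One way to see this numerically is to trace a mechanism's entire privacy profile $\delta(\varepsilon)$ rather than a single calibration point. The sketch below does so for the Gaussian mechanism using the analytic expression of Balle and Wang (2018), which is assumed here and is not taken from the paper above.

```python
import numpy as np
from scipy.stats import norm

def gaussian_delta(eps, sigma, sensitivity=1.0):
    """delta(eps) privacy profile of the Gaussian mechanism
    (analytic Gaussian mechanism formula, Balle & Wang, 2018)."""
    a = sensitivity / (2 * sigma)
    b = eps * sigma / sensitivity
    return norm.cdf(a - b) - np.exp(eps) * norm.cdf(-a - b)

# Mechanisms agreeing at one (eps, delta) point can diverge elsewhere,
# so the full curve is more informative than any single pair.
for eps in [0.5, 1.0, 2.0]:
    print(eps, gaussian_delta(eps, sigma=1.0))
```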
arXiv Detail & Related papers (2024-06-13T08:30:29Z)
- Fixed-Budget Differentially Private Best Arm Identification [62.36929749450298]
We study best arm identification (BAI) in linear bandits in the fixed-budget regime under differential privacy constraints.
We derive a minimax lower bound on the error probability and demonstrate that both the lower and upper bounds decay exponentially in $T$.
arXiv Detail & Related papers (2024-01-17T09:23:25Z)
- Unified Enhancement of Privacy Bounds for Mixture Mechanisms via $f$-Differential Privacy [41.51051636162107]
This paper focuses on improving privacy bounds for shuffling models and one-iteration differentially private gradient descent.
We derive a closed-form expression of the trade-off function for shuffling models that outperforms the most up-to-date results.
We also study an $f$-DP analog of the advanced joint convexity of the hockey-stick divergence related to $(\epsilon,\delta)$-DP.
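For orientation on the central object of $f$-DP, the sketch below evaluates the textbook $\mu$-GDP trade-off function $G_\mu(\alpha)=\Phi(\Phi^{-1}(1-\alpha)-\mu)$, which maps a type-I error level to the smallest achievable type-II error; this is standard $f$-DP background, not the paper's closed-form result for shuffling.

```python
import numpy as np
from scipy.stats import norm

def gdp_tradeoff(alpha, mu):
    """mu-GDP trade-off function: the least type-II error achievable
    at type-I error alpha when distinguishing neighboring inputs."""
    return norm.cdf(norm.ppf(1 - alpha) - mu)

alpha = np.linspace(0.01, 0.99, 5)
print(gdp_tradeoff(alpha, mu=1.0))  # larger mu -> lower curve -> less privacy
```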
arXiv Detail & Related papers (2023-10-30T19:37:51Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
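As a minimal sketch of a tight $f$-DP guarantee for one discrete-valued mechanism (the paper's general results are not reproduced here): binary randomized response attains the pure $\varepsilon$-DP trade-off function $f(\alpha)=\max\{0,\;1-e^{\varepsilon}\alpha,\;e^{-\varepsilon}(1-\alpha)\}$, a standard fact assumed below.

```python
import numpy as np

def pure_dp_tradeoff(alpha, eps):
    """Tight trade-off function of an eps-DP mechanism; binary
    randomized response achieves this piecewise-linear curve."""
    alpha = np.asarray(alpha, dtype=float)
    return np.maximum.reduce([np.zeros_like(alpha),
                              1 - np.exp(eps) * alpha,
                              np.exp(-eps) * (1 - alpha)])

print(pure_dp_tradeoff(np.linspace(0, 1, 6), eps=1.0))
```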
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens [69.10072367807095]
Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries.
We relax the assumption of a Neyman--Pearson optimal (NPO) adversary to a Generalized Likelihood Ratio Test (GLRT) adversary.
This mild relaxation leads to improved privacy guarantees.
arXiv Detail & Related papers (2022-10-24T08:24:10Z)
- Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions [49.726408540784334]
A key question in privacy loss distribution (PLD) based accounting is how to approximate any (potentially continuous) PLD with a PLD over any specified discrete support.
We show that our pessimistic estimate is the best possible among all pessimistic estimates.
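A minimal sketch of the pessimistic direction, under assumptions not taken from the paper: since $\delta(\varepsilon)=\mathbb{E}\,[(1-e^{\varepsilon-L})_+]$ is nondecreasing in the privacy loss $L$, rounding every loss value up to the next grid point can only over-estimate $\delta$, yielding a valid (if crude) pessimistic discrete PLD; the paper's optimal pessimistic estimate is more refined.

```python
import numpy as np

def discretize_pld_pessimistic(losses, probs, grid_step):
    """Round each privacy-loss value up to the next grid point and
    merge probability mass; every delta(eps) computed from the result
    upper-bounds the true value. Crude illustrative sketch."""
    grid = np.ceil(np.asarray(losses) / grid_step) * grid_step
    out = {}
    for g, p in zip(grid, probs):
        out[g] = out.get(g, 0.0) + p
    return out

print(discretize_pld_pessimistic([0.13, 0.24, 0.24, -0.85],
                                 [0.3, 0.2, 0.2, 0.3], 0.1))
```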
arXiv Detail & Related papers (2022-07-10T04:25:02Z)
- Nonparametric extensions of randomized response for private confidence sets [51.75485869914048]
This work derives methods for performing nonparametric, nonasymptotic statistical inference for population means under the constraint of local differential privacy (LDP).
We present confidence intervals (CIs) and time-uniform confidence sequences (CSs) for the population mean $\mu^\star$ when only given access to the privatized data.
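A minimal sketch of the privatize-then-debias pipeline for a Bernoulli mean, with a plain CLT interval standing in for the paper's nonasymptotic CIs and CSs; the constants and estimator below are the textbook randomized-response ones, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, n, mu_true = 1.0, 100_000, 0.3
t = np.exp(eps) / (1 + np.exp(eps))  # prob. of reporting the true bit

x = rng.binomial(1, mu_true, n)            # private data
y = np.where(rng.random(n) < t, x, 1 - x)  # eps-LDP randomized response

# Debias using E[y] = (2t - 1) * mu + (1 - t).
mu_hat = (y.mean() - (1 - t)) / (2 * t - 1)

# Asymptotic 95% CI via the CLT (the paper's intervals are nonasymptotic).
se = np.sqrt(y.var() / n) / (2 * t - 1)
print(mu_hat, (mu_hat - 1.96 * se, mu_hat + 1.96 * se))
```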
arXiv Detail & Related papers (2022-02-17T16:04:49Z)
- Locally Differentially Private Bayesian Inference [23.882144188177275]
Local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in scenarios where the aggregator is not trustworthy.
We provide a noise-aware probabilistic modeling framework, which allows Bayesian inference to take into account the noise added for privacy under LDP.
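A minimal sketch of what "noise-aware" means, under illustrative assumptions not taken from the paper: Bernoulli bits are released through the Laplace mechanism, and a grid posterior uses a likelihood that marginalizes over the privacy noise instead of treating the noisy values as exact.

```python
import numpy as np

rng = np.random.default_rng(1)
eps, n, theta_true = 1.0, 500, 0.4
b = 1.0 / eps                      # Laplace scale for sensitivity 1

x = rng.binomial(1, theta_true, n)
z = x + rng.laplace(0.0, b, n)     # eps-LDP releases

def laplace_pdf(u, scale):
    return np.exp(-np.abs(u) / scale) / (2 * scale)

# Noise-aware likelihood: p(z | theta) = theta*Lap(z-1) + (1-theta)*Lap(z).
thetas = np.linspace(0.001, 0.999, 999)
loglik = np.array([np.log(t * laplace_pdf(z - 1, b)
                          + (1 - t) * laplace_pdf(z, b)).sum()
                   for t in thetas])
post = np.exp(loglik - loglik.max())  # flat prior, grid posterior
post /= post.sum()
print(thetas[np.argmax(post)])        # posterior mode near theta_true
```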
arXiv Detail & Related papers (2021-10-27T13:36:43Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
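To make the accuracy cost concrete, here is a generic clipped-and-noised aggregation step of the kind used in DP federated learning; this is an illustrative sketch with hypothetical names, and it omits the paper's Laplacian-smoothing post-processing that recovers utility.

```python
import numpy as np

def private_average(updates, clip_norm, noise_mult, rng=None):
    """Clip each user update in L2 norm, average, and add Gaussian
    noise scaled to the clipped sensitivity. Generic DP sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    updates = np.asarray(updates, dtype=float)
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    clipped = updates * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    sigma = noise_mult * clip_norm / len(updates)
    return clipped.mean(axis=0) + rng.normal(0.0, sigma, updates.shape[1])

avg = private_average(np.random.randn(8, 4), clip_norm=1.0, noise_mult=1.2)
print(avg)
```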
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.