Generalised Likelihood Ratio Testing Adversaries through the
Differential Privacy Lens
- URL: http://arxiv.org/abs/2210.13028v1
- Date: Mon, 24 Oct 2022 08:24:10 GMT
- Title: Generalised Likelihood Ratio Testing Adversaries through the
Differential Privacy Lens
- Authors: Georgios Kaissis, Alexander Ziller, Stefan Kolek Martinez de Azagra,
Daniel Rueckert
- Abstract summary: Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries.
We relax the assumption of a Neyman-Pearson-Optimal (NPO) adversary to a Generalized Likelihood Ratio Test (GLRT) adversary.
This mild relaxation leads to improved privacy guarantees.
- Score: 69.10072367807095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differential Privacy (DP) provides tight upper bounds on the capabilities of
optimal adversaries, but such adversaries are rarely encountered in practice.
Under the hypothesis testing/membership inference interpretation of DP, we
examine the Gaussian mechanism and relax the usual assumption of a
Neyman-Pearson-Optimal (NPO) adversary to a Generalized Likelihood Ratio Test (GLRT)
adversary. This mild relaxation leads to improved privacy guarantees, which we
express in the spirit of Gaussian DP and $(\varepsilon, \delta)$-DP, including
composition and sub-sampling results. We evaluate our results numerically and
find them to match the theoretical upper bounds.
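The gap between the two adversaries can be illustrated with a small simulation. The following is a hypothetical sketch, not the paper's construction: under H0 the mechanism output is centred Gaussian noise, under H1 it is shifted by a direction the NPO adversary knows but the GLRT adversary does not; for a shift of known norm, the GLRT reduces to thresholding the norm of the output.

```python
import numpy as np

# Hypothetical membership-inference test against the Gaussian mechanism.
# H0: y ~ N(0, sigma^2 I);  H1: y ~ N(mu, sigma^2 I) with ||mu|| = delta.
rng = np.random.default_rng(0)
d, sigma, delta, n = 8, 1.0, 2.0, 20_000

mu = np.zeros(d)
mu[0] = delta  # true shift direction, known to the NPO adversary only

y0 = rng.normal(0.0, sigma, size=(n, d))       # draws under H0
y1 = mu + rng.normal(0.0, sigma, size=(n, d))  # draws under H1

# NPO adversary: knows mu, uses the optimal likelihood-ratio statistic <y, mu>.
t_npo0, t_npo1 = y0 @ mu, y1 @ mu
# GLRT adversary: mu unknown; maximising the likelihood over the sphere
# ||mu|| = delta yields a statistic monotone in the norm ||y||.
t_glrt0 = np.linalg.norm(y0, axis=1)
t_glrt1 = np.linalg.norm(y1, axis=1)

def power_at_fpr(t0, t1, fpr=0.05):
    """Empirical true-positive rate at a fixed false-positive rate."""
    return float(np.mean(t1 > np.quantile(t0, 1.0 - fpr)))

p_npo = power_at_fpr(t_npo0, t_npo1)
p_glrt = power_at_fpr(t_glrt0, t_glrt1)
print(f"power at 5% FPR: NPO={p_npo:.3f}, GLRT={p_glrt:.3f}")
```

In this toy setting the GLRT adversary's power is strictly lower than the NPO adversary's at every false-positive rate, which is the phenomenon behind the improved guarantees.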
Related papers
- Robust Predictive Uncertainty and Double Descent in Contaminated Bayesian Random Features [9.140494844209336]
We propose a robust Bayesian formulation of random feature (RF) regression that accounts explicitly for prior and likelihood misspecification.
We derive explicit and tractable bounds for the resulting lower and upper posterior predictive envelopes.
arXiv Detail & Related papers (2026-02-22T10:50:04Z)
- Near-Optimal Private Tests for Simple and MLR Hypotheses [13.738306418341729]
We develop a near-optimal testing procedure under the framework of Gaussian differential privacy.
We construct private test statistics that achieve the same relative efficiency as the non-private, most powerful tests.
Our tests offer comparable power to the non-private most powerful tests, even at moderately small sample sizes and privacy loss budgets.
arXiv Detail & Related papers (2026-01-29T16:36:21Z)
- DP-SPRT: Differentially Private Sequential Probability Ratio Tests [18.783606628556342]
We revisit Wald's celebrated Sequential Probability Ratio Test for sequential tests of two simple hypotheses, under privacy constraints.
We propose DP-SPRT, a wrapper that can be calibrated to achieve desired error probabilities and privacy constraints.
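For context, the classical (non-private) Wald SPRT that this line of work builds on can be sketched as follows; the thresholds and the Bernoulli example are illustrative, and the paper's privacy calibration is not reproduced here.

```python
import math

def wald_sprt(samples, llr, alpha=0.05, beta=0.05):
    """Classical Wald SPRT (non-private sketch).

    llr(x) is the per-observation log-likelihood ratio log p1(x)/p0(x);
    alpha and beta are the target type-I and type-II error probabilities.
    """
    upper = math.log((1.0 - beta) / alpha)  # crossing above accepts H1
    lower = math.log(beta / (1.0 - alpha))  # crossing below accepts H0
    s = 0.0
    for t, x in enumerate(samples, start=1):
        s += llr(x)
        if s >= upper:
            return "H1", t
        if s <= lower:
            return "H0", t
    return "undecided", len(samples)

# Illustrative Bernoulli test: H0: p = 0.3 vs H1: p = 0.7
llr = lambda x: math.log(0.7 / 0.3) if x == 1 else math.log(0.3 / 0.7)
print(wald_sprt([1, 1, 1, 1, 1], llr))  # -> ('H1', 4)
```

A private variant would perturb the accumulated statistic or the thresholds; the calibration that preserves the error guarantees is the paper's contribution.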
arXiv Detail & Related papers (2025-08-08T15:09:13Z)
- The Cost of Shuffling in Private Gradient Based Optimization [40.31928071333575]
We show that data shuffling results in worse empirical excess risk for DP-ShuffleG compared to DP-SGD.
We propose Interleaved-ShuffleG, a hybrid approach that integrates public data samples in private optimization.
arXiv Detail & Related papers (2025-02-05T22:30:00Z)
- Confidence Aware Learning for Reliable Face Anti-spoofing [52.23271636362843]
We propose a Confidence Aware Face Anti-spoofing (CA-FAS) model, which is aware of its capability boundary.
We estimate its confidence during the prediction of each sample.
Experiments show that the proposed CA-FAS can effectively recognize samples with low prediction confidence.
arXiv Detail & Related papers (2024-11-02T14:29:02Z)
- Convergent Privacy Loss of Noisy-SGD without Convexity and Smoothness [16.303040664382138]
We study the Differential Privacy (DP) guarantee of hidden-state Noisy-SGD algorithms over a bounded domain.
We prove a convergent Rényi DP bound for non-convex, non-smooth losses.
We also provide a strictly better privacy bound compared to state-of-the-art results for smooth convex losses.
arXiv Detail & Related papers (2024-10-01T20:52:08Z)
- Ensembled Prediction Intervals for Causal Outcomes Under Hidden Confounding [49.1865229301561]
We present a simple approach to partial identification using existing causal sensitivity models and show empirically that our method, Caus-Modens, gives tighter outcome intervals.
The last of our three diverse benchmarks is a novel usage of GPT-4 for observational experiments with unknown but probeable ground truth.
arXiv Detail & Related papers (2023-06-15T21:42:40Z)
- Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions [49.726408540784334]
A key question in PLD-based accounting is how to approximate any (potentially continuous) PLD with a PLD over any specified discrete support.
We show that our pessimistic estimate is the best possible among all pessimistic estimates.
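The idea of a pessimistic estimate can be sketched as follows; this is a simplified, hypothetical discretisation (not the paper's construction), in which each privacy-loss value is rounded up onto the grid so the resulting PLD never understates the true privacy loss.

```python
import bisect

def pessimistic_discretise(pld, grid):
    """Round each privacy-loss value UP to the nearest grid point.

    pld:  list of (privacy_loss, probability) pairs (a discrete PLD)
    grid: sorted list of support points covering all losses in pld
    Returns a dict mapping each grid point to its accumulated probability.
    """
    out = {g: 0.0 for g in grid}
    for loss, prob in pld:
        i = bisect.bisect_left(grid, loss)  # first grid point >= loss
        out[grid[i]] += prob
    return out

# Example: losses 0.3 and 0.7 are rounded up to grid points 0.5 and 1.0
print(pessimistic_discretise([(0.3, 0.5), (0.7, 0.5)], [0.0, 0.5, 1.0]))
```

Rounding up makes every derived $(\varepsilon, \delta)$ guarantee conservative; the paper's contribution is showing which pessimistic estimate is tightest among all such choices.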
arXiv Detail & Related papers (2022-07-10T04:25:02Z)
- Generalized Likelihood Ratio Test for Adversarially Robust Hypothesis Testing [22.93223530210401]
We consider a classical hypothesis testing problem in order to develop insight into defending against such adversarial perturbations.
We propose a defense based on applying the generalized likelihood ratio test (GLRT) to the resulting composite hypothesis testing problem.
We show via simulations that the GLRT defense is competitive with the minimax approach under the worst-case attack, while yielding a better-accuracy tradeoff under weaker attacks.
arXiv Detail & Related papers (2021-12-04T01:11:54Z)
- A unified interpretation of the Gaussian mechanism for differential privacy through the sensitivity index [61.675604648670095]
We argue that the three prevailing interpretations of the Gaussian mechanism (GM), namely $(\varepsilon, \delta)$-DP, $f$-DP and Rényi DP, can be expressed using a single parameter $\psi$, which we term the sensitivity index.
$\psi$ uniquely characterises the GM and its properties by encapsulating its two fundamental quantities: the sensitivity of the query and the magnitude of the noise perturbation.
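Assuming $\psi$ plays the role of the ratio of query sensitivity to noise scale, the $(\varepsilon, \delta)$ view of the GM can be computed directly from it via the analytic Gaussian mechanism formula of Balle & Wang (2018); the sketch below is an illustration under that assumption, not the cited paper's derivation.

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gm_delta(eps, sensitivity, sigma):
    """Tight delta(eps) profile of the Gaussian mechanism
    (analytic GM formula, Balle & Wang 2018), written in terms of
    psi = sensitivity / sigma -- assumed here to act as the
    sensitivity index."""
    psi = sensitivity / sigma
    return Phi(psi / 2.0 - eps / psi) - math.exp(eps) * Phi(-psi / 2.0 - eps / psi)

print(f"delta(eps=1) at psi=1: {gm_delta(1.0, 1.0, 1.0):.4f}")
```

Because $\delta(\varepsilon)$ depends on the query and the noise only through their ratio, a single scalar suffices to describe the mechanism's privacy profile.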
arXiv Detail & Related papers (2021-09-22T06:20:01Z)
- Local Differential Privacy Is Equivalent to Contraction of $E_\gamma$-Divergence [7.807294944710216]
We show that LDP constraints can be equivalently cast in terms of the contraction coefficient of the $E_\gamma$-divergence.
We then use this equivalent formulation to express LDP guarantees of privacy mechanisms in terms of contraction coefficients of arbitrary $f$-divergences.
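The $E_\gamma$ (hockey-stick) divergence itself is straightforward to compute for discrete distributions. The sketch below also illustrates the standard fact that pure $\varepsilon$-DP between two output distributions corresponds to $E_{e^\varepsilon}(P\|Q) = 0$; the contraction-coefficient machinery of the paper is not reproduced.

```python
def e_gamma(p, q, gamma):
    """Hockey-stick divergence E_gamma(P || Q) = sum_i max(p_i - gamma*q_i, 0)
    for discrete distributions given as probability vectors."""
    return sum(max(pi - gamma * qi, 0.0) for pi, qi in zip(p, q))

# gamma = 1 recovers the total variation distance
p, q = [0.5, 0.5], [0.25, 0.75]
print(e_gamma(p, q, 1.0))  # -> 0.25

# Randomised response with eps = ln 3 (gamma = e^eps = 3): the divergence
# between its two output distributions is 0, i.e. pure eps-DP holds.
rr_p, rr_q = [0.75, 0.25], [0.25, 0.75]
print(e_gamma(rr_p, rr_q, 3.0))  # -> 0.0
```

Setting $\gamma = e^\varepsilon$ and asking for $E_\gamma \le \delta$ recovers the familiar $(\varepsilon, \delta)$ guarantee, which is what makes this divergence a natural bridge to contraction coefficients.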
arXiv Detail & Related papers (2021-02-02T02:18:12Z)
- On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for protecting privacy when machine learning models are trained collaboratively among distributed clients.
Recent studies have pointed out that naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.