Interpreting Network Differential Privacy
- URL: http://arxiv.org/abs/2504.12520v1
- Date: Wed, 16 Apr 2025 22:45:07 GMT
- Title: Interpreting Network Differential Privacy
- Authors: Jonathan Hehir, Xiaoyue Niu, Aleksandra Slavkovic
- Abstract summary: We take a deep dive into a popular form of network DP ($\varepsilon$--edge DP) to find that many of its common interpretations are flawed. We demonstrate a gap between the pairs of hypotheses actually protected under DP and the sorts of hypotheses implied to be protected by common claims.
- Score: 44.99833362998488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How do we interpret the differential privacy (DP) guarantee for network data? We take a deep dive into a popular form of network DP ($\varepsilon$--edge DP) to find that many of its common interpretations are flawed. Drawing on prior work for privacy with correlated data, we interpret DP through the lens of adversarial hypothesis testing and demonstrate a gap between the pairs of hypotheses actually protected under DP (tests of complete networks) and the sorts of hypotheses implied to be protected by common claims (tests of individual edges). We demonstrate some conditions under which this gap can be bridged, while leaving some questions open. While some discussion is specific to edge DP, we offer selected results in terms of abstract DP definitions and provide discussion of the implications for other forms of network DP.
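For reference, here is a minimal formal sketch (not quoted from the paper) of the two ingredients discussed in the abstract: the $\varepsilon$--edge DP constraint and the standard hypothesis-testing reading of DP, in which any test between two neighboring inputs has its error rates jointly bounded. The mechanism symbol $\mathcal{A}$ and the exact notation are illustrative assumptions.

```latex
% Sketch of epsilon-edge DP and its hypothesis-testing reading; notation is
% generic and may differ from the paper's.
% epsilon-edge DP: for all graphs G, G' differing in a single edge and all events S,
\[
  \Pr[\mathcal{A}(G) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{A}(G') \in S].
\]
% Hypothesis-testing view: any test of H_0: G vs. H_1: G' based on the output of
% \mathcal{A}, with Type I error \alpha and Type II error \beta, satisfies
\[
  \alpha + e^{\varepsilon}\beta \;\ge\; 1
  \qquad\text{and}\qquad
  \beta + e^{\varepsilon}\alpha \;\ge\; 1.
\]
% Note these bounds concern tests of complete networks G vs. G'; the gap relative
% to claims about individual edges is exactly what the paper examines.
```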
Related papers
- A Refreshment Stirred, Not Shaken (III): Can Swapping Be Differentially Private? [4.540236408836132]
The quest for a precise and contextually grounded answer to the question in the paper's title resulted in this theoretical basis of differential privacy (DP).
This paper provides summaries of the preceding two parts as well as new discussion, for example on how greater awareness of building blocks can thwart privacy.
arXiv Detail & Related papers (2025-04-21T17:19:57Z) - Comparing privacy notions for protection against reconstruction attacks in machine learning [10.466570297146953]
In the machine learning community, reconstruction attacks are a principal concern and have been identified even in federated learning (FL).
In response to these threats, the privacy community recommends the use of differential privacy (DP) in the gradient descent algorithm, termed DP-SGD.
In this paper, we lay a foundational framework for comparing mechanisms with differing notions of privacy guarantees.
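As context for the DP-SGD algorithm mentioned above, here is a minimal numpy sketch of the standard recipe (per-example gradient clipping plus calibrated Gaussian noise). The linear model, clipping norm, and noise multiplier are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD step on squared loss for a linear model (illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    # Per-example gradients of 0.5 * (x.w - y)^2 with respect to w.
    residuals = X @ w - y                        # shape (n,)
    per_example_grads = residuals[:, None] * X   # shape (n, d)
    # Clip each per-example gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    per_example_grads *= np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum, add Gaussian noise scaled to the clipping norm, then average.
    noisy_sum = per_example_grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape
    )
    return w - lr * noisy_sum / len(X)

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=256)
w = np.zeros(5)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
```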
arXiv Detail & Related papers (2025-02-06T13:04:25Z) - The Limits of Differential Privacy in Online Learning [11.099792269219124]
We present evidence that separates three types of constraints: no DP, pure DP, and approximate DP. We first describe a hypothesis class that is online learnable under approximate DP but not online learnable under pure DP in the adaptive adversarial setting. We then prove that any private online learner must make an infinite number of mistakes for almost all hypothesis classes.
arXiv Detail & Related papers (2024-11-08T11:21:31Z) - Granularity is crucial when applying differential privacy to text: An investigation for neural machine translation [13.692397169805806]
Differential privacy (DP) is becoming increasingly popular in NLP.
The choice of granularity at which DP is applied is often neglected.
Our findings indicate that the document-level NMT system is more resistant to membership inference attacks.
arXiv Detail & Related papers (2024-07-26T14:52:37Z) - How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analysis when using the two types of batch sampling (Poisson subsampling versus shuffling).
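A minimal sketch of the two batch-sampling schemes at issue, assuming (as in most DP-SGD analyses) that they are Poisson subsampling and shuffling; the dataset size and batch size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, batch_size = 1000, 100

# Poisson subsampling: each example joins the batch independently with
# probability q = batch_size / n, so batch sizes are random.
q = batch_size / n
poisson_batch = np.flatnonzero(rng.random(n) < q)

# Shuffling: permute the dataset once, then take fixed-size contiguous batches.
perm = rng.permutation(n)
shuffled_batches = [perm[i:i + batch_size] for i in range(0, n, batch_size)]

# Privacy accounting often assumes Poisson subsampling while implementations
# shuffle -- the source of the gap discussed above.
print(len(poisson_batch), len(shuffled_batches[0]))
```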
arXiv Detail & Related papers (2024-03-26T13:02:43Z) - Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano [83.5933307263932]
We study data reconstruction attacks for discrete data and analyze them under the framework of hypothesis testing.
We show that if the underlying private data takes values from a set of size $M$, then the target privacy parameter $\epsilon$ can be $O(\log M)$ before the adversary gains significant inferential power.
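A heuristic calculation consistent with the $O(\log M)$ scaling (not the paper's Fano-based argument): assume the private record takes one of $M$ values, datasets differing only in that record are DP-neighbors, the prior is uniform, and the mechanism is $\epsilon$-DP. Then the adversary's posterior on the true value is bounded, so meaningful reconstruction requires $e^{\epsilon}$ comparable to $M$.

```latex
% Heuristic posterior bound under pure epsilon-DP with a uniform prior over M values.
% The DP constraint gives P(y | x') >= e^{-\epsilon} P(y | x) for every alternative x', hence
\[
  \Pr[x \mid y]
  = \frac{P(y \mid x)}{\sum_{x'} P(y \mid x')}
  \le \frac{P(y \mid x)}{P(y \mid x) + (M-1)\,e^{-\epsilon} P(y \mid x)}
  = \frac{e^{\epsilon}}{e^{\epsilon} + M - 1},
\]
% which stays close to the prior 1/M until \epsilon is on the order of \log M.
```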
arXiv Detail & Related papers (2022-10-24T23:50:12Z) - Generalised Likelihood Ratio Testing Adversaries through the Differential Privacy Lens [69.10072367807095]
Differential Privacy (DP) provides tight upper bounds on the capabilities of optimal adversaries.
We relax the assumption of a Neyman--Pearson optimal (NPO) adversary to a Generalized Likelihood Ratio Test (GLRT) adversary.
This mild relaxation leads to improved privacy guarantees.
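For readers unfamiliar with the distinction, a generic statement (not specific to this paper's setting) of the two test statistics: the Neyman--Pearson adversary uses a simple likelihood ratio between two fully specified alternatives, while the GLRT adversary maximizes over unknown nuisance parameters.

```latex
% Simple likelihood ratio test (Neyman--Pearson setting, both hypotheses fully specified):
\[
  \Lambda_{\mathrm{NP}}(y) = \frac{p_{1}(y)}{p_{0}(y)},
  \qquad \text{reject } H_0 \text{ when } \Lambda_{\mathrm{NP}}(y) > \tau.
\]
% Generalized likelihood ratio test (nuisance parameters \theta estimated under each hypothesis):
\[
  \Lambda_{\mathrm{GLRT}}(y)
  = \frac{\sup_{\theta \in \Theta_1} p_{\theta}(y)}{\sup_{\theta \in \Theta_0} p_{\theta}(y)},
  \qquad \text{reject } H_0 \text{ when } \Lambda_{\mathrm{GLRT}}(y) > \tau.
\]
```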
arXiv Detail & Related papers (2022-10-24T08:24:10Z) - Differentially Private Bayesian Neural Networks on Accuracy, Privacy and Reliability [18.774153273396244]
We analyze the trade-off between privacy and accuracy in Bayesian neural networks (BNNs).
We propose three DP-BNNs that characterize the weight uncertainty for the same network architecture in distinct ways.
We show a new equivalence between DP-SGD and DP-SGLD, implying that some non-Bayesian DP training naturally allows for uncertainty quantification.
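To make the claimed equivalence plausible, here is the generic form of the two updates; the notation and noise scaling are illustrative, not the paper's exact statement. Both updates inject Gaussian noise into a (clipped) stochastic gradient, so for matched noise scales the SGLD posterior-sampling noise can double as the DP noise.

```latex
% DP-SGD update with clipped per-example gradients \bar{g}_i, clipping norm C,
% noise multiplier \sigma, batch size B, learning rate \eta:
\[
  \theta_{t+1} = \theta_t - \eta\Big(\tfrac{1}{B}\textstyle\sum_i \bar{g}_i + \tfrac{\sigma C}{B}\,\xi_t\Big),
  \qquad \xi_t \sim \mathcal{N}(0, I).
\]
% SGLD update (posterior sampling) with step size \eta and stochastic gradient \widehat{\nabla U}:
\[
  \theta_{t+1} = \theta_t - \eta\,\widehat{\nabla U}(\theta_t) + \sqrt{2\eta}\,\xi_t,
  \qquad \xi_t \sim \mathcal{N}(0, I).
\]
% With clipped gradients and \sigma C / B matched to \sqrt{2\eta}/\eta = \sqrt{2/\eta},
% the same Gaussian injection serves both the sampling and the privacy role.
```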
arXiv Detail & Related papers (2021-07-18T14:37:07Z) - Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely accepted and widely applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z) - On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for its privacy protection when training machine learning models collaboratively among distributed clients.
Recent studies have pointed out that naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
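A minimal sketch of how DP is typically layered onto federated averaging to blunt gradient leakage (clip each client update, add Gaussian noise to the aggregate); the clipping norm and noise scale here are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

def private_aggregate(client_updates, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip each client's model update and add Gaussian noise to the sum (illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_std * clip_norm, size=client_updates[0].shape
    )
    return noisy_sum / len(client_updates)

# Toy usage: five clients each send a 10-dimensional update.
rng = np.random.default_rng(1)
updates = [rng.normal(size=10) for _ in range(5)]
global_update = private_aggregate(updates, rng=rng)
```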
arXiv Detail & Related papers (2021-01-11T19:43:12Z) - Three Variants of Differential Privacy: Lossless Conversion and Applications [13.057076084452016]
We consider three different variants of differential privacy (DP), namely approximate DP, Rényi DP (RDP), and hypothesis-test DP.
In the first part, we develop a machinery for relating approximate DP to RDP based on the joint range of two $f$-divergences.
As an application, we apply our result to the moments accountant framework for characterizing privacy guarantees of noisy gradient descent.
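For orientation, the standard conversion from RDP to approximate DP (not necessarily the paper's tightest relation), which is the kind of statement the first part refines:

```latex
% Standard RDP-to-approximate-DP conversion; the paper derives tighter, lossless
% relations via the joint range of f-divergences.
\[
  \text{If } \mathcal{M} \text{ satisfies } (\alpha,\varepsilon)\text{-RDP with } \alpha > 1,
  \text{ then for every } \delta \in (0,1) \text{ it satisfies }
  \Big(\varepsilon + \tfrac{\log(1/\delta)}{\alpha - 1},\; \delta\Big)\text{-DP}.
\]
```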
arXiv Detail & Related papers (2020-08-14T18:23:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.