A Refreshment Stirred, Not Shaken (III): Can Swapping Be Differentially Private?
- URL: http://arxiv.org/abs/2504.15246v1
- Date: Mon, 21 Apr 2025 17:19:57 GMT
- Title: A Refreshment Stirred, Not Shaken (III): Can Swapping Be Differentially Private?
- Authors: James Bailie, Ruobin Gong, Xiao-Li Meng
- Abstract summary: The quest for a precise and contextually grounded answer to the question in the paper's title resulted in a deepened theoretical basis of differential privacy (DP). This paper provides nontechnical summaries of the preceding two parts as well as new discussion, for example on how greater awareness of the five building blocks can thwart privacy theatrics.
- Score: 4.540236408836132
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quest for a precise and contextually grounded answer to the question in the present paper's title resulted in this stirred-not-shaken triptych, a phrase that reflects our desire to deepen the theoretical basis, broaden the practical applicability, and reduce the misperception of differential privacy (DP), all without shaking its core foundations. Indeed, given the existence of more than 200 formulations of DP (and counting), before even attempting to answer the titular question one must first precisely specify what it actually means to be DP. Motivated by this observation, a theoretical investigation into DP's fundamental essence resulted in Part I of this trio, which introduces a five-building-block system explicating the who, where, what, how and how much aspects of DP. Instantiating this system in the context of the United States Decennial Census, Part II then demonstrates the broader applicability and relevance of DP by comparing a swapping strategy like that used in 2010 with the TopDown Algorithm, a DP method adopted in the 2020 Census. This paper provides nontechnical summaries of the preceding two parts as well as new discussion, for example: how greater awareness of the five building blocks can thwart privacy theatrics; how our results bridging traditional SDC and DP allow a data custodian to reap the benefits of both these fields; how invariants impact disclosure risk; and how removing the implicit reliance on aleatoric uncertainty could lead to new generalizations of DP.
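For orientation (a textbook statement, not taken from the paper itself), the pure $\varepsilon$-DP guarantee that the 200-plus formulations mentioned above all vary upon can be written as:

```latex
% Pure \varepsilon-differential privacy (Dwork et al., 2006): a randomized
% mechanism M is \varepsilon-DP if, for every pair of neighboring datasets
% D and D' and every measurable set S of outputs,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(D') \in S].
\]
% The five building blocks pin down what this display leaves implicit:
% which datasets count as neighbors, which outputs are covered, and how
% much leakage (\varepsilon) is tolerated.
```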
Related papers
- Interpreting Network Differential Privacy [44.99833362998488]
We take a deep dive into a popular form of network DP ($\varepsilon$-edge DP) and find that many of its common interpretations are flawed. We demonstrate a gap between the pairs of hypotheses actually protected under DP and the sorts of hypotheses implied to be protected by common claims.
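For concreteness (a standard formulation, not specific to this paper), $\varepsilon$-edge DP is the DP inequality instantiated with graphs as datasets and a single edge change as the neighboring relation:

```latex
% \varepsilon-edge DP: graphs G and G' are neighbors when they differ in
% exactly one edge; a mechanism M on graphs is \varepsilon-edge DP if
\[
  \Pr[M(G) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(G') \in S]
  \quad \text{for all neighboring } G, G' \text{ and all output sets } S.
\]
```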
arXiv Detail & Related papers (2025-04-16T22:45:07Z) - Comparing privacy notions for protection against reconstruction attacks in machine learning [10.466570297146953]
In the machine learning community, reconstruction attacks are a principal concern and have been identified even in federated learning (FL). In response to these threats, the privacy community recommends the use of differential privacy (DP) in the gradient descent algorithm, termed DP-SGD. In this paper, we lay a foundational framework for comparing mechanisms with differing notions of privacy guarantees.
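A minimal numpy sketch of the standard DP-SGD recipe (per-example gradient clipping plus Gaussian noise); the function name, clip norm, and noise multiplier below are illustrative choices, not details from the paper:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    clip_norm, average, then add Gaussian noise calibrated to that norm."""
    rng = rng or np.random.default_rng(0)
    # Clipping bounds each individual's influence on the update.
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    # Std sigma * C / B on the mean equals N(0, (sigma * C)^2) on the sum.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=avg.shape)
    return params - lr * (avg + noise)

# Toy usage with fabricated gradients (illustration only):
grads = [np.array([3.0, -4.0]), np.array([0.5, 0.5])]
print(dp_sgd_step(np.zeros(2), grads))
```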
arXiv Detail & Related papers (2025-02-06T13:04:25Z) - A Refreshment Stirred, Not Shaken (II): Invariant-Preserving Deployments of Differential Privacy for the US Decennial Census [4.540236408836132]
We develop a statistical disclosure control (SDC) method, the PSA algorithm, for the U.S. Decennial Census. We show that the PSA induces invariant-preserving $\varepsilon$-DP guarantees that can be reconciled with differential privacy (DP). While our results explicate the SDC guarantees provided by the PSA and the 2020 DAS, these formal guarantees must not be equated in general with actual privacy protection, just as is the case for any deployment.
arXiv Detail & Related papers (2025-01-14T21:38:01Z) - Verified Foundations for Differential Privacy [7.790536155623866]
We present SampCert, the first comprehensive, mechanized foundation for differential privacy. It offers a generic notion of DP, a framework for constructing and composing DP mechanisms, and formally verified implementations of Laplace and Gaussian sampling algorithms. Indeed, SampCert's verified algorithms power the DP offerings of Amazon Web Services (AWS).
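SampCert itself is a formally verified artifact; as a plain, unverified illustration of what a Laplace mechanism does (not SampCert's API), a Python sketch:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace(sensitivity / epsilon) noise;
    this achieves epsilon-DP for a query with the given L1 sensitivity."""
    rng = rng or np.random.default_rng(0)
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query (sensitivity 1) released at epsilon = 0.5.
print(laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5))
```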
arXiv Detail & Related papers (2024-12-02T16:19:47Z) - How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analysis when using the two types of batch sampling.
arXiv Detail & Related papers (2024-03-26T13:02:43Z) - Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z) - Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
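The tuning difference comes down to how each algorithm rescales per-example gradients; a hedged sketch (the regularizer r in the normalized variant is an assumption based on the usual presentation of DP-NSGD):

```python
import numpy as np

def clip(g, c=1.0):
    """DP-SGD-style clipping: rescale only gradients whose norm exceeds c."""
    return g * min(1.0, c / max(np.linalg.norm(g), 1e-12))

def normalize(g, r=1e-2):
    """DP-NSGD-style normalization: every gradient is brought to roughly
    unit norm, so no clipping threshold needs tuning."""
    return g / (np.linalg.norm(g) + r)

g = np.array([3.0, 4.0])           # norm 5
print(clip(g))                     # rescaled down to norm 1
print(normalize(g))                # norm ~1 regardless of input scale
print(clip(np.array([0.3, 0.4])))  # norm 0.5: left unchanged by clipping
```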
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z) - On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for protecting privacy when machine learning models are trained collaboratively among distributed clients.
Recent studies have pointed out that naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z) - Three Variants of Differential Privacy: Lossless Conversion and Applications [13.057076084452016]
We consider three different variants of differential privacy (DP), namely approximate DP, Rényi DP (RDP), and hypothesis-test DP.
In the first part, we develop a machinery for relating approximate DP to RDP based on the joint range of two $f$-divergences.
As an application, we apply our result to the moments framework for characterizing privacy guarantees of noisy gradient descent.
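For reference, textbook statements of two of these variants (not reproduced from the paper):

```latex
% Approximate (\varepsilon, \delta)-DP relaxes the pure guarantee by an
% additive slack \delta:
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(D') \in S] + \delta .
\]
% R\'enyi DP of order \alpha > 1 bounds the R\'enyi divergence between
% the output distributions on neighboring datasets:
\[
  D_{\alpha}\bigl( M(D) \,\|\, M(D') \bigr)
  = \frac{1}{\alpha - 1}
    \log \mathbb{E}_{x \sim M(D')}\!\left[
      \left( \frac{p_{M(D)}(x)}{p_{M(D')}(x)} \right)^{\alpha}
    \right]
  \;\le\; \varepsilon .
\]
```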
arXiv Detail & Related papers (2020-08-14T18:23:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.