Heavy-Tailed Privacy: The Symmetric alpha-Stable Privacy Mechanism
- URL: http://arxiv.org/abs/2504.18411v1
- Date: Fri, 25 Apr 2025 15:14:02 GMT
- Title: Heavy-Tailed Privacy: The Symmetric alpha-Stable Privacy Mechanism
- Authors: Christopher C. Zawacki, Eyad H. Abed
- Abstract summary: We present and analyze the Symmetric alpha-Stable (SaS) mechanism. We prove that the mechanism achieves pure differential privacy while remaining closed under convolution. We also study the nuanced relationship between the level of privacy achieved and the parameters of the density.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid growth of digital platforms, there is increasing apprehension about how personal data is collected, stored, and used by various entities. These concerns arise from the increasing frequency of data breaches, cyber-attacks, and misuse of personal information for targeted advertising and surveillance. To address these matters, Differential Privacy (DP) has emerged as a prominent tool for quantifying a digital system's level of protection. The Gaussian mechanism is commonly used because the Gaussian density is closed under convolution, a property that is convenient when aggregating datasets. However, the Gaussian mechanism only satisfies an approximate form of Differential Privacy. In this work, we present and analyze the Symmetric alpha-Stable (SaS) mechanism. We prove that the mechanism achieves pure differential privacy while remaining closed under convolution. Additionally, we study the nuanced relationship between the level of privacy achieved and the parameters of the density. Lastly, we compare the expected error introduced to dataset queries by the Gaussian and SaS mechanisms. From our analysis, we believe the SaS mechanism is an appealing choice for privacy-focused applications.
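A minimal sketch of the mechanism's shape, assuming `scipy.stats.levy_stable` for sampling (skewness beta=0 gives the symmetric case); the `alpha` and `scale` values below are illustrative choices, not calibrated privacy parameters from the paper:

```python
import numpy as np
from scipy.stats import levy_stable

def sas_mechanism(query_value, alpha=1.5, scale=1.0, rng=None):
    """Perturb a scalar query with Symmetric alpha-Stable (SaS) noise.

    Skewness beta=0 selects the symmetric member of the stable family;
    alpha in (0, 2) gives heavy tails (alpha=2 recovers the Gaussian,
    alpha=1 the Cauchy). The (alpha, scale) pair governs the privacy level.
    """
    noise = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=scale, random_state=rng)
    return query_value + noise

# Example: privatize a counting query over a toy dataset.
data = np.array([1, 0, 1, 1, 0, 1])
print(data.sum(), sas_mechanism(data.sum(), alpha=1.5, scale=2.0))
```

Closure under convolution is what makes this family attractive for aggregation: the sum of independent SaS samples with a common $\alpha$ and scales $\gamma_1, \gamma_2$ is again SaS with scale $(\gamma_1^\alpha + \gamma_2^\alpha)^{1/\alpha}$.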
Related papers
- Differentially Private Random Feature Model [52.468511541184895]
We produce a differentially private random feature model for privacy-preserving kernel machines.
We show that our method preserves privacy and derive a generalization error bound for the method.
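The paper's exact construction is not reproduced here; as a loose illustration of the general recipe (random features plus output perturbation), the following sketch assumes random Fourier features for an RBF kernel and a hypothetical Gaussian perturbation of the learned weights, with all parameters illustrative:

```python
import numpy as np

def random_fourier_features(X, n_features=200, lengthscale=1.0, rng=None):
    # Approximate an RBF kernel: z(x) = sqrt(2/D) * cos(x @ W + b).
    rng = np.random.default_rng(rng)
    W = rng.normal(0.0, 1.0 / lengthscale, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def dp_ridge_weights(Z, y, lam=0.1, noise_std=0.1, rng=None):
    # Ridge regression in feature space, followed by Gaussian output
    # perturbation; in a real mechanism noise_std would be calibrated
    # to the sensitivity of the weight vector and the privacy budget.
    rng = np.random.default_rng(rng)
    D = Z.shape[1]
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)
    return w + rng.normal(0.0, noise_std, size=D)
```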
arXiv Detail & Related papers (2024-12-06T05:31:08Z)
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
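A toy sketch of the idea, assuming the modification takes the form of clamping Gaussian noise to a bounded interval; the bound, scale, and clamping rule here are illustrative stand-ins for the paper's constructions:

```python
import numpy as np

def bounded_support_gaussian(query_value, sigma=1.0, bound=3.0, rng=None):
    # Gaussian noise clamped ("rectified") to [-bound, bound], so the
    # output stays within a known interval around the true value; this
    # bounded support is what data-dependent accounting can exploit.
    rng = np.random.default_rng(rng)
    noise = np.clip(rng.normal(0.0, sigma), -bound, bound)
    return query_value + noise
```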
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
- Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
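For context, the classical mechanism-agnostic bound that such frameworks refine: running an $(\epsilon, \delta)$-DP mechanism on a Poisson subsample with rate $q$ yields $(\log(1 + q(e^\epsilon - 1)), q\delta)$-DP. A small sketch:

```python
import math

def subsampled_epsilon(eps, q):
    """Classical amplification by Poisson subsampling: an (eps, delta)-DP
    mechanism applied to a q-rate subsample satisfies
    (log(1 + q*(exp(eps) - 1)), q*delta)-DP."""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

print(subsampled_epsilon(1.0, 0.01))  # ~0.017: a small sampling rate buys a lot
```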
arXiv Detail & Related papers (2024-03-07T19:36:05Z)
- The Symmetric alpha-Stable Privacy Mechanism [0.0]
We present a novel analysis of the Symmetric alpha-Stable (SaS) mechanism.
We prove that the mechanism is purely differentially private while remaining closed under convolution.
arXiv Detail & Related papers (2023-11-29T16:34:39Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
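As a concrete instance of a discrete-valued mechanism with a finite output space, consider binary randomized response; the sketch below shows the mechanism itself, whose exact trade-off curve is the kind of tight $f$-DP guarantee the paper derives:

```python
import math
import random

def randomized_response(bit, eps):
    """Binary randomized response, a discrete-valued mechanism with a
    two-element output space. Reporting the true bit with probability
    exp(eps) / (1 + exp(eps)) satisfies eps-local DP."""
    p_truth = math.exp(eps) / (1.0 + math.exp(eps))
    return bit if random.random() < p_truth else 1 - bit

reports = [randomized_response(b, eps=1.0) for b in [0, 1, 1, 0, 1]]
```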
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions [9.20186865054847]
In a differentially private additive noise mechanism, independent and identically distributed (i.i.d.) noise samples are added to each coordinate of the response.
We study the i.n.i.d. Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy.
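A minimal sketch of the i.n.i.d. idea with Laplace noise, using hypothetical per-coordinate scales; the paper's contribution is the conditions such scales must satisfy for privacy, which are not reproduced here:

```python
import numpy as np

def inid_laplace(response, scales, rng=None):
    # Independent but non-identically distributed Laplace noise: each
    # coordinate i receives its own scale b_i, so coordinates that are
    # less sensitive can be perturbed less.
    rng = np.random.default_rng(rng)
    response = np.asarray(response, dtype=float)
    return response + rng.laplace(0.0, np.asarray(scales), size=response.shape)

noisy = inid_laplace([10.0, 4.0, 7.5], scales=[0.5, 2.0, 1.0])
```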
arXiv Detail & Related papers (2023-02-07T14:54:20Z)
- DP2-Pub: Differentially Private High-Dimensional Data Publication with Invariant Post Randomization [58.155151571362914]
We propose a differentially private high-dimensional data publication mechanism (DP2-Pub) that runs in two phases.
Splitting attributes into several low-dimensional clusters with high intra-cluster cohesion and low inter-cluster coupling helps obtain a reasonable privacy budget.
We also extend our DP2-Pub mechanism to the scenario with a semi-honest server which satisfies local differential privacy.
arXiv Detail & Related papers (2022-08-24T17:52:43Z)
- Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics [2.614355818010333]
We provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g. exponential mechanisms) or concentrated DP mechanisms in any order.
We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch.
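For reference, the two generic baselines such bounds improve on: basic composition, which simply sums the $\epsilon_i$, and the Dwork-Rothblum-Vadhan advanced composition theorem. A small calculator sketch:

```python
import math

def basic_composition(eps_list):
    # Pure DP composes additively.
    return sum(eps_list)

def advanced_composition(eps, k, delta_prime):
    # Dwork-Rothblum-Vadhan: k runs of an eps-DP mechanism are
    # (eps_prime, k*delta + delta_prime)-DP with eps_prime as below.
    return (math.sqrt(2.0 * k * math.log(1.0 / delta_prime)) * eps
            + k * eps * (math.exp(eps) - 1.0))

print(basic_composition([0.1] * 100))        # 10.0
print(advanced_composition(0.1, 100, 1e-6))  # ~6.3, already tighter
```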
arXiv Detail & Related papers (2020-04-15T17:33:10Z)
- Differentially Private M-band Wavelet-Based Mechanisms in Machine Learning Environments [4.629162607975834]
We develop three privacy-preserving mechanisms with the discrete M-band wavelet transform that embed noise into data.
We show that our mechanisms successfully retain both differential privacy and learnability through statistical analysis in various machine learning environments.
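A hedged sketch of the wavelet-domain noising pattern, substituting the standard 2-band Haar DWT from PyWavelets for the paper's M-band transform; the noise scale is an illustrative choice rather than a calibrated budget:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_laplace(data, scale=0.5, wavelet="haar", rng=None):
    # Transform, perturb every wavelet coefficient with Laplace noise,
    # and invert. The paper uses M-band wavelets; the ordinary 2-band
    # Haar DWT stands in for them here.
    rng = np.random.default_rng(rng)
    coeffs = pywt.wavedec(np.asarray(data, dtype=float), wavelet)
    noisy = [c + rng.laplace(0.0, scale, size=c.shape) for c in coeffs]
    return pywt.waverec(noisy, wavelet)

private = wavelet_laplace([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
```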
arXiv Detail & Related papers (2019-12-30T18:07:37Z)