Smoothed Differential Privacy
- URL: http://arxiv.org/abs/2107.01559v4
- Date: Tue, 12 Dec 2023 23:49:49 GMT
- Title: Smoothed Differential Privacy
- Authors: Ao Liu, Yu-Xiang Wang, Lirong Xia
- Abstract summary: Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
- Score: 55.415581832037084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential privacy (DP) is a widely accepted and widely applied notion of
privacy based on worst-case analysis. Often, DP classifies most mechanisms
without additive noise as non-private (Dwork et al., 2014), so additive noise
is added to achieve DP. However, in many real-world applications, additive
noise is undesirable (Bagdasaryan et al., 2019) and sometimes prohibited (Liu
et al., 2020).
In this paper, we propose a natural extension of DP following the worst
average-case idea behind the celebrated smoothed analysis (Spielman & Teng,
2004). Our notion, smoothed DP, can effectively measure the privacy leakage of
mechanisms without additive noise under realistic settings. We prove that any
discrete mechanism with sampling procedures is more private than what DP
predicts, while many continuous mechanisms with sampling procedures are still
non-private under smoothed DP. In addition, we prove several desirable
properties of smoothed DP, including composition, robustness to
post-processing, and distribution reduction. Based on these properties, we
propose an efficient algorithm to calculate the privacy parameters for smoothed
DP. Experimentally, we verify that, according to smoothed DP, discrete
sampling mechanisms are private in real-world elections and that some discrete
neural networks can be private without any additive noise. We believe that
these results contribute to the theoretical foundation of realistic privacy
measures beyond worst-case analysis.
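To make the worst average-case idea concrete, the sketch below contrasts worst-case DP with a smoothed, average-case estimate of the privacy parameter delta(eps) for a noiseless discrete sampling mechanism: report the majority winner of a uniform random subsample of m out of n binary votes. The mechanism, the one-flipped-vote neighborhood, and the Bernoulli smoothing family are illustrative assumptions of mine, not the paper's formal definition or its algorithm.

```python
# A minimal numerical sketch under illustrative assumptions (NOT the
# paper's formal definition): worst-case vs. smoothed/average-case
# delta(eps) for a noiseless sampling mechanism that reports the majority
# winner of a uniform random subsample of m out of n binary votes.
from functools import lru_cache
from math import comb, exp


@lru_cache(maxsize=None)
def winner_prob(ones, n, m):
    """P(subsample majority is 1) when the database holds `ones` ones.
    The subsample's count of ones is Hypergeometric(n, ones, m); m is
    odd, so ties are impossible."""
    total = comb(n, m)
    return sum(comb(ones, k) * comb(n - ones, m - k)
               for k in range(m // 2 + 1, m + 1)) / total


def delta_pair(eps, n, m, k):
    """Hockey-stick divergence delta(eps) between neighboring databases
    holding k and k+1 ones; the output space is just {0, 1}."""
    a, b = winner_prob(k + 1, n, m), winner_prob(k, n, m)

    def hs(p, q):  # sum of positive parts [P(o) - e^eps * Q(o)]_+
        return (max(p - exp(eps) * q, 0.0)
                + max((1 - p) - exp(eps) * (1 - q), 0.0))

    return max(hs(a, b), hs(b, a))  # both directions, as DP requires


def worst_case_delta(eps, n, m):
    """Classical DP: maximize delta over all neighboring database pairs."""
    return max(delta_pair(eps, n, m, k) for k in range(n))


def smoothed_delta(eps, n, m, p):
    """Average-case stand-in: the n-1 non-differing votes are i.i.d.
    Bernoulli(p), so delta is averaged under a Binomial(n-1, p) weight."""
    return sum(comb(n - 1, k) * p ** k * (1 - p) ** (n - 1 - k)
               * delta_pair(eps, n, m, k) for k in range(n))


if __name__ == "__main__":
    n, m, eps = 501, 51, 0.1
    print("worst-case delta:", worst_case_delta(eps, n, m))
    # take the worst smoothing distribution from a small family near a tie
    print("smoothed delta:  ", max(smoothed_delta(eps, n, m, p)
                                   for p in (0.45, 0.50, 0.55)))
```

On this toy model, the smoothed value should fall well below the worst case whenever the high-leakage databases are unlikely under every distribution in the smoothing family, mirroring the paper's qualitative claim that discrete sampling mechanisms are more private than worst-case DP predicts.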
Related papers
- Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy [7.264378254137811]
Differential privacy (DP) measures privacy loss by observing the changes in the output distribution caused by the inclusion of individuals in the target dataset.
DP has been prominently adopted to safeguard machine learning datasets at industry giants like Apple and Google.
We propose per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing noise tailored to individual instances.
arXiv Detail & Related papers (2024-04-24T06:51:16Z)
- How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analyses obtained under the two types of batch sampling.
arXiv Detail & Related papers (2024-03-26T13:02:43Z)
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach [62.000948039914135]
Using Differentially Private Stochastic Gradient Descent with Gradient Clipping (DPSGD-GC) to ensure differential privacy (DP) comes at the cost of model performance degradation.
We propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC.
We establish an algorithm-specific DP analysis for the proposed algorithm, providing privacy guarantees based on Rényi DP.
arXiv Detail & Related papers (2023-11-24T17:56:44Z)
- Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z)
- Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into the training of deep neural networks through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z)
- Learning Numeric Optimal Differentially Private Truncated Additive Mechanisms [5.079561894598125]
We introduce a tool to learn truncated noise for additive mechanisms with strong utility bounds.
For sensitivity-bounded mechanisms, we show that it is sufficient to consider symmetric noise that falls monotonically away from the mean.
arXiv Detail & Related papers (2021-07-27T17:22:57Z)
- Practical Privacy Filters and Odometers with Rényi Differential Privacy and Applications to Differentially Private Deep Learning [0.0]
We study DP composition under adaptive privacy budgets through the lens of Rényi Differential Privacy.
We prove a simpler composition theorem with smaller constants, making it practical enough to use in algorithm design.
arXiv Detail & Related papers (2021-03-02T00:37:11Z)
- Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics [2.614355818010333]
We provide strong privacy loss bounds when an analyst may select pure DP, bounded-range (e.g., exponential mechanisms), or concentrated DP mechanisms in any order.
We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch.
arXiv Detail & Related papers (2020-04-15T17:33:10Z)