Are We There Yet? Timing and Floating-Point Attacks on Differential Privacy Systems
- URL: http://arxiv.org/abs/2112.05307v4
- Date: Wed, 11 Sep 2024 08:56:42 GMT
- Title: Are We There Yet? Timing and Floating-Point Attacks on Differential Privacy Systems
- Authors: Jiankai Jin, Eleanor McMurtry, Benjamin I. P. Rubinstein, Olga Ohrimenko
- Abstract summary: We study two implementation flaws in the noise generation commonly used in differentially private (DP) systems.
First, we examine the Gaussian mechanism's susceptibility to a floating-point representation attack.
Second, we study discrete counterparts of the Laplace and Gaussian mechanisms that suffer from another side channel: a novel timing attack.
- Score: 18.396937775602808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential privacy is a de facto privacy framework that has seen adoption in practice via a number of mature software platforms. Implementation of differentially private (DP) mechanisms has to be done carefully to ensure end-to-end security guarantees. In this paper we study two implementation flaws in the noise generation commonly used in DP systems. First, we examine the Gaussian mechanism's susceptibility to a floating-point representation attack. The premise of this first vulnerability is similar to that of the attack carried out by Mironov in 2011 against the Laplace mechanism. Our experiments show the attack's success against DP algorithms, including deep learning models trained using differentially private stochastic gradient descent. In the second part of the paper we study discrete counterparts of the Laplace and Gaussian mechanisms that were previously proposed to alleviate the shortcomings of floating-point representation of real numbers. We show that such implementations unfortunately suffer from another side channel: a novel timing attack. An observer that can measure the time to draw (discrete) Laplace or Gaussian noise can predict the noise magnitude, which can then be used to recover sensitive attributes. This attack invalidates the differential privacy guarantees of systems implementing such mechanisms. We demonstrate that several commonly used, state-of-the-art implementations of differential privacy are susceptible to these attacks. We report success rates of up to 92.56% for floating-point attacks on DP-SGD, and up to 99.65% for end-to-end timing attacks on a private sum protected with discrete Laplace noise. Finally, we evaluate and suggest partial mitigations.
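The two flaws described in the abstract can be illustrated with a short, self-contained Python sketch. This is not the paper's attack code: the toy discrete Laplace sampler, the scale parameter, and the use of coin-flip counts as a stand-in for wall-clock time are simplifying assumptions made here purely for illustration.

```python
import secrets
import numpy as np

# (1) Floating-point premise: adding Gaussian noise in IEEE-754 double precision
# rounds the result, and the rounding depends on the secret value being protected.
# The released double therefore carries information an ideal real-valued mechanism
# would not leak, which is the starting point of the floating-point attack.
rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, size=100_000)
survives_x0 = np.mean((0.0 + z) - 0.0 == z)   # noise recoverable exactly when x = 0
survives_x1 = np.mean((1.0 + z) - 1.0 == z)   # frequently not when x = 1, due to rounding
print(f"noise survives round-trip: x=0 -> {survives_x0:.3f}, x=1 -> {survives_x1:.3f}")

# (2) Timing premise: a toy discrete Laplace sampler built from Bernoulli trials.
# The number of trials (a proxy for running time) grows with the sampled geometric
# values, so it correlates with the magnitude of the released noise.
sysrand = secrets.SystemRandom()

def toy_discrete_laplace(scale: float) -> tuple[int, int]:
    """Return (noise, trials): noise is the difference of two geometric draws."""
    p = 1.0 - np.exp(-1.0 / scale)
    trials = 0

    def geometric() -> int:
        nonlocal trials
        k = 0
        while True:
            trials += 1
            if sysrand.random() < p:
                return k
            k += 1

    noise = geometric() - geometric()
    return noise, trials

draws = [toy_discrete_laplace(scale=20.0) for _ in range(5_000)]
magnitude = np.array([abs(n) for n, _ in draws], dtype=float)
trials = np.array([t for _, t in draws], dtype=float)
print("corr(|noise|, sampling trials):", np.corrcoef(magnitude, trials)[0, 1])
```

The first print shows that the released double retains the noise exactly when the secret is 0 but frequently not when it is 1 (value-dependent rounding); the final correlation is what lets a timing observer predict, and partially discount, the noise added to a private sum.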
Related papers
- Differentially Private Random Feature Model [52.468511541184895]
We produce a differentially private random feature model for privacy-preserving kernel machines.
We show that our method preserves privacy and derive a generalization error bound for the method.
arXiv Detail & Related papers (2024-12-06T05:31:08Z)
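The entry above gives no mechanism details, so the following is only a generic sketch of how random features and differential privacy are commonly combined: random Fourier features for an RBF kernel, ridge regression on the features, and Laplace output perturbation of the learned weights. It is not the construction of the cited paper, and `delta_f` is a placeholder for a sensitivity bound that a real analysis would have to derive.

```python
import numpy as np

def random_fourier_features(X: np.ndarray, n_features: int, lengthscale: float,
                            rng: np.random.Generator) -> np.ndarray:
    """Approximate an RBF kernel: k(x, x') ~ z(x) . z(x')."""
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def private_rf_regression(X, y, epsilon, delta_f, lam=1.0, n_features=200, seed=0):
    """Ridge regression on random features, released with Laplace-perturbed weights.

    delta_f stands in for a sensitivity bound on the weight vector; deriving a
    valid bound (from data clipping, lam and the loss) is the crux of a real DP
    analysis and is *not* done here.
    """
    rng = np.random.default_rng(seed)
    Z = random_fourier_features(X, n_features, lengthscale=1.0, rng=rng)
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(n_features), Z.T @ y)
    return w + rng.laplace(0.0, delta_f / epsilon, size=w.shape)

# usage: only the noisy weights are released; predictions reuse the same feature map
X = np.random.default_rng(1).normal(size=(500, 5))
y = X[:, 0] + 0.1 * np.random.default_rng(2).normal(size=500)
w_priv = private_rf_regression(X, y, epsilon=1.0, delta_f=0.05)
```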
- To Shuffle or not to Shuffle: Auditing DP-SGD with Shuffling [25.669347036509134]
We analyze Differentially Private Stochastic Gradient Descent (DP-SGD) with shuffling.
We show that the privacy guarantees reported for state-of-the-art DP models trained with shuffling are appreciably overestimated (by up to 4x).
Our work empirically attests to the risk of using shuffling instead of Poisson sub-sampling vis-a-vis the actual privacy leakage of DP-SGD.
arXiv Detail & Related papers (2024-11-15T22:34:28Z)
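The gap reported above comes from the mismatch between the batch-sampling scheme that DP-SGD's privacy accounting assumes and the one implementations actually use. The sketch below contrasts the two schemes; it is a generic illustration, not the paper's auditing procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, batch_size = 10_000, 100
q = batch_size / n  # expected sampling rate

def poisson_batches(n: int, q: float, steps: int):
    """The analyzed scheme: each example is included independently with prob. q,
    so batch sizes are random and an example may appear in several batches."""
    for _ in range(steps):
        yield np.flatnonzero(rng.random(n) < q)

def shuffled_batches(n: int, batch_size: int):
    """What many implementations actually do: one random permutation per epoch,
    split into fixed-size batches; every example appears exactly once."""
    perm = rng.permutation(n)
    for start in range(0, n, batch_size):
        yield perm[start:start + batch_size]

sizes_poisson = [len(b) for b in poisson_batches(n, q, steps=n // batch_size)]
sizes_shuffle = [len(b) for b in shuffled_batches(n, batch_size)]
print("Poisson batch sizes vary:", min(sizes_poisson), "-", max(sizes_poisson))
print("Shuffled batch sizes are fixed:", set(sizes_shuffle))
# Standard DP-SGD accounting assumes the first scheme; auditing models trained
# with the second is how the overestimated guarantees above were exposed.
```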
- Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy [7.264378254137811]
Differential privacy (DP) measures privacy loss by observing the change in the output distribution caused by the inclusion of an individual in the target dataset.
DP has been prominently adopted to safeguard machine-learning datasets at industry giants such as Apple and Google.
We propose per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing noise tailored to individual instances.
arXiv Detail & Related papers (2024-04-24T06:51:16Z)
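To make the per-instance idea concrete, here is a minimal sketch for the Laplace mechanism on a clipped sum: the worst-case guarantee is set by the clipping bound, while each record's own privacy loss scales with that record's (clipped) magnitude. The query, clipping bound, and noise scale are illustrative choices, not the game-theoretic noise optimization the cited paper proposes.

```python
import numpy as np

def laplace_sum(x: np.ndarray, clip: float, epsilon: float,
                rng: np.random.Generator) -> float:
    """epsilon-DP release of a clipped sum via the Laplace mechanism."""
    clipped = np.clip(x, -clip, clip)
    scale = clip / epsilon            # calibrated to worst-case sensitivity = clip
    return clipped.sum() + rng.laplace(0.0, scale)

def per_instance_epsilon(x: np.ndarray, clip: float, epsilon: float) -> np.ndarray:
    """Per-instance DP loss: removing record i changes the clipped sum by
    |clip(x_i)|, so its individual loss is |clip(x_i)| / scale <= epsilon."""
    scale = clip / epsilon
    return np.abs(np.clip(x, -clip, clip)) / scale

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.3, size=1_000)
release = laplace_sum(x, clip=1.0, epsilon=1.0, rng=rng)
eps_i = per_instance_epsilon(x, clip=1.0, epsilon=1.0)
print(f"worst-case epsilon: 1.0, median per-instance epsilon: {np.median(eps_i):.3f}")
```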
- How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analysis of DP-SGD under the two types of batch sampling (shuffling versus Poisson sub-sampling).
arXiv Detail & Related papers (2024-03-26T13:02:43Z)
- The Symmetric alpha-Stable Privacy Mechanism [0.0]
We present a novel analysis of the Symmetric alpha-Stable (SaS) mechanism.
We prove that the mechanism is purely differentially private while remaining closed under convolution.
arXiv Detail & Related papers (2023-11-29T16:34:39Z)
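Operationally, an SaS mechanism just adds symmetric alpha-stable noise (beta = 0) to the query answer; the cited paper's contribution is the privacy analysis of that noise, not the sampling step. The stability index, scale, and query below are illustrative choices.

```python
import numpy as np
from scipy.stats import levy_stable

def sas_mechanism(true_answer: float, alpha: float, scale: float,
                  rng: np.random.Generator) -> float:
    """Release the answer plus symmetric alpha-stable noise (beta = 0)."""
    noise = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=scale, random_state=rng)
    return true_answer + noise

# Closure under convolution: the sum of two independent SaS draws with the same
# alpha is again SaS (with a larger scale), the property the entry highlights.
rng = np.random.default_rng(0)
releases = [sas_mechanism(42.0, alpha=1.5, scale=2.0, rng=rng) for _ in range(5)]
print(releases)
```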
- Additive Logistic Mechanism for Privacy-Preserving Self-Supervised Learning [26.783944764936994]
We study the privacy risks that are associated with training a neural network's weights with self-supervised learning algorithms.
We design a post-training privacy-protection algorithm that adds noise to the fine-tuned weights.
We show that the proposed protection algorithm can effectively reduce the attack accuracy to roughly 50%, equivalent to random guessing.
arXiv Detail & Related papers (2022-05-25T01:33:52Z)
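A minimal sketch of the kind of post-training protection described above: add logistic noise to the fine-tuned weights before releasing them. The flat per-parameter noise scale is an assumption made here for illustration; calibrating it (and bounding the weights) is where the actual privacy/utility argument of such a method lives.

```python
import numpy as np

def release_noisy_weights(weights: dict[str, np.ndarray], scale: float,
                          seed: int = 0) -> dict[str, np.ndarray]:
    """Add i.i.d. logistic noise to every fine-tuned parameter before release."""
    rng = np.random.default_rng(seed)
    return {name: w + rng.logistic(loc=0.0, scale=scale, size=w.shape)
            for name, w in weights.items()}

# usage with a toy classification head fine-tuned on top of a frozen encoder
weights = {"head.w": np.random.default_rng(1).normal(size=(128, 10)),
           "head.b": np.zeros(10)}
noisy = release_noisy_weights(weights, scale=0.05)
```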
- Sampling-Based Robust Control of Autonomous Systems with Non-Gaussian Noise [59.47042225257565]
We present a novel planning method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous system into a discrete-state model that captures noise by probabilistic transitions between states.
We capture bounds on these transition probabilities in the intervals of a so-called interval Markov decision process (iMDP).
arXiv Detail & Related papers (2021-10-25T06:18:55Z)
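To make "transition probability intervals" concrete, here is a tiny robust value-iteration sketch over an interval MDP: for every state-action pair the adversary picks the worst transition distribution consistent with the interval bounds. The states, rewards, and intervals are made-up toy data, not the abstraction the cited method computes.

```python
import numpy as np

def worst_case_expectation(values, lows, highs):
    """Minimize sum_i p_i * values_i subject to lows <= p <= highs, sum p = 1.
    Greedy: start at the lower bounds, then pour the remaining mass onto the
    smallest values first."""
    p = np.array(lows, dtype=float)
    remaining = 1.0 - p.sum()
    for i in np.argsort(values):
        take = min(highs[i] - p[i], remaining)
        p[i] += take
        remaining -= take
    return float(np.dot(p, values))

def robust_value_iteration(rewards, lows, highs, gamma=0.9, iters=100):
    """rewards[s, a]; lows/highs[s, a, s'] are interval transition bounds."""
    n_states, n_actions = rewards.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.array([[rewards[s, a] + gamma * worst_case_expectation(
                           V, lows[s, a], highs[s, a])
                       for a in range(n_actions)] for s in range(n_states)])
        V = Q.max(axis=1)
    return V

# toy iMDP: 2 states, 2 actions, loose transition intervals
rewards = np.array([[0.0, 1.0], [1.0, 0.0]])
lows = np.full((2, 2, 2), 0.2)
highs = np.full((2, 2, 2), 0.8)
print(robust_value_iteration(rewards, lows, highs))
```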
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- Gaussian Processes with Differential Privacy [3.934224774675743]
We add strong privacy protection to Gaussian processes (GPs) via differential privacy (DP).
We achieve this by using sparse GP methodology and publishing a private variational approximation on known inducing points.
Our experiments demonstrate that, given a sufficient amount of data, the method can produce accurate models under strong privacy protection.
arXiv Detail & Related papers (2021-06-01T13:23:16Z)
- On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for its privacy protection when training machine learning models among distributed clients collaboratively.
Recent studies have pointed out that the naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.