Infinitely Divisible Noise in the Low Privacy Regime
- URL: http://arxiv.org/abs/2110.06559v1
- Date: Wed, 13 Oct 2021 08:16:43 GMT
- Title: Infinitely Divisible Noise in the Low Privacy Regime
- Authors: Rasmus Pagh, Nina Mesing Stausholm
- Abstract summary: Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning.
We present the first infinitely divisible noise distribution for real-valued data that achieves $\varepsilon$-differential privacy.
- Score: 9.39772079241093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning, in which training data is distributed among users and
never shared, has emerged as a popular approach to privacy-preserving machine
learning. Cryptographic techniques such as secure aggregation are used to
aggregate contributions, like a model update, from all users. A robust
technique for making such aggregates differentially private is to exploit
infinite divisibility of the Laplace distribution, namely, that a Laplace
distribution can be expressed as a sum of i.i.d. noise shares from a Gamma
distribution, one share added by each user.
However, Laplace noise is known to have suboptimal error in the low privacy
regime for $\varepsilon$-differential privacy, where $\varepsilon > 1$ is a
large constant. In this paper we present the first infinitely divisible noise
distribution for real-valued data that achieves $\varepsilon$-differential
privacy and has expected error that decreases exponentially with $\varepsilon$.
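To make the Gamma-share trick in the abstract concrete, here is a minimal sketch in Python/NumPy. It relies only on two standard facts, not on anything specific to this paper: a centered Laplace variable with scale $b$ is the difference of two independent Gamma(1, $b$) (i.e., Exponential) variables, and a Gamma(1, $b$) variable splits into $n$ i.i.d. Gamma($1/n$, $b$) shares. All names and parameters are illustrative.

```python
import numpy as np

def gamma_noise_share(n_users: int, scale: float, rng: np.random.Generator) -> float:
    # One user's additive noise share: the difference of two independent
    # Gamma(1/n, scale) draws. Summed over all n users, the shares give
    # Gamma(1, scale) - Gamma(1, scale), i.e., one Laplace(0, scale) sample.
    return rng.gamma(1.0 / n_users, scale) - rng.gamma(1.0 / n_users, scale)

rng = np.random.default_rng(0)
n_users, sensitivity, eps = 100, 1.0, 1.0
scale = sensitivity / eps  # Laplace scale achieving eps-DP for this sensitivity

# Each user adds a share to their own contribution before secure aggregation;
# the aggregate then carries exactly one Laplace(0, scale) noise term.
total_noise = sum(gamma_noise_share(n_users, scale, rng) for _ in range(n_users))
```

Note that this reproduces the Laplace baseline, whose expected error decays only polynomially in $\varepsilon$; the paper's contribution is a different infinitely divisible distribution with the same per-user share structure but error decreasing exponentially with $\varepsilon$.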
Related papers
- A Generalized Shuffle Framework for Privacy Amplification: Strengthening Privacy Guarantees and Enhancing Utility [4.7712438974100255]
We show how to shuffle in the $(\epsilon_i,\delta_i)$-PLDP setting with personalized privacy parameters.
We prove that the shuffled $(\epsilon_i,\delta_i)$-PLDP process approximately preserves $\mu$-Gaussian Differential Privacy with $\mu = \sqrt{\frac{2}{\sum_{i=1}^n \frac{1-\delta_i}{1+e^{\epsilon_i}} - \max_i \frac{1-\delta_i}{1+e^{\epsilon_i}}}}$; a small sketch of this computation follows.
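A quick way to read the formula above: $\mu$ is a direct function of the per-user privacy parameters. The sketch below (illustrative names, not the paper's code) evaluates it for a heterogeneous population.

```python
import numpy as np

def gdp_mu(eps: np.ndarray, delta: np.ndarray) -> float:
    # Per-user terms (1 - delta_i) / (1 + e^{eps_i}) from the formula above.
    terms = (1.0 - delta) / (1.0 + np.exp(eps))
    # mu = sqrt( 2 / (sum of the terms minus the largest term) )
    return float(np.sqrt(2.0 / (terms.sum() - terms.max())))

rng = np.random.default_rng(0)
eps = rng.uniform(0.5, 2.0, size=1000)   # personalized epsilon_i
delta = np.full(1000, 1e-6)              # personalized delta_i
print(gdp_mu(eps, delta))                # smaller mu means a stronger guarantee
```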
arXiv Detail & Related papers (2023-12-22T02:31:46Z) - Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it has limitations when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Frequency Estimation Under Multiparty Differential Privacy: One-shot and Streaming [10.952006057356714]
We study the fundamental problem of frequency estimation under both privacy and communication constraints, where the data is distributed among $k$ parties.
We adopt the model of multiparty differential privacy (MDP), which is more general than local differential privacy (LDP) and (centralized) differential privacy.
Our protocols achieve optimality (up to logarithmic factors) permissible by the more stringent of the two constraints.
arXiv Detail & Related papers (2021-04-05T08:15:20Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling [49.43288037509783]
We show that random shuffling amplifies differential privacy guarantees of locally randomized data.
Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees.
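As a toy illustration of the shuffle model this entry analyzes (my sketch, not the paper's protocol): each user locally randomizes one bit with $\epsilon_0$-LDP randomized response, and a trusted shuffler then discards the ordering; amplification results bound the much smaller central $\epsilon$ of the shuffled view.

```python
import numpy as np

def randomized_response(bit: int, eps0: float, rng: np.random.Generator) -> int:
    # eps0-LDP randomized response: answer truthfully w.p. e^eps0 / (1 + e^eps0).
    p_truth = np.exp(eps0) / (1.0 + np.exp(eps0))
    return bit if rng.random() < p_truth else 1 - bit

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=1000)  # one private bit per user
reports = np.array([randomized_response(int(b), eps0=1.0, rng=rng) for b in data])
rng.shuffle(reports)  # the shuffler erases which user sent which report
# The analyst sees only this anonymous multiset; privacy amplification by
# shuffling shows it satisfies a central epsilon well below the local eps0.
```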
arXiv Detail & Related papers (2020-12-23T17:07:26Z) - On the Intrinsic Differential Privacy of Bagging [69.70602220716718]
Our experimental results demonstrate that Bagging achieves significantly higher accuracies than state-of-the-art differentially private machine learning methods with the same privacy budgets.
arXiv Detail & Related papers (2020-08-22T14:17:55Z) - Learning discrete distributions: user vs item-level privacy [47.05234904407048]
Recently, many practical applications, such as federated learning, require preserving privacy for all items of a single user.
We study the fundamental problem of learning discrete distributions over $k$ symbols with user-level differential privacy.
We propose a mechanism such that the number of users scales as $\tilde{\mathcal{O}}(k/(m\alpha^2) + k/(\sqrt{m}\epsilon\alpha))$, and hence the privacy penalty is $\tilde{\Theta}(\sqrt{m})$ times smaller compared to the standard mechanisms.
arXiv Detail & Related papers (2020-07-27T16:15:14Z) - BUDS: Balancing Utility and Differential Privacy by Shuffling [3.618133010429131]
Balancing utility and differential privacy by shuffling, or BUDS, is an approach towards crowd-sourced, statistical databases.
A new algorithm is proposed using one-hot encoding and iterative shuffling together with loss estimation and risk minimization techniques.
In empirical tests of balanced utility and privacy, BUDS produces $\epsilon = 0.02$, which is a very promising result.
arXiv Detail & Related papers (2020-06-07T11:39:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.