Numerical Composition of Differential Privacy
- URL: http://arxiv.org/abs/2106.02848v1
- Date: Sat, 5 Jun 2021 09:20:15 GMT
- Title: Numerical Composition of Differential Privacy
- Authors: Sivakanth Gopi, Yin Tat Lee, Lukas Wutschitz
- Abstract summary: We give a fast algorithm to optimally compose privacy guarantees of differentially private (DP) algorithms to arbitrary accuracy.
Our method is based on the notion of privacy loss random variables to quantify the privacy loss of DP algorithms.
- Score: 22.523399777002926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We give a fast algorithm to optimally compose privacy guarantees of
differentially private (DP) algorithms to arbitrary accuracy. Our method is
based on the notion of privacy loss random variables to quantify the privacy
loss of DP algorithms. The running time and memory needed for our algorithm to
approximate the privacy curve of a DP algorithm composed with itself $k$ times
is $\tilde{O}(\sqrt{k})$. This improves over the best prior method by Koskela
et al. (2020) which requires $\tilde{\Omega}(k^{1.5})$ running time. We
demonstrate the utility of our algorithm by accurately computing the privacy
loss of the DP-SGD algorithm of Abadi et al. (2016) and showing that our algorithm
speeds up the privacy computations by a few orders of magnitude compared to
prior work, while maintaining similar accuracy.
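Under the privacy-loss-random-variable view, composing DP mechanisms amounts to adding independent privacy loss random variables, i.e., convolving their privacy loss distributions (PLDs), after which $\delta(\epsilon)$ can be read off the composed distribution. The sketch below is a naive discretize-and-FFT illustration of this idea for the Gaussian mechanism; it is not the paper's algorithm, and the grid bounds, step size, and parameter values are ad hoc assumptions.
```python
# Naive sketch of numerical composition via privacy loss distributions (PLDs):
# discretize the PLD of one mechanism, self-convolve it k times with an FFT,
# then evaluate delta(eps) = E[(1 - exp(eps - L))_+] on the composed PLD.
import numpy as np

def gaussian_pld(sigma, grid):
    """Discretized PLD of the Gaussian mechanism (sensitivity 1, noise sigma);
    its privacy loss random variable is N(eta/2, eta) with eta = 1/sigma**2."""
    eta = 1.0 / sigma ** 2
    density = np.exp(-(grid - eta / 2) ** 2 / (2 * eta)) / np.sqrt(2 * np.pi * eta)
    pmf = density * (grid[1] - grid[0])
    return pmf / pmf.sum()              # renormalize away the truncated tails

def self_convolve(pmf, k):
    """k-fold self-convolution of a PMF on a uniform grid via FFT."""
    n = k * len(pmf)                    # enough room to avoid circular wrap-around
    freq = np.fft.rfft(pmf, n)
    out = np.fft.irfft(freq ** k, n)
    return np.clip(out, 0.0, None)      # clear tiny negative FFT round-off

def delta_of_eps(pmf, loss_start, step, eps):
    """delta(eps) for a discretized PLD whose support starts at loss_start."""
    losses = loss_start + step * np.arange(len(pmf))
    return float(np.sum(pmf * np.maximum(0.0, 1.0 - np.exp(eps - losses))))

sigma, k, eps = 20.0, 100, 1.0          # illustrative parameters only
step = 1e-4
grid = np.arange(-1.0, 1.0, step)       # comfortably covers the one-step PLD
single = gaussian_pld(sigma, grid)
composed = self_convolve(single, k)     # support now starts at k * grid[0]
print(delta_of_eps(composed, k * grid[0], step, eps))
```
This sketch only illustrates why composition reduces to convolution and how $\delta(\epsilon)$ is read off a PLD; the paper's contribution is carrying this out to a prescribed accuracy with $\tilde{O}(\sqrt{k})$ running time and memory.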
Related papers
- Privacy-Computation trade-offs in Private Repetition and Metaselection [28.11248357424981]
A Private Repetition algorithm takes as input a differentially private algorithm with constant success probability and boosts it to one that succeeds with high probability.
Existing algorithms for these tasks pay either a large overhead in privacy cost, or a large overhead in computational cost.
This is in stark contrast with the non-private setting, where the failure probability falls exponentially in the computational overhead.
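For intuition, the naive baseline behind that contrast (not the paper's construction): if one run of an $\epsilon$-DP algorithm succeeds with probability at least $1/2$, then $k = \lceil \log_2(1/\beta) \rceil$ independent runs all fail with probability at most $2^{-k} \le \beta$, but releasing all $k$ outputs costs $k\epsilon$ under basic composition, so the privacy overhead grows with $\log(1/\beta)$ even though, non-privately, the failure probability falls exponentially in the same extra computation.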
arXiv Detail & Related papers (2024-10-22T18:33:02Z)
- Individualized Privacy Accounting via Subsampling with Applications in Combinatorial Optimization [55.81991984375959]
In this work, we give a new technique for analyzing individualized privacy accounting via the following simple observation.
We obtain several improved algorithms for private combinatorial optimization problems, including decomposable submodular maximization and set cover.
arXiv Detail & Related papers (2024-05-28T19:02:30Z)
- Efficient Sparse Least Absolute Deviation Regression with Differential Privacy [10.414082115202003]
We develop a fast privacy-preserving learning solution for a sparse robust regression problem.
Our algorithm achieves a fast estimation by reformulating the sparse LAD problem as a penalized least squares estimation problem.
We show that our algorithm can achieve better privacy and statistical accuracy trade-off compared with the state-of-the-art privacy-preserving regression algorithms.
arXiv Detail & Related papers (2024-01-02T17:13:34Z)
- Private Networked Federated Learning for Nonsmooth Objectives [7.278228169713637]
This paper develops a networked federated learning algorithm to solve nonsmooth objective functions.
We use the zero-concentrated differential privacy notion (zCDP) to guarantee the confidentiality of the participants.
We provide complete theoretical proof for the privacy guarantees and the algorithm's convergence to the exact solution.
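For reference, the zCDP notion used here is the standard one (Bun and Steinke, 2016), not something specific to this paper: a randomized mechanism $M$ is $\rho$-zCDP if $D_\alpha(M(D) \| M(D')) \le \rho \alpha$ for every pair of neighboring datasets $D, D'$ and every Rényi order $\alpha > 1$; for example, the Gaussian mechanism with $\ell_2$-sensitivity $\Delta$ and noise scale $\sigma$ satisfies $\rho$-zCDP with $\rho = \Delta^2 / (2\sigma^2)$.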
arXiv Detail & Related papers (2023-06-24T16:13:28Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper studies the practically and theoretically important problem of developing privacy-preserving machine learning algorithms that still deliver good performance.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
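A minimal sketch of the two per-sample gradient treatments being compared, on toy NumPy gradients; the noise scales, the normalization regularizer r, and all parameter values below are simplified assumptions for illustration, not the calibrated procedures from the paper.
```python
# Per-sample gradient processing: DP-SGD clips each gradient to a fixed norm,
# DP-NSGD normalizes each gradient (with a small regularizer r); both then
# average and add Gaussian noise. Parameters here are illustrative only.
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    """Clip each per-sample gradient to clip_norm, average, add Gaussian noise."""
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / norms)
    mean = clipped.mean(axis=0)
    std = noise_multiplier * clip_norm / len(per_sample_grads)
    return mean + rng.normal(0.0, std, size=mean.shape)

def dp_nsgd_step(per_sample_grads, scale, r, noise_multiplier, rng):
    """Normalize each per-sample gradient by (norm + r), average, add noise."""
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    normalized = scale * per_sample_grads / (norms + r)
    mean = normalized.mean(axis=0)
    std = noise_multiplier * scale / len(per_sample_grads)
    return mean + rng.normal(0.0, std, size=mean.shape)

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))   # 32 toy per-sample gradients of dimension 10
g_clip = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.0, rng=rng)
g_norm = dp_nsgd_step(grads, scale=1.0, r=0.01, noise_multiplier=1.0, rng=rng)
```
One way to see the tuning difference noted above: DP-SGD's clip_norm interacts with the raw gradient magnitudes, whereas normalization gives every per-sample contribution roughly unit influence regardless of scale.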
arXiv Detail & Related papers (2022-06-27T03:45:02Z)
- Scalable Differentially Private Clustering via Hierarchically Separated Trees [82.69664595378869]
We show that our method computes a solution with cost at most $O(d^{3/2}\log n)\cdot OPT + O(k d^2 \log^2 n / \epsilon^2)$, where $\epsilon$ is the privacy guarantee.
Although the worst-case guarantee is worse than that of state-of-the-art private clustering methods, the algorithm we propose is practical.
arXiv Detail & Related papers (2022-06-17T09:24:41Z)
- Differentially Private Federated Learning via Inexact ADMM [0.0]
Differential privacy (DP) techniques can be applied to the federated learning model to protect data privacy against inference attacks.
We develop a DP inexact alternating direction method of multipliers algorithm that solves a sequence of trust-region subproblems.
Our algorithm reduces the testing error by at most $22\%$ compared with the existing DP algorithm, while achieving the same level of data privacy.
arXiv Detail & Related papers (2021-06-11T02:28:07Z)
- No-Regret Algorithms for Private Gaussian Process Bandit Optimization [13.660643701487002]
We consider the ubiquitous problem of Gaussian process (GP) bandit optimization from the lens of privacy-preserving statistics.
We propose a solution for differentially private GP bandit optimization that combines a uniform kernel approximator with random perturbations.
Our algorithms maintain differential privacy throughout the optimization procedure and critically do not rely explicitly on the sample path for prediction.
arXiv Detail & Related papers (2021-02-24T18:52:24Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
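As a concrete reading of that bound (arithmetic only, no claim beyond the stated rate): $\min\{n, n^2/d\} = n^2/d$ whenever $d \ge n$, so in the high-dimensional regime the number of gradient computations is sublinear in the sample size, while for $d \le n$ it is linear in $n$.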
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.