Faster Differentially Private Samplers via Rényi Divergence Analysis
of Discretized Langevin MCMC
- URL: http://arxiv.org/abs/2010.14658v2
- Date: Thu, 17 Dec 2020 07:21:58 GMT
- Title: Faster Differentially Private Samplers via Rényi Divergence Analysis
of Discretized Langevin MCMC
- Authors: Arun Ganesh, Kunal Talwar
- Abstract summary: Langevin dynamics-based algorithms offer much faster alternatives under some distance measures such as statistical distance.
Our techniques are simple and generic and apply to underdamped Langevin dynamics.
- Score: 35.050135428062795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various differentially private algorithms instantiate the exponential
mechanism, and require sampling from the distribution $\exp(-f)$ for a suitable
function $f$. When the domain of the distribution is high-dimensional, this
sampling can be computationally challenging. Using heuristic sampling schemes
such as Gibbs sampling does not necessarily lead to provable privacy. When $f$
is convex, techniques from log-concave sampling lead to polynomial-time
algorithms, albeit with large polynomials. Langevin dynamics-based algorithms
offer much faster alternatives under some distance measures such as statistical
distance. In this work, we establish rapid convergence for these algorithms
under distance measures more suitable for differential privacy. For smooth,
strongly-convex $f$, we give the first results proving convergence in Rényi
divergence. This gives us fast differentially private algorithms for such $f$.
Our techniques are simple and generic and apply also to underdamped Langevin
dynamics.
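For illustration, here is a minimal sketch of the discretized (unadjusted) Langevin iteration the abstract refers to, applied to a toy quadratic $f$. The step size and iteration count below are placeholder values chosen for readability, not the tuned parameters from the paper's guarantees.

```python
import numpy as np

def langevin_sample(grad_f, x0, eta=0.01, n_steps=5000, rng=None):
    """Unadjusted Langevin algorithm:
        x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2 * eta) * N(0, I).
    Returns an approximate sample from the density proportional to exp(-f).
    eta and n_steps are illustrative; the paper fixes them in terms of the
    smoothness and strong-convexity parameters of f.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * noise
    return x

# Toy example: f(x) = ||x||^2 / 2, so exp(-f) is a standard Gaussian.
sample = langevin_sample(grad_f=lambda x: x, x0=np.zeros(3))
```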
Related papers
- Some Constructions of Private, Efficient, and Optimal $K$-Norm and Elliptic Gaussian Noise [54.34628844260993]
Differentially private computation often begins with a bound on some $d$-dimensional statistic's sensitivity.
For pure differential privacy, the $K$-norm mechanism can improve on this approach using a norm tailored to the statistic's sensitivity space.
This paper solves both problems for the simple statistics of sum, count, and vote.
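As background, a minimal sketch of the standard K-norm sampling recipe (radius from a Gamma distribution times a uniform point in the sensitivity body K) for the illustrative special case where K is the $\ell_1$ ball, in which the mechanism coincides with coordinate-wise Laplace noise. The function names are hypothetical; the paper's contribution concerns efficient, optimal constructions for richer norms.

```python
import numpy as np

def l1_ball_uniform(d, rng):
    """Draw a point uniformly from the d-dimensional l1 unit ball."""
    e = rng.standard_exponential(d)
    signs = rng.choice([-1.0, 1.0], size=d)
    direction = signs * e / e.sum()        # cone measure on the l1 sphere
    radius = rng.uniform() ** (1.0 / d)    # correct radial density r^(d-1)
    return radius * direction

def k_norm_noise_l1(d, eps, rng=None):
    """K-norm noise for K = l1 ball: density proportional to exp(-eps * ||z||_1).

    General recipe: z = r * u with r ~ Gamma(d + 1, scale 1/eps), u uniform in K.
    For the l1 ball this matches i.i.d. Laplace(1/eps) coordinates.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = rng.gamma(shape=d + 1, scale=1.0 / eps)
    return r * l1_ball_uniform(d, rng)
```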
arXiv Detail & Related papers (2023-09-27T17:09:36Z) - Convergence Rates for Non-Log-Concave Sampling and Log-Partition
Estimation [0.0]
It is known that for $m$-times differentiable functions in $d$ dimensions, the optimal rate for algorithms making $n$ function evaluations is $n^{-m/d}$.
We show that similar rates are possible for sampling and log-partition estimation, and examine whether they can be realized by algorithms running in time polynomial in $d$.
arXiv Detail & Related papers (2023-03-06T15:53:44Z) - Privately Estimating a Gaussian: Efficient, Robust and Optimal [6.901744415870126]
We give efficient algorithms for privately estimating a Gaussian distribution in both pure and approximate differential privacy (DP) models.
In the pure DP setting, we give an efficient algorithm that estimates an unknown $d$-dimensional Gaussian distribution up to an arbitrarily small total variation error.
For the special case of mean estimation, our algorithm achieves the optimal sample complexity of $\widetilde{O}(d)$, improving on a $\widetilde{O}(d^{1.5})$ bound from prior work.
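For context, a minimal sketch of the textbook clip-and-add-Gaussian-noise baseline for private mean estimation under $(\epsilon, \delta)$-DP. It illustrates the problem setting only; the estimators in the paper above are more sophisticated and achieve the stated sample complexity.

```python
import numpy as np

def clipped_gaussian_mean(samples, clip_radius, eps, delta, rng=None):
    """Baseline (eps, delta)-DP mean estimate: clip each sample to an l2 ball,
    average, and add Gaussian noise calibrated to the clipped sensitivity.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(samples, dtype=float)
    n, d = X.shape
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    clipped = X * np.minimum(1.0, clip_radius / np.maximum(norms, 1e-12))
    sensitivity = 2.0 * clip_radius / n               # l2 sensitivity of the clipped mean
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clipped.mean(axis=0) + rng.normal(scale=sigma, size=d)
```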
arXiv Detail & Related papers (2022-12-15T18:27:39Z) - Resolving the Mixing Time of the Langevin Algorithm to its Stationary
Distribution for Log-Concave Sampling [34.66940399825547]
This paper characterizes the mixing time of the Langevin Algorithm to its stationary distribution.
We introduce a technique from the differential privacy literature to the sampling literature.
arXiv Detail & Related papers (2022-10-16T05:11:16Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Normalized/Clipped SGD with Perturbation for Differentially Private
Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
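For intuition, a minimal sketch of one private update in the style of DP-SGD (per-example clipping) versus DP-NSGD (per-example normalization, here without the paper's regularization term). Hyperparameter values are placeholders, not the settings studied in the paper.

```python
import numpy as np

def private_gradient_step(per_example_grads, lr, clip_c, noise_mult,
                          normalize=False, rng=None):
    """One noisy update: clip or normalize each example's gradient, sum,
    add Gaussian noise scaled to clip_c, and average.
    per_example_grads: array of shape (batch, dim).
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.asarray(per_example_grads, dtype=float)
    norms = np.maximum(np.linalg.norm(g, axis=1, keepdims=True), 1e-12)
    if normalize:
        g = clip_c * g / norms                    # DP-NSGD-style: rescale to norm clip_c
    else:
        g = g * np.minimum(1.0, clip_c / norms)   # DP-SGD-style: clip norms above clip_c
    noisy_sum = g.sum(axis=0) + rng.normal(scale=noise_mult * clip_c, size=g.shape[1])
    return -lr * noisy_sum / g.shape[0]           # parameter update to add to the weights
```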
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Efficient Mean Estimation with Pure Differential Privacy via a
Sum-of-Squares Exponential Mechanism [16.996435043565594]
We give the first polynomial-time algorithm to estimate the mean of a $d$-variate probability distribution with bounded covariance from $\tilde{O}(d)$ independent samples subject to pure differential privacy.
Our main technique is a new approach to use the powerful Sum of Squares method (SoS) to design differentially private algorithms.
arXiv Detail & Related papers (2021-11-25T09:31:15Z) - Towards Sample-Optimal Compressive Phase Retrieval with Sparse and
Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
arXiv Detail & Related papers (2021-06-29T12:49:54Z) - Projected Stochastic Gradient Langevin Algorithms for Constrained
Sampling and Non-Convex Learning [0.0]
Langevin algorithms are gradient descent methods with additive noise.
They have been used for decades in Markov chain Monte Carlo (MCMC) sampling.
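Below is a minimal sketch of a projected Langevin iteration of the kind this paper studies: a noisy (possibly stochastic) gradient step followed by projection onto the constraint set. The projection operator and hyperparameters are assumptions for illustration, not the paper's analyzed settings.

```python
import numpy as np

def projected_langevin(grad_f, project, x0, eta=1e-3, n_steps=10000, rng=None):
    """Projected Langevin: noisy gradient step, then project onto the constraint set.
    grad_f may be a stochastic gradient oracle; project maps points back into the set.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * noise
        x = project(x)
    return x

# Example constraint: the Euclidean unit ball.
# project = lambda x: x / max(1.0, np.linalg.norm(x))
```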
arXiv Detail & Related papers (2020-12-22T16:19:20Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and
Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of our algorithms over standard gradient methods on two popular deep learning tasks.
arXiv Detail & Related papers (2020-06-24T06:01:24Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)