Sampling from Log-Concave Distributions with Infinity-Distance
Guarantees and Applications to Differentially Private Optimization
- URL: http://arxiv.org/abs/2111.04089v1
- Date: Sun, 7 Nov 2021 13:44:50 GMT
- Title: Sampling from Log-Concave Distributions with Infinity-Distance
Guarantees and Applications to Differentially Private Optimization
- Authors: Oren Mangoubi and Nisheeth K. Vishnoi
- Abstract summary: We present an algorithm that outputs a point from a distribution $O(\varepsilon)$-close to $\pi$ in infinity-distance.
We also present a "soft-threshold" version of the Dikin walk which may be of independent interest.
- Score: 33.38289436686841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For a $d$-dimensional log-concave distribution $\pi(\theta)\propto
e^{-f(\theta)}$ on a polytope $K$, we consider the problem of outputting
samples from a distribution $\nu$ which is $O(\varepsilon)$-close in
infinity-distance $\sup_{\theta\in K}|\log\frac{\nu(\theta)}{\pi(\theta)}|$ to
$\pi$. Such samplers with infinity-distance guarantees are specifically desired
for differentially private optimization as traditional sampling algorithms
which come with total-variation distance or KL divergence bounds are
insufficient to guarantee differential privacy. Our main result is an algorithm
that outputs a point from a distribution $O(\varepsilon)$-close to $\pi$ in
infinity-distance and requires
$O((md+dL^2R^2)\times(LR+d\log(\frac{Rd+LRd}{\varepsilon r}))\times
md^{\omega-1})$ arithmetic operations, where $f$ is $L$-Lipschitz, $K$ is
defined by $m$ inequalities, is contained in a ball of radius $R$ and contains
a ball of smaller radius $r$, and $\omega$ is the matrix-multiplication
constant. In particular this runtime is logarithmic in $\frac{1}{\varepsilon}$
and significantly improves on prior works. Technically, we depart from the
prior works that construct Markov chains on a
$\frac{1}{\varepsilon^2}$-discretization of $K$ to achieve a sample with
$O(\varepsilon)$ infinity-distance error, and present a method to convert
continuous samples from $K$ with total-variation bounds to samples with
infinity bounds. To achieve improved dependence on $d$, we present a
"soft-threshold" version of the Dikin walk which may be of independent
interest. Plugging our algorithm into the framework of the exponential
mechanism yields similar improvements in the running time of $\varepsilon$-pure
differentially private algorithms for optimization problems such as empirical
risk minimization of Lipschitz-convex functions and low-rank approximation,
while still achieving the tightest known utility bounds.
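For intuition about the sampler underlying these results, the sketch below implements one step of a standard Metropolis-adjusted Dikin walk for $\pi(\theta)\propto e^{-f(\theta)}$ on a polytope $\{x : Ax\le b\}$. It is a minimal illustration of the kind of interior-point Markov chain the paper builds on, not the paper's soft-threshold variant or its TV-to-infinity-distance conversion; the function name `dikin_step`, the `step` radius parameter, and the use of NumPy are illustrative choices, and the polytope is assumed to be full-dimensional so the barrier Hessian is positive definite.

```python
import numpy as np

def dikin_step(theta, f, A, b, step=0.1, rng=None):
    """One Metropolis-adjusted Dikin-walk step for pi(x) ~ exp(-f(x)) on {x : Ax <= b}.

    Simplified, standard Dikin walk for illustration only; `theta` must lie
    strictly inside the polytope and A is assumed to have full column rank.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = theta.shape[0]

    def hessian(x):
        # Hessian of the log-barrier: sum_i a_i a_i^T / (b_i - a_i^T x)^2
        s = b - A @ x                      # slacks, positive in the interior
        return (A / s[:, None] ** 2).T @ A

    def log_proposal(x, y, H):
        # log density (up to constants) of N(x, (step^2/d) H(x)^{-1}) at y
        diff = y - x
        _, logdet = np.linalg.slogdet(H)
        return 0.5 * logdet - (d / (2 * step ** 2)) * diff @ H @ diff

    H_cur = hessian(theta)
    # Propose from the Dikin ellipsoid: Gaussian with covariance (step^2/d) H^{-1}
    L = np.linalg.cholesky(np.linalg.inv(H_cur))
    prop = theta + (step / np.sqrt(d)) * (L @ rng.standard_normal(d))

    if np.any(b - A @ prop <= 0):          # proposal left the polytope: reject
        return theta

    H_prop = hessian(prop)
    # Metropolis-Hastings correction toward the target exp(-f)
    log_ratio = (-f(prop) + f(theta)
                 + log_proposal(prop, theta, H_prop)
                 - log_proposal(theta, prop, H_cur))
    return prop if np.log(rng.uniform()) < log_ratio else theta
```

Because the proposal covariance adapts to the local barrier geometry, steps shrink automatically near facets of $K$, which is what lets such walks mix without explicitly discretizing the polytope.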
Related papers
- Testing Identity of Distributions under Kolmogorov Distance in Polylogarithmic Space [1.2277343096128712]
In this paper, we show that much less space suffices -- we give an algorithm that uses space $O(\log^4 \varepsilon^{-1})$ in the streaming setting.
We also state 9 related open problems that we hope will spark interest in this and related problems.
arXiv Detail & Related papers (2024-10-29T15:24:27Z) - Robust Distribution Learning with Local and Global Adversarial Corruptions [17.22168727622332]
We develop an efficient finite-sample algorithm with error bounded by $\sqrt{\varepsilon k} + \rho + \tilde{O}(d\sqrt{k}\,n^{-1/(k \lor 2)})$ when $P$ has bounded covariance.
Our efficient procedure relies on a novel trace norm approximation of an ideal yet intractable 2-Wasserstein projection estimator.
arXiv Detail & Related papers (2024-06-10T17:48:36Z) - Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity [22.615156512223763]
We propose the stochastic variance-reduced optimistic gradient sliding (SVOGS) method, which takes advantage of the finite-sum structure in the objective.
We prove communication rounds of $\mathcal{O}(\delta D^2/\varepsilon)$, communication complexity of $\mathcal{O}(n+\sqrt{n}\,\delta D^2/\varepsilon)$, and local calls of $\tilde{\mathcal{O}}(n+(\sqrt{n}\,\delta+L)D^2/\varepsilon)$.
arXiv Detail & Related papers (2024-05-25T08:34:49Z) - Near Sample-Optimal Reduction-based Policy Learning for Average Reward
MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}(H\,\varepsilon^{-3}\ln\frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z) - Faster Sampling from Log-Concave Distributions over Polytopes via a
Soft-Threshold Dikin Walk [28.431572772564518]
We consider the problem of sampling from a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a polytope $K$ defined by $m$ inequalities.
Our main result is a "soft-threshold" variant of the Dikin walk Markov chain that requires at most $O((md + dL^2R^2)\times md^{\omega-1}\log(\frac{w}{\delta}))$ arithmetic operations to sample from $\pi$.
arXiv Detail & Related papers (2022-06-19T11:33:07Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - On Robust Optimal Transport: Computational Complexity, Low-rank
Approximation, and Barycenter Computation [14.80695185915604]
We consider two robust versions of optimal transport, named Robust Semi-constrained Optimal Transport (RSOT) and Robust Unconstrained Optimal Transport (ROT).
For both problems in the discrete setting, we propose Sinkhorn-based algorithms that produce $\varepsilon$-approximations of RSOT and ROT in $\widetilde{\mathcal{O}}(\frac{n^2}{\varepsilon})$ time.
arXiv Detail & Related papers (2021-02-13T03:55:52Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that, for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, a discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we give an algorithm with improved sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of $M$ such that $\det\left(\sum_{i \in S} v_i v_i^\top\right)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
arXiv Detail & Related papers (2020-04-16T19:16:38Z) - Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to the best such distribution.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms result in exponential improvements on the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)