Fast parallel sampling under isoperimetry
- URL: http://arxiv.org/abs/2401.09016v1
- Date: Wed, 17 Jan 2024 07:24:18 GMT
- Title: Fast parallel sampling under isoperimetry
- Authors: Nima Anari, Sinho Chewi, Thuy-Duong Vuong
- Abstract summary: We show how to sample in parallel from a distribution $\pi$ over $\mathbb{R}^d$ that satisfies a log-Sobolev inequality.
We show that our algorithm outputs samples from a distribution $\hat\pi$ that is close to $\pi$ in Kullback--Leibler divergence.
- Score: 10.619901778151336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show how to sample in parallel from a distribution $\pi$ over $\mathbb
R^d$ that satisfies a log-Sobolev inequality and has a smooth log-density, by
parallelizing the Langevin (resp. underdamped Langevin) algorithms. We show
that our algorithm outputs samples from a distribution $\hat\pi$ that is close
to $\pi$ in Kullback--Leibler (KL) divergence (resp. total variation (TV)
distance), while using only $\log(d)^{O(1)}$ parallel rounds and
$\widetilde{O}(d)$ (resp. $\widetilde O(\sqrt d)$) gradient evaluations in
total. These are the first parallel sampling algorithms with TV distance
guarantees.
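As a rough illustration of the parallelization idea (not the paper's exact scheme, and with placeholder step sizes, block lengths, and iteration counts), the sketch below runs overdamped Langevin over a block of sub-steps and refines the whole block by Picard-style fixed-point iterations; the per-iteration gradient evaluations are the part that would run in parallel. All function names here are hypothetical.

```python
import numpy as np

def parallel_langevin_block(x0, grad_V, h, m, picard_iters, rng):
    """One block of overdamped Langevin dynamics over m sub-steps of size h,
    approximated by Picard-style fixed-point iterations that evaluate all m
    gradients "in parallel" (vectorized here) at each iteration.

    Illustrative sketch only; the paper's parameter choices are what yield
    polylog(d) parallel rounds and roughly O(d) total gradient evaluations.
    """
    d = x0.shape[0]
    # Pre-draw the Brownian increments for the whole block.
    xi = rng.standard_normal((m, d)) * np.sqrt(2.0 * h)
    # Initial guess: keep the trajectory constant at x0.
    traj = np.tile(x0, (m + 1, 1))
    for _ in range(picard_iters):
        grads = np.array([grad_V(traj[k]) for k in range(m)])  # parallelizable across k
        # Fixed-point update: x_{k+1} = x_0 - h * sum_{j<=k} grad_V(x_j) + sum_{j<=k} xi_j
        drift = -h * np.cumsum(grads, axis=0)
        noise = np.cumsum(xi, axis=0)
        traj[1:] = x0 + drift + noise
    return traj[-1]

def parallel_langevin(x0, grad_V, h, m, n_blocks, picard_iters, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_blocks):
        x = parallel_langevin_block(x, grad_V, h, m, picard_iters, rng)
    return x
```

A quick smoke test could be `parallel_langevin(np.zeros(2), lambda x: x, h=0.01, m=20, n_blocks=100, picard_iters=8)`, which targets a standard Gaussian; the block length is kept small so the fixed-point iteration contracts.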
For our main application, we show how to combine the TV distance guarantees
of our algorithms with prior works and obtain RNC sampling-to-counting
reductions for families of discrete distributions on the hypercube $\{\pm 1\}^n$
that are closed under exponential tilts and have bounded covariance.
Consequently, we obtain an RNC sampler for directed Eulerian tours and
asymmetric determinantal point processes, resolving open questions raised in
prior works.
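For context, the classical sequential sampling-to-counting reduction on the hypercube fixes one coordinate at a time using conditional probabilities computed from a counting oracle; the paper's contribution is an RNC (polylog-round parallel) version of such a reduction for families closed under exponential tilts with bounded covariance. The sketch below shows only the standard sequential baseline, with `log_partition` an assumed oracle.

```python
import numpy as np

def sequential_sample(n, log_partition, rng):
    """Classic sequential sampling-to-counting reduction on {-1,+1}^n.

    `log_partition(pins)` is an assumed counting oracle returning the log
    partition function of the target distribution with coordinates pinned as
    in `pins` (dict: index -> +/-1). This is the sequential analogue, not the
    paper's parallel (RNC) reduction.
    """
    pins = {}
    for i in range(n):
        lp_plus = log_partition({**pins, i: +1})
        lp_minus = log_partition({**pins, i: -1})
        # Conditional probability of x_i = +1 given the pins chosen so far.
        p_plus = 1.0 / (1.0 + np.exp(lp_minus - lp_plus))
        pins[i] = +1 if rng.random() < p_plus else -1
    return np.array([pins[i] for i in range(n)])
```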
Related papers
- Rényi-infinity constrained sampling with $d^3$ membership queries [2.209921757303168]
We propose a constrained proximal sampler, a principled and simple algorithm that possesses elegant convergence guarantees.
We show that it converges in the Rényi-infinity divergence ($\mathcal{R}_\infty$) with no query complexity overhead when starting from a warm start.
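For orientation, a generic proximal sampler alternates a forward Gaussian step with a backward "restricted Gaussian oracle" (RGO) step. The sketch below shows only that outer loop, with the RGO left as an assumed callback; it does not reproduce the paper's constrained variant or its Rényi-infinity analysis.

```python
import numpy as np

def proximal_sampler(x0, eta, n_iters, rgo, rng):
    """Generic proximal sampler loop (alternating Gaussian / RGO steps).

    `rgo(y)` is an assumed oracle returning a sample from the distribution
    proportional to exp(-V(x) - ||x - y||^2 / (2*eta)); implementing it (and
    handling the constraint set in the constrained variant) is where the
    actual algorithmic work lies.
    """
    x = x0.copy()
    for _ in range(n_iters):
        y = x + np.sqrt(eta) * rng.standard_normal(x.shape)   # y | x ~ N(x, eta*I)
        x = rgo(y)                                            # x | y via the restricted Gaussian oracle
    return x
```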
arXiv Detail & Related papers (2024-07-17T19:20:08Z) - Faster Diffusion Sampling with Randomized Midpoints: Sequential and Parallel [10.840582511203024]
We show that our algorithm can be parallelized to run in only $\widetilde{O}(\log^2 d)$ parallel rounds.
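A minimal sketch of the randomized-midpoint idea, applied here to plain overdamped Langevin rather than to diffusion-model sampling and without the paper's parallel variant: the drift over each step is estimated at a uniformly random intermediate time, with the midpoint and the full step sharing the same Brownian path.

```python
import numpy as np

def randomized_midpoint_langevin(x0, grad_V, h, n_steps, seed=0):
    """Randomized-midpoint discretization of dX_t = -grad V(X_t) dt + sqrt(2) dB_t.

    Rough sketch of the technique only; the cited paper applies it to
    diffusion-model sampling and also gives a parallel version.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    d = x0.shape[0]
    for _ in range(n_steps):
        alpha = rng.random()                                     # random midpoint in [0, h]
        b1 = rng.standard_normal(d) * np.sqrt(alpha * h)         # Brownian increment on [0, alpha*h]
        b2 = rng.standard_normal(d) * np.sqrt((1 - alpha) * h)   # increment on [alpha*h, h]
        x_mid = x - alpha * h * grad_V(x) + np.sqrt(2.0) * b1
        x = x - h * grad_V(x_mid) + np.sqrt(2.0) * (b1 + b2)     # full step uses the midpoint gradient
    return x
```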
arXiv Detail & Related papers (2024-06-03T01:34:34Z) - Perfect Sampling from Pairwise Comparisons [26.396901523831534]
We study how to efficiently obtain perfect samples from a discrete distribution $\mathcal{D}$ given access only to pairwise comparisons of elements of its support.
We design a Markov chain whose stationary distribution coincides with $\mathcal{D}$ and give an algorithm to obtain exact samples using the technique of Coupling from the Past.
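For illustration, a brute-force Coupling-from-the-Past loop for a finite chain is sketched below; `step` is an assumed deterministic update rule driven by shared randomness (in the paper it would be built from pairwise-comparison queries), and practical implementations avoid tracking every state by exploiting monotonicity.

```python
import numpy as np

def cftp_sample(n_states, step, seed=0):
    """Coupling from the Past for a finite Markov chain (brute-force version).

    `step(state, u)` is an assumed update rule: given a state and a uniform
    random number u it returns the next state, so that the same u drives every
    chain (a grand coupling). If all chains started in the past coalesce by
    time 0, the common state is an exact sample from the stationary law.
    """
    rng = np.random.default_rng(seed)
    randomness = []          # shared randomness for times -1, -2, ... (reused across restarts)
    T = 1
    while True:
        while len(randomness) < T:
            randomness.append(rng.random())
        states = list(range(n_states))
        for t in range(T - 1, -1, -1):       # randomness[t] drives the step at time -(t+1)
            u = randomness[t]
            states = [step(s, u) for s in states]
        if len(set(states)) == 1:            # all chains coalesced -> exact sample
            return states[0]
        T *= 2                               # go further into the past and retry
```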
arXiv Detail & Related papers (2022-11-23T11:20:30Z) - Sharper Convergence Guarantees for Asynchronous SGD for Distributed and
Federated Learning [77.22019100456595]
We study a training algorithm for distributed computation workers with varying communication frequencies.
In this work, we obtain a tighter convergence rate of $\mathcal{O}(\sigma^2\epsilon^{-2} + \cdots)$.
We also show that the heterogeneity term in the rate is affected by the average delay within each worker.
arXiv Detail & Related papers (2022-06-16T17:10:57Z) - Optimal Sublinear Sampling of Spanning Trees and Determinantal Point
Processes via Average-Case Entropic Independence [3.9586758145580014]
We design fast algorithms for repeatedly sampling from strongly Rayleigh distributions.
For a graph $G=(V, E)$, we show how to approximately sample uniformly random spanning trees from $G$ in $\widetilde{O}(\lvert V\rvert)$ time per sample.
For a determinantal point process on subsets of size $k$ of a ground set of $n$ elements, we show how to approximately sample in $\widetilde{O}(k^{\omega})$ time after an initial $\widetilde{O}(nk\cdots)$ preprocessing step.
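As a point of reference, the classical exact algorithm for uniform spanning trees is Wilson's loop-erased random walk; it is not the paper's sublinear-time method, but it shows what is being sampled.

```python
import random

def wilson_uniform_spanning_tree(adj, root, seed=0):
    """Wilson's algorithm: an exactly uniform spanning tree of a connected
    undirected graph via loop-erased random walks (classical baseline, not the
    cited paper's approximate sampler).

    `adj` maps each vertex to a list of its neighbours.
    """
    rng = random.Random(seed)
    in_tree = {root}
    parent = {}
    for start in adj:
        if start in in_tree:
            continue
        # Random walk from `start` until the current tree is hit; recording the
        # last exit from every visited vertex performs the loop erasure implicitly.
        u = start
        nxt = {}
        while u not in in_tree:
            nxt[u] = rng.choice(adj[u])
            u = nxt[u]
        # Retrace the loop-erased path and attach it to the tree.
        u = start
        while u not in in_tree:
            parent[u] = nxt[u]
            in_tree.add(u)
            u = nxt[u]
    return parent   # edges (v, parent[v]) for v != root form the spanning tree
```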
arXiv Detail & Related papers (2022-04-06T04:11:26Z) - Random quantum circuits transform local noise into global white noise [118.18170052022323]
We study the distribution over measurement outcomes of noisy random quantum circuits in the low-fidelity regime.
For local noise that is sufficiently weak and unital, correlations (measured by the linear cross-entropy benchmark) between the output distribution $p_{\text{noisy}}$ of a generic noisy circuit instance and that of the corresponding noiseless instance shrink exponentially.
If the noise is incoherent, the output distribution approaches the uniform distribution $p_{\text{unif}}$ at precisely the same rate.
arXiv Detail & Related papers (2021-11-29T19:26:28Z) - Complexity of zigzag sampling algorithm for strongly log-concave
distributions [6.336005544376984]
We study the computational complexity of zigzag sampling algorithm for strongly log-concave distributions.
We prove that the zigzag sampling algorithm achieves $\varepsilon$ error in chi-square divergence with a computational cost equivalent to $O(\cdots)$.
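For a concrete picture of the algorithm being analyzed, the sketch below simulates the zigzag process for a one-dimensional standard Gaussian target, where the switching times can be sampled exactly by inverting the integrated event rate; general targets require Poisson thinning, and none of the paper's complexity analysis is reflected here. The function name is hypothetical.

```python
import numpy as np

def zigzag_gaussian(x0, total_time, seed=0):
    """Zigzag sampler for the 1-D standard Gaussian (V(x) = x^2/2).

    The state moves with constant velocity theta in {-1,+1} and flips at
    events with rate lambda(t) = max(0, theta*x + t), whose first arrival can
    be sampled in closed form. Returns the switch points of the trajectory.
    """
    rng = np.random.default_rng(seed)
    x, theta, t = float(x0), 1.0, 0.0
    events = [(t, x, theta)]
    while t < total_time:
        a = theta * x
        E = rng.exponential(1.0)
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * E)   # exact switching time
        x += theta * tau                                  # deterministic motion between events
        t += tau
        theta = -theta                                    # flip direction at the event
        events.append((t, x, theta))
    return events
```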
arXiv Detail & Related papers (2020-12-21T03:10:21Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is ...
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and
Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
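A minimal sketch of the asynchronous Q-learning protocol being analyzed, with an assumed uniform behaviour policy and a constant step size; the paper's bound concerns how the minimum occupancy probability $\mu_{\min}$ and the mixing time $t_{\mathrm{mix}}$ of the sampled trajectory enter the sample complexity, which this sketch does not reproduce.

```python
import numpy as np

def asynchronous_q_learning(P, R, gamma, eta, n_steps, seed=0):
    """Asynchronous Q-learning along a single trajectory: only the currently
    visited (state, action) entry is updated at each step.

    P has shape (S, A, S) (transition kernel), R has shape (S, A) (rewards).
    Behaviour policy and step size are placeholder assumptions.
    """
    rng = np.random.default_rng(seed)
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    s = rng.integers(S)
    for _ in range(n_steps):
        a = rng.integers(A)                              # behaviour policy: uniform over actions
        s_next = rng.choice(S, p=P[s, a])
        target = R[s, a] + gamma * Q[s_next].max()       # TD target
        Q[s, a] += eta * (target - Q[s, a])              # update only the visited entry
        s = s_next
    return Q
```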
arXiv Detail & Related papers (2020-06-04T17:51:00Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in S} v_i v_i^\top)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
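For intuition about the objective, the sketch below runs a naive greedy heuristic for the cardinality-constrained special case (the uniform matroid), using the rank-one identity $\det(M + vv^\top) = \det(M)(1 + v^\top M^{-1} v)$; it carries none of the approximation guarantees of the paper's matroid-constrained algorithms.

```python
import numpy as np

def greedy_logdet(vectors, k, ridge=1e-6):
    """Greedy heuristic for max det(sum_{i in S} v_i v_i^T) over |S| = k.

    A small ridge term keeps the matrix invertible before d vectors are chosen.
    Uniform-matroid toy case only; no approximation guarantee is claimed.
    """
    d = vectors.shape[1]
    M = ridge * np.eye(d)
    chosen = []
    remaining = set(range(len(vectors)))
    for _ in range(k):
        Minv = np.linalg.inv(M)
        # det(M + v v^T) = det(M) * (1 + v^T M^{-1} v): pick the largest increment.
        best = max(remaining, key=lambda i: vectors[i] @ Minv @ vectors[i])
        M += np.outer(vectors[best], vectors[best])
        chosen.append(best)
        remaining.remove(best)
    return chosen, np.linalg.slogdet(M)[1]
```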
arXiv Detail & Related papers (2020-04-16T19:16:38Z) - Locally Private Hypothesis Selection [96.06118559817057]
We output a distribution from $\mathcal{Q}$ whose total variation distance to $p$ is comparable to that of the best such distribution.
We show that the constraint of local differential privacy incurs an exponential increase in cost.
Our algorithms result in exponential improvements on the round complexity of previous methods.
arXiv Detail & Related papers (2020-02-21T18:30:48Z)