High-accuracy sampling for diffusion models and log-concave distributions
- URL: http://arxiv.org/abs/2602.01338v1
- Date: Sun, 01 Feb 2026 17:05:31 GMT
- Title: High-accuracy sampling for diffusion models and log-concave distributions
- Authors: Fan Chen, Sinho Chewi, Constantinos Daskalakis, Alexander Rakhlin,
- Abstract summary: We present algorithms for diffusion model sampling which obtain $δ$-error in $\mathrm{polylog}(1/δ)$ steps. Our approach also yields the first $\mathrm{polylog}(1/δ)$ complexity sampler for general log-concave distributions.
- Score: 70.90863485771405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present algorithms for diffusion model sampling which obtain $δ$-error in $\mathrm{polylog}(1/δ)$ steps, given access to $\widetilde O(δ)$-accurate score estimates in $L^2$. This is an exponential improvement over all previous results. Specifically, under minimal data assumptions, the complexity is $\widetilde O(d\,\mathrm{polylog}(1/δ))$ where $d$ is the dimension of the data; under a non-uniform $L$-Lipschitz condition, the complexity is $\widetilde O(\sqrt{dL}\,\mathrm{polylog}(1/δ))$; and if the data distribution has intrinsic dimension $d_\star$, then the complexity reduces to $\widetilde O(d_\star\,\mathrm{polylog}(1/δ))$. Our approach also yields the first $\mathrm{polylog}(1/δ)$ complexity sampler for general log-concave distributions using only gradient evaluations.
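For orientation, the sketch below is a baseline Euler-Maruyama discretization of the reverse SDE for the standard OU forward process, driven by an assumed score-estimate callable `score(x, t)`. It illustrates the access model (score queries only), not the paper's $\mathrm{polylog}(1/δ)$ algorithm, whose exponential speedup requires a more careful scheme.

```python
import numpy as np

def reverse_sde_sample(score, d, T=5.0, n_steps=500, rng=None):
    """Euler-Maruyama discretization of the reverse SDE for the OU
    forward process dX_t = -X_t dt + sqrt(2) dB_t.

    `score(x, t)` stands in for an L^2-accurate estimate of
    grad log p_t(x); the horizon T and step count are illustrative.
    """
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(d)  # p_T is close to N(0, I) for large T
    h = T / n_steps
    for k in range(n_steps):
        t = T - k * h  # integrate the reverse process from t = T down to 0
        x = x + h * (x + 2.0 * score(x, t)) + np.sqrt(2.0 * h) * rng.standard_normal(d)
    return x

# Sanity check: with the exact score of N(0, I) data, score(x, t) = -x,
# the output should be an approximate standard Gaussian draw.
print(reverse_sde_sample(lambda x, t: -x, d=2))
```

As a quick sanity check, with the exact score of standard Gaussian data the reverse dynamics reduce to an OU process with stationary law $N(0, I)$, so the returned sample should look standard Gaussian.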
Related papers
- Instance-optimal high-precision shadow tomography with few-copy measurements: A metrological approach [2.956729394666618]
We study the sample complexity of shadow tomography in the high-precision regime. We use possibly adaptive measurements that act on $O(\mathrm{polylog}(d))$ copies of the unknown state at a time.
arXiv Detail & Related papers (2026-02-04T19:00:00Z)
- Information-Computation Tradeoffs for Noiseless Linear Regression with Oblivious Contamination [65.37519531362157]
We show that any efficient Statistical Query algorithm for this task requires VSTAT complexity at least $\tilde{\Omega}(d^{1/2}/\alpha^2)$.
arXiv Detail & Related papers (2025-10-12T15:42:44Z)
- Improved sampling algorithms and Poincaré inequalities for non-log-concave distributions [2.959331374750817]
We show that the query complexity of sampling from $\mu$ can be $\mathrm{poly}(L,d)\cdot\left(\frac{LMd}{\epsilon^2}\right)^{\mathcal{O}(L+1)}$, which is polynomial in $d$ and $\frac{1}{\epsilon}$ when $L=\mathcal{O}(1)$ and $M=\mathrm{poly}(d)$.
arXiv Detail & Related papers (2025-07-15T12:06:11Z)
- On the query complexity of sampling from non-log-concave distributions [2.4253233571593547]
We study the problem of sampling from a $d$-dimensional distribution with density $p(x)\propto e^{-f(x)}$, which does not necessarily satisfy good isoperimetric conditions. We show that for a wide range of parameters, sampling is strictly easier than optimization by a super-exponential factor in the dimension $d$.
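As a minimal illustration of the gradient-query access model in this entry (not the paper's algorithm), the unadjusted Langevin algorithm below targets $p(x)\propto e^{-f(x)}$ with one gradient query per step; the step size and the double-well potential in the demo are illustrative assumptions.

```python
import numpy as np

def ula_sample(grad_f, d, n_steps=10_000, h=1e-2, rng=None):
    """Unadjusted Langevin algorithm: one gradient query to f per step.

    Approximately targets p(x) ∝ exp(-f(x)); accuracy depends on the
    step size h and on isoperimetric properties that non-log-concave
    targets may lack, which is exactly the regime this entry studies.
    """
    rng = np.random.default_rng(rng)
    x = np.zeros(d)
    for _ in range(n_steps):
        x = x - h * grad_f(x) + np.sqrt(2.0 * h) * rng.standard_normal(d)
    return x

# Demo with a (hypothetical) double-well potential f(x) = ||x||^4/4 - ||x||^2/2.
print(ula_sample(lambda x: (x @ x) * x - x, d=2))
```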
arXiv Detail & Related papers (2025-02-10T06:54:16Z)
- Sampling from multi-modal distributions with polynomial query complexity in fixed dimension via reverse diffusion [16.463220658992064]
We provide the first sampling algorithm for a broad class of distributions. Our algorithm simulates a time-reversed diffusion process. It avoids metastability, requires no prior knowledge of the mode locations, and relaxes the well-known log-smoothness assumption.
arXiv Detail & Related papers (2024-12-31T17:51:39Z)
- Statistical-Computational Trade-offs for Density Estimation [60.81548752871115]
We show that for a broad class of data structures, their bounds cannot be significantly improved.
This is a novel statistical-computational trade-off for density estimation.
arXiv Detail & Related papers (2024-10-30T15:03:33Z)
- Iterative thresholding for non-linear learning in the strong $\varepsilon$-contamination model [3.309767076331365]
We derive approximation bounds for learning single neuron models using thresholded descent.
We also study the linear regression problem, where $\sigma(\mathbf{x}) = \mathbf{x}$.
arXiv Detail & Related papers (2024-09-05T16:59:56Z)
- Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*(\langle\boldsymbol{x},\boldsymbol{\theta}\rangle)$. We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ with a complexity that is not governed by information exponents.
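As a concrete instance of the setup in this entry (with assumed choices of activation, width, and step size, and not the paper's specific algorithm or rates), the sketch below trains a two-layer network with online SGD on a single-index target.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, n_steps, lr = 32, 64, 20_000, 0.05

# Single-index target f_*(x) = sigma_*(<x, theta>); taking sigma_* to be
# the degree-2 Hermite polynomial He2(z) = z^2 - 1 is an illustrative choice.
theta = rng.standard_normal(d); theta /= np.linalg.norm(theta)
sigma_star = lambda z: z**2 - 1.0

# Two-layer ReLU network: trainable first layer W, fixed second layer a.
W = rng.standard_normal((width, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=width) / width

for _ in range(n_steps):
    x = rng.standard_normal(d)          # fresh Gaussian sample (online SGD)
    y = sigma_star(x @ theta)
    pre = W @ x
    err = a @ np.maximum(pre, 0.0) - y
    # gradient of 0.5 * err^2 w.r.t. W (ReLU subgradient)
    W -= lr * err * (a * (pre > 0.0))[:, None] * x[None, :]

# Report squared error on a fresh batch.
X = rng.standard_normal((1000, d))
preds = np.maximum(X @ W.T, 0.0) @ a
print(float(np.mean((preds - sigma_star(X @ theta))**2)))
```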
arXiv Detail & Related papers (2024-06-03T17:56:58Z)
- Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z)
- List-Decodable Sparse Mean Estimation via Difference-of-Pairs Filtering [42.526664955704746]
We develop a novel, conceptually simpler technique for list-decodable sparse mean estimation.
In particular, for distributions with "certifiably bounded" $t$-th moments in $k$-sparse directions, our algorithm achieves error of $(1/\alpha)^{O(1/t)}$ with sample complexity $m = (k\log(n))^{O(t)}/\alpha$ and running time $\mathrm{poly}(mn^t)$.
For the special case of Gaussian inliers, our algorithm achieves the optimal error guarantee of $\Theta(\sqrt{\log(1/\alpha)})$.
arXiv Detail & Related papers (2022-06-10T17:38:18Z)
- Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
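For context on the model-free setting in this entry, here is a minimal tabular Q-learning loop on a generic discounted MDP; the environment interface, step-size schedule, and toy demo are illustrative assumptions, not the paper's clipped-pseudo-regret algorithm.

```python
import numpy as np

def q_learning(env_step, S, A, gamma=0.9, eps=0.1, n_steps=50_000, rng=None):
    """Tabular Q-learning: model-free, one (s, a, r, s') transition per step.

    `env_step(s, a)` is an assumed sampler returning (reward, next_state).
    The 1/N(s, a) step size is a textbook choice; the paper's algorithm
    adds refinements to reach near-optimal sample complexity.
    """
    rng = np.random.default_rng(rng)
    Q = np.zeros((S, A))
    counts = np.zeros((S, A))
    s = 0
    for _ in range(n_steps):
        # epsilon-greedy behavior policy
        a = int(rng.integers(A)) if rng.random() < eps else int(np.argmax(Q[s]))
        r, s_next = env_step(s, a)
        counts[s, a] += 1
        # bootstrapped update toward r + gamma * max_a' Q(s', a')
        Q[s, a] += (r + gamma * np.max(Q[s_next]) - Q[s, a]) / counts[s, a]
        s = s_next
    return Q

# Toy random MDP (hypothetical) just to exercise the loop.
rng = np.random.default_rng(0)
S, A = 5, 3
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel, shape (S, A, S)
R = rng.random((S, A))                       # mean rewards
step = lambda s, a: (R[s, a], int(rng.choice(S, p=P[s, a])))
print(q_learning(step, S, A))
```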
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.