Convergence of denoising diffusion models under the manifold hypothesis
- URL: http://arxiv.org/abs/2208.05314v2
- Date: Mon, 29 May 2023 11:12:13 GMT
- Title: Convergence of denoising diffusion models under the manifold hypothesis
- Authors: Valentin De Bortoli
- Abstract summary: Denoising diffusion models are a recent class of generative models exhibiting state-of-the-art performance in image and audio synthesis.
This paper provides the first convergence results for diffusion models in a more general setting.
- Score: 3.096615629099617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Denoising diffusion models are a recent class of generative models exhibiting
state-of-the-art performance in image and audio synthesis. Such models
approximate the time-reversal of a forward noising process from a target
distribution to a reference density, which is usually Gaussian. Despite their
strong empirical results, the theoretical analysis of such models remains
limited. In particular, all current approaches crucially assume that the target
distribution admits a density w.r.t. the Lebesgue measure. This does not cover
settings where the target distribution is supported on a lower-dimensional
manifold or is given by some empirical distribution. In this paper, we bridge
this gap by providing the first convergence results for diffusion models in
this more general setting. In particular, we provide quantitative bounds on the
Wasserstein distance of order one between the target data distribution and the
generative distribution of the diffusion model.
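The setup the abstract describes (a forward noising process reversed via its score, with the output quality measured in Wasserstein-1) can be illustrated in a hedged one-dimensional toy. The schedule, score formula, and two-point target below are illustrative choices, not the paper's construction:

```python
import numpy as np

# Hypothetical toy illustration (not the paper's construction): a variance-
# preserving forward process X_k = sqrt(1-beta)*X_{k-1} + sqrt(beta)*eps applied
# to a target supported on the 0-dimensional "manifold" {-1, +1}, then reversed
# with the exact score of the resulting Gaussian-mixture marginals.
rng = np.random.default_rng(0)
T, beta = 200, 0.02
alpha_bar = (1.0 - beta) ** np.arange(1, T + 1)   # signal retained after k steps
centers = np.array([-1.0, 1.0])                   # target: uniform on {-1, +1}

def score(x, t):
    """Score of the marginal 0.5 * sum_c N(sqrt(ab_t)*c, 1 - ab_t)."""
    ab = alpha_bar[t]
    mu, var = np.sqrt(ab) * centers, 1.0 - ab
    logw = -((x[:, None] - mu) ** 2) / (2.0 * var)
    logw -= logw.max(axis=1, keepdims=True)       # stabilize the softmax weights
    w = np.exp(logw)
    w /= w.sum(axis=1, keepdims=True)
    return (w @ mu - x) / var                     # gradient of log of the mixture

x = rng.standard_normal(5000)                     # initialize at the reference N(0,1)
for t in range(T - 1, -1, -1):                    # Euler scheme for the reverse SDE
    x = x + beta * (0.5 * x + score(x, t))
    if t > 0:
        x = x + np.sqrt(beta) * rng.standard_normal(x.shape)

# Empirical Wasserstein-1 distance via sorted samples (the exact coupling in 1D)
target = rng.choice(centers, size=x.shape)
w1 = float(np.abs(np.sort(x) - np.sort(target)).mean())
print(round(w1, 3))
```

Note that the two-point target has no Lebesgue density, which is exactly the regime the paper's Wasserstein-1 bounds are designed to cover; the analysis of prior work would not apply to it.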
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantees with explicit dimensional dependencies for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - How Discrete and Continuous Diffusion Meet: Comprehensive Analysis of Discrete Diffusion Models via a Stochastic Integral Framework [11.71206628091551]
We propose a comprehensive framework for the error analysis of discrete diffusion models based on Lévy-type integrals.
Our framework unifies and strengthens the current theoretical results on discrete diffusion models.
arXiv Detail & Related papers (2024-10-04T16:59:29Z) - Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis [56.442307356162864]
We study the theoretical aspects of score-based discrete diffusion models under the Continuous Time Markov Chain (CTMC) framework.
We introduce a discrete-time sampling algorithm in the general state space $[S]^d$ that utilizes score estimators at predefined time points.
Our convergence analysis employs a Girsanov-based method and establishes key properties of the discrete score function.
arXiv Detail & Related papers (2024-10-03T09:07:13Z) - Unraveling the Smoothness Properties of Diffusion Models: A Gaussian Mixture Perspective [18.331374727331077]
We provide a theoretical understanding of the Lipschitz continuity and second-moment properties of the diffusion process.
Our results provide deeper theoretical insights into the dynamics of the diffusion process under common data distributions.
arXiv Detail & Related papers (2024-05-26T03:32:27Z) - Unveil Conditional Diffusion Models with Classifier-free Guidance: A Sharp Statistical Theory [87.00653989457834]
Conditional diffusion models serve as the foundation of modern image synthesis and find extensive application in fields like computational biology and reinforcement learning.
Despite the empirical success, the theory of conditional diffusion models is largely missing.
This paper bridges the gap by presenting a sharp statistical theory of distribution estimation using conditional diffusion models.
arXiv Detail & Related papers (2024-03-18T17:08:24Z) - On the Generalization Properties of Diffusion Models [33.93850788633184]
This work embarks on a comprehensive theoretical exploration of the generalization attributes of diffusion models.
We establish theoretical estimates of the generalization gap that evolves in tandem with the training dynamics of score-based diffusion models.
We extend our quantitative analysis to a data-dependent scenario, wherein target distributions are portrayed as a succession of densities.
arXiv Detail & Related papers (2023-11-03T09:20:20Z) - Soft Mixture Denoising: Beyond the Expressive Bottleneck of Diffusion Models [76.46246743508651]
We show that current diffusion models actually have an expressive bottleneck in backward denoising.
We introduce soft mixture denoising (SMD), an expressive and efficient model for backward denoising.
arXiv Detail & Related papers (2023-09-25T12:03:32Z) - Diffusion Models are Minimax Optimal Distribution Estimators [49.47503258639454]
We provide the first rigorous analysis of the approximation and generalization abilities of diffusion modeling.
We show that when the true density function belongs to the Besov space and the empirical score matching loss is properly minimized, the generated data distribution achieves nearly minimax-optimal estimation rates.
arXiv Detail & Related papers (2023-03-03T11:31:55Z) - How Much is Enough? A Study on Diffusion Times in Score-based Generative Models [76.76860707897413]
Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution.
We show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process.
arXiv Detail & Related papers (2022-06-10T15:09:46Z)
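The trade-off this last entry describes, choosing the diffusion time T so that the forward marginal is close to the reference noise, can be checked numerically in a hedged toy setting. For a point mass at 1 under a constant-beta variance-preserving schedule (an illustrative choice, not the paper's setup), the KL divergence between the time-T forward marginal and the N(0, 1) reference shrinks as T grows:

```python
import math

# Hypothetical check (illustrative schedule, not the paper's): for a point-mass
# target at x0 = 1 under a variance-preserving forward process with constant
# beta, the time-T marginal is N(sqrt(ab_T), 1 - ab_T) with ab_T = (1 - beta)^T.
beta = 0.02

def kl_to_std_normal(T):
    """KL( N(mu, var) || N(0, 1) ) = 0.5 * (var + mu^2 - 1 - log var)."""
    ab = (1.0 - beta) ** T
    mu_sq, var = ab, 1.0 - ab      # mu^2 = ab since x0 = 1
    return 0.5 * (var + mu_sq - 1.0 - math.log(var))

for T in (10, 100, 500):
    print(T, kl_to_std_normal(T))
```

A larger T drives this divergence toward zero, which is why common practice favors long diffusion times; the auxiliary-model idea in the entry above instead bridges the residual gap left by a shorter T.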
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.