On Error Propagation of Diffusion Models
- URL: http://arxiv.org/abs/2308.05021v3
- Date: Thu, 18 Jan 2024 18:18:59 GMT
- Title: On Error Propagation of Diffusion Models
- Authors: Yangming Li, Mihaela van der Schaar
- Abstract summary: We develop a theoretical framework to mathematically formulate error propagation in the architecture of DMs.
We apply the cumulative error as a regularization term to reduce error propagation.
Our proposed regularization reduces error propagation, significantly improves vanilla DMs, and outperforms previous baselines.
- Score: 77.91480554418048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although diffusion models (DMs) have shown promising performance in a number of tasks (e.g., speech synthesis and image generation), they might suffer from error propagation because of their sequential structure. However, this is not certain, because some sequential models, such as Conditional Random Fields (CRFs), are free from this problem. To address this issue, we develop a theoretical framework to mathematically formulate error propagation in the architecture of DMs. The framework contains three elements: modular error, cumulative error, and the propagation equation. The modular and cumulative errors are related by the propagation equation, which shows that DMs are indeed affected by error propagation. Our theoretical study also suggests that the cumulative error is closely related to the generation quality of DMs. Based on this finding, we apply the cumulative error as a regularization term to reduce error propagation. Because the term is computationally intractable, we derive its upper bound and design a bootstrap algorithm to efficiently estimate the bound for optimization. We have conducted extensive experiments on multiple image datasets, showing that our proposed regularization reduces error propagation, significantly improves vanilla DMs, and outperforms previous baselines.
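Read from the abstract alone, the idea amounts to adding one extra term to a standard denoising-diffusion training step. The sketch below is a minimal, hypothetical PyTorch illustration of that structure: `estimate_cumulative_error_bound` and its crude bootstrap over resampled timesteps stand in for the paper's actual bound estimator, whose details are not given in the abstract.

```python
import torch
import torch.nn.functional as F


def ddpm_loss(model, x0, alphas_bar):
    """Standard DDPM denoising loss: the model predicts the noise added at a
    uniformly sampled timestep t. `alphas_bar` is the 1-D cumulative-product
    noise schedule of length T."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_bar), (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_bar.to(x0.device)[t].view(b, *([1] * (x0.dim() - 1)))
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)


def estimate_cumulative_error_bound(model, x0, alphas_bar, n_bootstrap=4):
    """Hypothetical bootstrap estimate of an upper bound on the cumulative
    error: average the per-step (modular) error over a few independently
    resampled timesteps and noise draws. This is only a placeholder for the
    paper's estimator of its intractable upper bound."""
    errs = [ddpm_loss(model, x0, alphas_bar) for _ in range(n_bootstrap)]
    return torch.stack(errs).mean()


def training_step(model, optimizer, x0, alphas_bar, reg_weight=0.1):
    """One optimization step: base denoising loss plus the estimated
    cumulative-error bound used as a regularizer."""
    optimizer.zero_grad()
    base_loss = ddpm_loss(model, x0, alphas_bar)
    reg = estimate_cumulative_error_bound(model, x0, alphas_bar)
    loss = base_loss + reg_weight * reg
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```

In this reading, `reg_weight` simply trades off the usual denoising objective against the estimated cumulative-error bound; the sketch assumes a noise-prediction model with signature `model(x_t, t)`.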
Related papers
- Mitigating Embedding Collapse in Diffusion Models for Categorical Data [52.90687881724333]
We introduce CATDM, a continuous diffusion framework within the embedding space that stabilizes training.
Experiments on benchmarks show that CATDM mitigates embedding collapse, yielding superior results on FFHQ, LSUN Churches, and LSUN Bedrooms.
arXiv Detail & Related papers (2024-10-18T09:12:33Z) - Neural Approximate Mirror Maps for Constrained Diffusion Models [6.776705170481944]
Diffusion models excel at creating visually-convincing images, but they often struggle to meet subtle constraints inherent in the training data.
We propose neural approximate mirror maps (NAMMs) for general constraints.
A generative model, such as a mirror diffusion model (MDM), can then be trained in the learned mirror space and its samples restored to the constraint set by the inverse map.
arXiv Detail & Related papers (2024-06-18T17:36:09Z) - Amortizing intractable inference in diffusion models for vision, language, and control [89.65631572949702]
This paper studies amortized sampling of the posterior over data, $\mathbf{x} \sim p^{\rm post}(\mathbf{x}) \propto p(\mathbf{x})\,r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or function $r(\mathbf{x})$.
We prove the correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from this posterior.
arXiv Detail & Related papers (2024-05-31T16:18:46Z) - A note on the error analysis of data-driven closure models for large eddy simulations of turbulence [2.4548283109365436]
We provide a mathematical formulation for error propagation in flow trajectory prediction using data-driven turbulence closure modeling.
We retrieve an upper bound for the prediction error when utilizing a data-driven closure model.
Our analysis also shows that the error grows exponentially with the rollout time and with the upper bound on the system Jacobian (a schematic form of such a bound is sketched at the end of this list).
arXiv Detail & Related papers (2024-05-27T19:20:22Z) - Diffusion models for Gaussian distributions: Exact solutions and Wasserstein errors [0.0]
Diffusion or score-based models recently showed high performance in image generation.
We study theoretically the behavior of diffusion models and their numerical implementation when the data distribution is Gaussian.
arXiv Detail & Related papers (2024-05-23T07:28:56Z) - Exploring the Optimal Choice for Generative Processes in Diffusion Models: Ordinary vs Stochastic Differential Equations [6.2284442126065525]
We study the problem mathematically for two limiting scenarios: the zero diffusion (ODE) case and the large diffusion case.
Our findings indicate that when the perturbation occurs at the end of the generative process, the ODE model outperforms the SDE model with a large diffusion coefficient.
arXiv Detail & Related papers (2023-06-03T09:27:15Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z) - Partial Counterfactual Identification from Observational and Experimental Data [83.798237968683]
We develop effective Monte Carlo algorithms to approximate the optimal bounds from an arbitrary combination of observational and experimental data.
Our algorithms are validated extensively on synthetic and real-world datasets.
arXiv Detail & Related papers (2021-10-12T02:21:30Z)
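A schematic version of the exponential rollout-error bound referenced in the closure-model entry above (an illustration under standard assumptions, not that note's exact statement): if the per-step error obeys $\|e_{n+1}\| \le J\,\|e_n\| + \epsilon$, where $J$ is an upper bound on the norm of the system Jacobian and $\epsilon$ bounds the per-step closure-model error, then unrolling the recursion gives

$$\|e_n\| \;\le\; J^{n}\,\|e_0\| \;+\; \epsilon\,\frac{J^{n}-1}{J-1} \qquad (J>1),$$

so the bound grows exponentially in the rollout length $n$, at a rate set by the Jacobian bound $J$.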
This list is automatically generated from the titles and abstracts of the papers in this site.