Generative Diffusion Models for Lattice Field Theory
- URL: http://arxiv.org/abs/2311.03578v1
- Date: Mon, 6 Nov 2023 22:24:28 GMT
- Title: Generative Diffusion Models for Lattice Field Theory
- Authors: Lingxiao Wang, Gert Aarts and Kai Zhou
- Abstract summary: This study delves into the connection between machine learning and lattice field theory by linking generative diffusion models (DMs) with stochastic quantization.
We show that DMs can be conceptualized by reversing a stochastic process driven by the Langevin equation, which then produces samples from an initial distribution to approximate the target distribution.
- Score: 8.116039964888353
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study delves into the connection between machine learning and lattice
field theory by linking generative diffusion models (DMs) with stochastic
quantization, from a stochastic differential equation perspective. We show that
DMs can be conceptualized by reversing a stochastic process driven by the
Langevin equation, which then produces samples from an initial distribution to
approximate the target distribution. In a toy model, we highlight the
capability of DMs to learn effective actions. Furthermore, we demonstrate their
feasibility to act as a global sampler for generating configurations in the
two-dimensional $\phi^4$ quantum lattice field theory.
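As a concrete illustration of the stochastic-quantization side of this connection, the sketch below runs the forward Langevin dynamics $\mathrm{d}\phi = -\partial S/\partial\phi \,\mathrm{d}\tau + \sqrt{2}\,\mathrm{d}W$ for a 2D $\phi^4$ lattice action; a diffusion model would learn to reverse a noising counterpart of such a process. The action convention and the parameters `kappa` and `lam` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def phi4_drift(phi, kappa=0.22, lam=0.02):
    """Drift -dS/dphi for a 2D phi^4 lattice action with hopping kappa
    and quartic coupling lam (illustrative parameter values)."""
    neighbors = sum(np.roll(phi, s, axis=a) for a in (0, 1) for s in (+1, -1))
    return 2.0 * kappa * neighbors - 2.0 * phi - 4.0 * lam * phi * (phi**2 - 1.0)

def langevin_sample(shape=(16, 16), n_steps=20000, dt=1e-3, seed=0):
    """Stochastic-quantization sampling: evolve the Langevin equation
    d(phi) = -dS/dphi dtau + sqrt(2) dW to equilibrium in fictitious time."""
    rng = np.random.default_rng(seed)
    phi = rng.normal(size=shape)
    for _ in range(n_steps):
        noise = rng.normal(size=shape)
        phi += phi4_drift(phi) * dt + np.sqrt(2.0 * dt) * noise
    return phi

config = langevin_sample()
print("mean |phi| =", np.abs(config).mean())
```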
Related papers
- Bellman Diffusion: Generative Modeling as Learning a Linear Operator in the Distribution Space [72.52365911990935]
We introduce Bellman Diffusion, a novel deep generative model (DGM) framework that maintains linearity in MDPs through gradient and scalar field modeling.
Our results show that Bellman Diffusion achieves accurate field estimations and is a capable image generator, converging 1.5x faster than the traditional histogram-based baseline in distributional RL tasks.
arXiv Detail & Related papers (2024-10-02T17:53:23Z)
- Discrete generative diffusion models without stochastic differential equations: a tensor network approach [1.5839621757142595]
Diffusion models (DMs) are a class of generative machine learning methods.
We show how to use tensor networks (TNs) to efficiently define and sample such discrete models.
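As a hedged illustration of the sampling primitive that tensor-network constructions of this kind rely on, here is exact bitstring sampling from a matrix product state (MPS) via conditional marginals; the paper's specific TN parametrization of the diffusion process may well differ.

```python
import numpy as np

def sample_mps(tensors, rng):
    """Draw one bitstring from the Born-rule distribution of an MPS.

    tensors: list of arrays of shape (D_left, 2, D_right), with boundary
    bond dimensions equal to 1 so the chain contracts to a scalar."""
    n = len(tensors)
    R = [None] * (n + 1)              # right environments over sites k..n-1
    R[n] = np.ones((1, 1))
    for k in range(n - 1, -1, -1):
        A = tensors[k]
        R[k] = np.einsum('isr,rq,jsq->ij', A, R[k + 1], A.conj())
    bits, L = [], np.ones((1, 1))     # left environment of the sampled prefix
    for k in range(n):
        A = tensors[k]
        # Unnormalized conditional probability of each bit value at site k.
        p = np.array([np.einsum('ij,ir,jq,rq->', L, A[:, s, :],
                                A[:, s, :].conj(), R[k + 1]).real
                      for s in (0, 1)])
        p = np.clip(p, 0.0, None)
        p /= p.sum()
        s = rng.choice(2, p=p)
        bits.append(int(s))
        L = np.einsum('ij,ir,jq->rq', L, A[:, s, :], A[:, s, :].conj())
    return bits

rng = np.random.default_rng(1)
mps = [rng.normal(size=(1 if k == 0 else 4, 2, 1 if k == 7 else 4))
       for k in range(8)]
print(sample_mps(mps, rng))
```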
arXiv Detail & Related papers (2024-07-15T18:00:11Z)
- Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian Mixture Models [59.331993845831946]
Diffusion models benefit from instillation of task-specific information into the score function to steer the sample generation towards desired properties.
This paper provides the first theoretical study towards understanding the influence of guidance on diffusion models in the context of Gaussian mixture models.
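A minimal sketch of score guidance on a 1D Gaussian mixture, where all scores are available in closed form; the classifier-free-style interpolation with strength `guidance` is our assumption for illustration, not necessarily the guidance scheme analyzed in the paper.

```python
import numpy as np

def gmm_score(x, means, sigma2):
    """Exact score d/dx log p(x) of an equal-weight 1D Gaussian mixture."""
    w = np.exp(-(x - means) ** 2 / (2 * sigma2))
    w /= w.sum()                               # posterior component weights
    return np.sum(w * (means - x)) / sigma2

def guided_score(x, means, sigma2, target, guidance=2.0):
    """Interpolate between the unconditional mixture score and the score of
    the target component (guidance > 1 extrapolates, CFG-style)."""
    s_uncond = gmm_score(x, means, sigma2)
    s_cond = (means[target] - x) / sigma2      # score of the chosen component
    return s_uncond + guidance * (s_cond - s_uncond)

rng = np.random.default_rng(0)
means, sigma2, x = np.array([-2.0, 2.0]), 0.5, rng.normal()
for _ in range(5000):                          # Langevin with the guided score
    x += 0.01 * guided_score(x, means, sigma2, target=1) \
         + np.sqrt(0.02) * rng.normal()
print("sample near target mode:", x)
```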
arXiv Detail & Related papers (2024-03-03T23:15:48Z)
- Convergence Analysis of Discrete Diffusion Model: Exact Implementation through Uniformization [17.535229185525353]
We introduce an algorithm leveraging the uniformization of continuous Markov chains, implementing transitions on random time points.
Our results align with state-of-the-art achievements for diffusion models in $\mathbb{R}^d$ and further underscore the advantages of discrete diffusion models in comparison to the $\mathbb{R}^d$ setting.
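Uniformization itself is standard: a continuous-time Markov chain with generator $Q$ is simulated exactly by applying $P = I + Q/\lambda$ at the jump times of a rate-$\lambda$ Poisson process, with $\lambda \ge \max_i |Q_{ii}|$. A minimal sketch follows; the toy generator is ours, and the paper's reverse process would use a learned, time-dependent generator instead.

```python
import numpy as np

def sample_ctmc_uniformization(Q, x0, t, rng):
    """Simulate a continuous-time Markov chain on [0, t] via uniformization:
    apply P = I + Q/lam at Poisson(lam * t) random jump times."""
    lam = np.max(-np.diag(Q))                  # uniformization rate
    P = np.eye(Q.shape[0]) + Q / lam           # stochastic matrix (rows sum to 1)
    x = x0
    for _ in range(rng.poisson(lam * t)):      # possibly virtual jumps included
        x = rng.choice(Q.shape[0], p=P[x])
    return x

# Toy 3-state generator (rows sum to 0).
Q = np.array([[-1.0, 0.5, 0.5],
              [0.3, -0.6, 0.3],
              [0.2, 0.8, -1.0]])
rng = np.random.default_rng(0)
print(sample_ctmc_uniformization(Q, x0=0, t=2.0, rng=rng))
```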
arXiv Detail & Related papers (2024-02-12T22:26:52Z)
- Diffusion Models as Stochastic Quantization in Lattice Field Theory [7.221319972004889]
We establish a direct connection between generative diffusion models (DMs) and stochastic quantization (SQ).
The DM is realized by approximating the reversal of a process dictated by the Langevin equation, generating samples from a prior distribution to effectively mimic the target distribution.
We demonstrate that DMs can notably reduce autocorrelation times in the Markov chain, especially in the critical region where standard Markov chain Monte Carlo algorithms experience critical slowing down.
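Autocorrelation-time comparisons of this kind typically use the integrated autocorrelation time $\tau_{\mathrm{int}} = \tfrac{1}{2} + \sum_t \rho(t)$ with a self-consistent summation window; a minimal estimator sketch (the windowing constant and the placeholder observable name are our assumptions, not the paper's code):

```python
import numpy as np

def integrated_autocorr_time(series, c=5.0):
    """Estimate the integrated autocorrelation time of an MCMC observable
    using a Sokal-style self-consistent window."""
    x = np.asarray(series) - np.mean(series)
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]
    acf /= acf[0]                              # normalized autocorrelation
    tau = 0.5
    for w, rho in enumerate(acf[1:], start=1):
        tau += rho
        if w >= c * tau:                       # window grows with the estimate
            break
    return tau

# Usage: compare tau_int of, e.g., the magnetization history from a standard
# MCMC chain against a chain of diffusion-model proposals.
# print(integrated_autocorr_time(magnetization_history))
```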
arXiv Detail & Related papers (2023-09-29T09:26:59Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
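For intuition on ODE-based sampling of a variance-exploding SDE, the sketch below integrates the probability-flow ODE $\mathrm{d}x/\mathrm{d}t = -\tfrac{1}{2}\,\tfrac{\mathrm{d}\sigma^2}{\mathrm{d}t}\,\nabla_x \log p_t(x)$ with $\sigma(t)=t$ for a Gaussian data distribution, where the score is exact; the paper's mean-shift correspondence is not reproduced here, and the schedule choice is our assumption.

```python
import numpy as np

def ve_pf_ode_sample(n=5, s0=1.0, T=10.0, steps=1000, seed=0):
    """Euler integration of the probability-flow ODE for a variance-exploding
    SDE with sigma(t) = t; data ~ N(0, s0^2), so the noised marginal is
    N(0, s0^2 + t^2) and the score is known in closed form."""
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=np.sqrt(s0**2 + T**2), size=n)  # draw from the prior
    ts = np.linspace(T, 0.0, steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        score = -x / (s0**2 + t0**2)           # exact score of N(0, s0^2 + t^2)
        x += -0.5 * (2.0 * t0) * score * (t1 - t0)  # deterministic ODE step
    return x

print(ve_pf_ode_sample())                      # samples should look ~ N(0, s0^2)
```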
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models [77.83923746319498]
We propose a framework called Diff-Instruct to instruct the training of arbitrary generative models.
We show that Diff-Instruct results in state-of-the-art single-step diffusion-based models.
Experiments on refining GAN models show that Diff-Instruct can consistently improve the pre-trained generators of GAN models.
arXiv Detail & Related papers (2023-05-29T04:22:57Z)
- An optimal control perspective on diffusion-based generative modeling [9.806130366152194]
We establish a connection between optimal control and generative models based on stochastic differential equations (SDEs).
In particular, we derive a Hamilton-Jacobi-Bellman equation that governs the evolution of the log-densities of the underlying SDE marginals.
We develop a novel diffusion-based method for sampling from unnormalized densities.
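The Hopf-Cole substitution behind such a result is standard and sketched here under our own conventions (the paper's precise statement may differ): for $\mathrm{d}X_t = \mu(X_t,t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t$ the marginal density obeys the Fokker-Planck equation, and $v = \log p$ then satisfies an HJB-type PDE:

$$\partial_t p = -\nabla\cdot(\mu\, p) + \tfrac{\sigma^2}{2}\,\Delta p, \qquad v := \log p,$$
$$\partial_t v = -\mu\cdot\nabla v - \nabla\cdot\mu + \tfrac{\sigma^2}{2}\left(\Delta v + |\nabla v|^2\right).$$

The quadratic term $|\nabla v|^2$ is what makes the equation of Hamilton-Jacobi-Bellman type.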
arXiv Detail & Related papers (2022-11-02T17:59:09Z)
- Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance [95.12230117950232]
We show that a common latent space emerges from two diffusion models trained independently on related domains.
Applying CycleDiffusion, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors.
arXiv Detail & Related papers (2022-10-11T15:53:52Z)
- Super-model ecosystem: A domain-adaptation perspective [101.76769818069072]
This paper attempts to establish the theoretical foundation for the emerging super-model paradigm via domain adaptation.
Super-model paradigms help reduce computational and data costs and carbon emissions, which is critical to the AI industry.
arXiv Detail & Related papers (2022-08-30T09:09:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.