Learning Lattice Quantum Field Theories with Equivariant Continuous
Flows
- URL: http://arxiv.org/abs/2207.00283v3
- Date: Wed, 20 Dec 2023 14:15:49 GMT
- Title: Learning Lattice Quantum Field Theories with Equivariant Continuous
Flows
- Authors: Mathis Gerdes, Pim de Haan, Corrado Rainone, Roberto Bondesan, Miranda
C. N. Cheng
- Abstract summary: We propose a novel machine learning method for sampling from the high-dimensional probability distributions of Lattice Field Theories.
We test our model on the $\phi^4$ theory, showing that it systematically outperforms previously proposed flow-based methods in sampling efficiency.
- Score: 10.124564216461858
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel machine learning method for sampling from the
high-dimensional probability distributions of Lattice Field Theories, which is
based on a single neural ODE layer and incorporates the full symmetries of the
problem. We test our model on the $\phi^4$ theory, showing that it
systematically outperforms previously proposed flow-based methods in sampling
efficiency, and the improvement is especially pronounced for larger lattices.
Furthermore, we demonstrate that our model can learn a continuous family of
theories at once, and the results of learning can be transferred to larger
lattices. Such generalizations further accentuate the advantages of machine
learning methods.
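To make the ingredients concrete, here is a minimal sketch, not the authors' implementation: it assumes a tiny 4x4 periodic lattice, illustrative couplings, a bias-free circular-padded CNN as the velocity field (odd in $\phi$, so the $Z_2$ and translation symmetries of the $\phi^4$ theory are respected), explicit Euler integration, and exact divergences via autograd. All names and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

L = 4                     # illustrative lattice extent (4x4)
M2, LAM = -4.0, 8.0       # illustrative couplings: mass^2 and quartic

def phi4_action(phi):
    """Euclidean phi^4 action on a periodic L x L lattice; phi: (batch, L, L)."""
    kin = sum(((phi - torch.roll(phi, 1, dims=d)) ** 2).sum(dim=(-2, -1))
              for d in (-2, -1))
    pot = (M2 * phi ** 2 + LAM * phi ** 4).sum(dim=(-2, -1))
    return 0.5 * kin + pot

class VectorField(nn.Module):
    """Autonomous velocity field. Circular padding gives translation
    equivariance; bias-free convolutions with an odd activation (tanh)
    make v(-phi) = -v(phi), respecting the Z2 symmetry phi -> -phi."""
    def __init__(self, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1, padding_mode="circular", bias=False),
            nn.Tanh(),
            nn.Conv2d(width, 1, 3, padding=1, padding_mode="circular", bias=False),
        )

    def forward(self, phi):
        return self.net(phi.unsqueeze(1)).squeeze(1)

def div_and_v(v_fn, phi):
    """Exact divergence of v at phi, one autograd pass per lattice site
    (fine at this toy size; larger lattices would use a trace estimator)."""
    phi = phi.detach().requires_grad_(True)
    v = v_fn(phi)
    n, d = phi.shape[0], phi[0].numel()
    div = torch.zeros(n)
    for i in range(d):
        e = torch.zeros(n, d)
        e[:, i] = 1.0
        g, = torch.autograd.grad(v, phi, e.view_as(phi), retain_graph=i < d - 1)
        div += g.view(n, -1)[:, i]
    return div, v.detach()

def flow_sample(v_fn, n=64, steps=32):
    """Integrate dphi/dt = v(phi) from a unit Gaussian prior, tracking
    the model log-density via dlogq/dt = -div v (explicit Euler)."""
    phi = torch.randn(n, L, L)
    logq = torch.distributions.Normal(0.0, 1.0).log_prob(phi).sum(dim=(-2, -1))
    dt = 1.0 / steps
    for _ in range(steps):
        div, v = div_and_v(v_fn, phi)
        phi = phi + dt * v
        logq = logq - dt * div
    return phi, logq

v = VectorField()
phi, logq = flow_sample(v)
logw = -phi4_action(phi) - logq     # unnormalized importance log-weights
ess = torch.exp(2 * torch.logsumexp(logw, 0) - torch.logsumexp(2 * logw, 0))
print(f"effective sample size: {ess.item():.1f} of {phi.shape[0]}")
```

The effective sample size computed from the importance log-weights is the usual metric in which flow-based samplers are compared; training (not shown) would minimize the reverse KL divergence $\mathbb{E}_q[\log q + S(\phi)]$, which requires differentiating through the integration above.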
Related papers
- Learning and Verifying Maximal Taylor-Neural Lyapunov functions [0.4910937238451484]
We introduce a novel neural network architecture, termed Taylor-neural Lyapunov functions.
This architecture encodes local approximations and extends them globally by leveraging neural networks to approximate the residuals.
This work represents a significant advancement in control theory, with broad potential applications in the design of stable control systems and beyond.
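A minimal sketch of the stated idea under one concrete, hypothetical realization: a positive-definite quadratic term playing the role of the local Taylor approximation around the equilibrium at the origin, plus a neural residual that vanishes there and extends the candidate globally. The class name and scaling choices are assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class TaylorNeuralLyapunov(nn.Module):
    """Hypothetical candidate V(x): a positive-definite quadratic (the local
    Taylor term) plus a neural residual that vanishes at the origin."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.R = nn.Parameter(torch.eye(dim))      # quadratic term x^T R^T R x
        self.residual = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x):
        quad = (x @ self.R.T).pow(2).sum(dim=-1)   # local Taylor approximation
        # residual is zero at x = 0 by construction, so V(0) = 0
        res = self.residual(x) - self.residual(torch.zeros_like(x))
        return quad + x.pow(2).sum(dim=-1) * torch.tanh(res.squeeze(-1))
```

Note that positivity and decrease along trajectories are not guaranteed by this parametrization; the "verifying" half of the title is a separate step and is not shown.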
arXiv Detail & Related papers (2024-08-30T12:40:12Z) - Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
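Of the listed algorithms, reward-weighted MLE is the simplest to sketch generically. In the hedged snippet below, `logp` stands in for the model's per-sample log-likelihood (for diffusion models, typically an ELBO) and `reward` for the downstream reward; both names, and the softmax weighting, are placeholder assumptions.

```python
import torch

def reward_weighted_mle_loss(logp, reward, beta=1.0):
    """Generic reward-weighted MLE: weight each sample's log-likelihood by a
    softmax of its reward, so high-reward samples dominate the MLE update."""
    w = torch.softmax(reward / beta, dim=0).detach()   # stop-grad on weights
    return -(w * logp).sum()
```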
arXiv Detail & Related papers (2024-07-18T17:35:32Z) - Gaussian Universality in Neural Network Dynamics with Generalized Structured Input Distributions [2.3020018305241337]
We analyze the behavior of a deep learning system trained on inputs modeled as Gaussian mixtures to better simulate more general structured inputs.
Under certain standardization schemes, the deep learning model converges toward the behavior of the purely Gaussian setting, even when the input data follow more complex or real-world distributions.
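The input model being described fits in a few lines; the sizes and the choice of per-feature standardization below are illustrative assumptions.

```python
import torch

K, d, n = 3, 16, 1024                  # illustrative: components, dim, samples
means = 2.0 * torch.randn(K, d)        # mixture means carry the "structure"
labels = torch.randint(0, K, (n,))
X = means[labels] + torch.randn(n, d)  # Gaussian-mixture inputs
X = (X - X.mean(0)) / X.std(0)         # per-feature standardization
```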
arXiv Detail & Related papers (2024-05-01T17:10:55Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Training Deep Neural Network (DNN) models involves a non-convex objective.
In this paper we examine the use of convex neural network recovery models.
We show that the stationary points of the non-convex objective can be characterized as global optima of a subsampled convex (Lasso) program.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Deep Stochastic Mechanics [17.598067133568062]
This paper introduces a novel deep-learning-based approach for numerical simulation of a time-evolving Schrödinger equation.
Our method allows us to adapt to the latent low-dimensional structure of the wave function by sampling from the Markovian diffusion.
arXiv Detail & Related papers (2023-05-31T09:28:03Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
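A hedged, LinUCB-style sketch of such an exploration bonus, using generic feature embeddings as a stand-in for the paper's kernel embeddings of latent variable models; the function and its parameters are hypothetical.

```python
import torch

def ucb_bonus(Phi, phi_query, lam=1.0, beta=1.0):
    """Elliptical-potential UCB bonus: beta * sqrt(phi^T Sigma^{-1} phi),
    with Sigma = Phi^T Phi + lam * I.
    Phi: (n, d) embeddings of visited (s, a); phi_query: (m, d) candidates."""
    Sigma = Phi.T @ Phi + lam * torch.eye(Phi.shape[1])
    sol = torch.linalg.solve(Sigma, phi_query.T)   # Sigma^{-1} phi per query
    return beta * torch.sqrt((phi_query.T * sol).sum(dim=0))
```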
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Aspects of scaling and scalability for flow-based sampling of lattice
QCD [137.23107300589385]
Recent applications of machine-learned normalizing flows to sampling in lattice field theory suggest that such methods may be able to mitigate critical slowing down and topological freezing.
It remains to be determined whether they can be applied to state-of-the-art lattice quantum chromodynamics calculations.
arXiv Detail & Related papers (2022-11-14T17:07:37Z) - An optimal control perspective on diffusion-based generative modeling [9.806130366152194]
We establish a connection between optimal control and generative models based on stochastic differential equations (SDEs).
In particular, we derive a Hamilton-Jacobi-Bellman equation that governs the evolution of the log-densities of the underlying SDE marginals.
We develop a novel diffusion-based method for sampling from unnormalized densities.
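To make the second sentence concrete: for an SDE with drift $\mu$ and scalar diffusion $\sigma(t)$, the log-density $V = \log p$ of the marginals obeys an HJB-type equation, obtained from the Fokker-Planck equation by the Hopf-Cole substitution. The notation below is a standard reconstruction, not necessarily the paper's conventions.

```latex
% Fokker--Planck for  dX_t = \mu(X_t, t)\,dt + \sigma(t)\,dW_t,  marginals p_t:
%   \partial_t p = -\nabla \cdot (\mu\, p) + \tfrac{1}{2}\sigma^2 \Delta p .
% Substituting V = \log p (Hopf--Cole) gives the HJB-type equation
\partial_t V = -\nabla \cdot \mu - \mu \cdot \nabla V
             + \frac{\sigma^2}{2}\left( \Delta V + \lVert \nabla V \rVert^2 \right).
```

The quadratic term $\lVert \nabla V \rVert^2$ is what makes this a Hamilton-Jacobi-Bellman equation rather than a linear one, and is the hook to optimal control.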
arXiv Detail & Related papers (2022-11-02T17:59:09Z) - Stochastic normalizing flows as non-equilibrium transformations [62.997667081978825]
We show that normalizing flows provide a route to sample lattice field theories more efficiently than conventional Monte Carlo simulations.
We lay out a strategy to optimize the efficiency of this extended class of generative models and present examples of applications.
arXiv Detail & Related papers (2022-01-21T19:00:18Z) - Flow-based sampling for fermionic lattice field theories [8.46509435333566]
This work develops approaches that enable flow-based sampling of theories with dynamical fermions.
As a practical demonstration, these methods are applied to the sampling of field configurations for a two-dimensional theory of massless staggered fermions.
arXiv Detail & Related papers (2021-06-10T17:32:47Z) - Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
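One standard instance of geometrically-driven optimization on $O(d)$, sketched under the assumption of a geodesic (matrix-exponential) retraction; the paper's specific flow-based scheme may differ.

```python
import torch

def orthogonal_step(W, grad, lr=1e-2):
    """One Riemannian gradient step on O(d): project the Euclidean gradient to
    a skew-symmetric tangent direction A, then retract along the geodesic
    W <- W exp(-lr A). exp of a skew matrix is orthogonal, so W stays on O(d)."""
    A = W.T @ grad
    A = A - A.T                        # skew-symmetric tangent direction
    return W @ torch.matrix_exp(-lr * A)

# usage: decrease a loss f(W) while staying exactly on O(d)
d = 5
W, _ = torch.linalg.qr(torch.randn(d, d))   # random orthogonal start
target = torch.eye(d)
for _ in range(200):
    W = W.detach().requires_grad_(True)
    loss = ((W - target) ** 2).sum()
    loss.backward()
    W = orthogonal_step(W.detach(), W.grad)
print(torch.allclose(W.T @ W, torch.eye(d), atol=1e-4))  # still orthogonal
```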
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.