Deep Learning Hamiltonian Monte Carlo
- URL: http://arxiv.org/abs/2105.03418v1
- Date: Fri, 7 May 2021 17:50:18 GMT
- Title: Deep Learning Hamiltonian Monte Carlo
- Authors: Sam Foreman, Xiao-Yong Jin, and James C. Osborn
- Abstract summary: We generalize the Hamiltonian Monte Carlo algorithm with a stack of neural network layers.
We demonstrate that our model is able to successfully mix between modes of different topologies.
- Score: 0.6554326244334867
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We generalize the Hamiltonian Monte Carlo algorithm with a stack of neural
network layers and evaluate its ability to sample from different topologies in
a two dimensional lattice gauge theory. We demonstrate that our model is able
to successfully mix between modes of different topologies, significantly
reducing the computational cost required to generate independent gauge field
configurations. Our implementation is available at
https://github.com/saforem2/l2hmc-qcd .
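For intuition, the sketch below shows the kind of update the paper describes: a leapfrog integrator whose momentum updates are scaled and shifted by neural networks, with the log-Jacobian of the scaling folded into the Metropolis test. Everything here is an illustrative assumption: the toy Gaussian target stands in for the lattice gauge action, s_fn/t_fn are placeholder single-layer networks rather than trained models, and the masks and direction variable the full algorithm uses to guarantee exact invertibility are omitted. See the linked repository for the actual implementation.
```python
# Minimal sketch (not the authors' implementation) of a neural-network-
# generalized HMC update. Assumptions: a toy Gaussian target stands in for
# the lattice gauge action, and s_fn/t_fn are untrained placeholder networks.
import numpy as np

rng = np.random.default_rng(0)
dim, eps, n_steps = 2, 0.1, 10

def U(x):          # potential energy, -log p(x); here a standard Gaussian
    return 0.5 * x @ x

def grad_U(x):
    return x

# Placeholder "networks": one random linear layer plus tanh each.
Ws = rng.normal(scale=0.1, size=(dim, dim))
Wt = rng.normal(scale=0.1, size=(dim, dim))
s_fn = lambda x: np.tanh(Ws @ x)   # learned momentum scaling
t_fn = lambda x: np.tanh(Wt @ x)   # learned momentum translation

def generalized_leapfrog(x, p):
    """Run one trajectory; return the proposal and the log-Jacobian."""
    logdet = 0.0
    for _ in range(n_steps):
        s = s_fn(x)                # scaled momentum half-step
        p = p * np.exp(0.5 * eps * s) - 0.5 * eps * (grad_U(x) + t_fn(x))
        logdet += 0.5 * eps * s.sum()
        x = x + eps * p            # standard position full step
        s = s_fn(x)                # second scaled momentum half-step
        p = p * np.exp(0.5 * eps * s) - 0.5 * eps * (grad_U(x) + t_fn(x))
        logdet += 0.5 * eps * s.sum()
    return x, p, logdet

x = rng.normal(size=dim)
for _ in range(1000):
    p0 = rng.normal(size=dim)                  # resample momentum
    x1, p1, logdet = generalized_leapfrog(x, p0)
    # Metropolis test with the Jacobian correction from the learned scaling
    dH = (U(x) + 0.5 * p0 @ p0) - (U(x1) + 0.5 * p1 @ p1)
    if np.log(rng.uniform()) < dH + logdet:
        x = x1
```
In the actual method the parameters of these networks are trained so that successive configurations decorrelate quickly, which is what lets the sampler move between topological sectors.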
Related papers
- Flow-based Sampling for Entanglement Entropy and the Machine Learning of Defects [38.18440341418837]
We introduce a novel technique to numerically calculate Rényi entanglement entropies in lattice quantum field theory using generative models.
We describe how flow-based approaches can be combined with the replica trick using a custom neural-network architecture around a lattice defect connecting two replicas.
arXiv Detail & Related papers (2024-10-18T13:51:25Z) - From FermiNet to PINN. Connections between neural network-based algorithms for high-dimensional Schrödinger Hamiltonian [0.0]
In particular, we re-formulate a PINN algorithm as a fitting problem with data corresponding to the solution to a standard Monte Carlo algorithm.
Connections at the level of the optimization algorithms are also established.
arXiv Detail & Related papers (2024-10-11T18:27:58Z) - A Deep Dive into the Connections Between the Renormalization Group and Deep Learning in the Ising Model [0.0]
Renormalization group (RG) is an essential technique in statistical physics and quantum field theory.
We develop extensive renormalization techniques for the 1D and 2D Ising model to provide a baseline for comparison.
For the 2D Ising model, we successfully generated Ising model samples using the Wolff algorithm, and performed the renormalization group flow using a quasi-deterministic method.
arXiv Detail & Related papers (2023-08-21T22:50:54Z) - Representation Learning via Manifold Flattening and Reconstruction [10.823557517341964]
This work proposes an algorithm for explicitly constructing a pair of neural networks that linearize and reconstruct an embedded submanifold.
The resulting neural networks, called Flattening Networks (FlatNet), are theoretically interpretable, computationally feasible at scale, and generalize well to test data.
arXiv Detail & Related papers (2023-05-02T20:36:34Z) - Stochastic normalizing flows as non-equilibrium transformations [62.997667081978825]
We show that normalizing flows provide a route to sample lattice field theories more efficiently than conventional Monte Carlo simulations.
We lay out a strategy to optimize the efficiency of this extended class of generative models and present examples of applications.
arXiv Detail & Related papers (2022-01-21T19:00:18Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated seamlessly with neural networks.
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - LeapfrogLayers: A Trainable Framework for Effective Topological Sampling [0.7366405857677227]
We introduce LeapfrogLayers, an invertible neural network architecture that can be trained to efficiently sample the topology of a 2D $U(1)$ lattice gauge theory.
We show an improvement in the integrated autocorrelation time of the topological charge when compared with traditional HMC (a simple estimator of this quantity is sketched after this list), and propose methods for scaling our model to larger lattice volumes.
arXiv Detail & Related papers (2021-12-02T19:48:16Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can make two classes of low-complexity data linearly separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - Generalize a Small Pre-trained Model to Arbitrarily Large TSP Instances [55.64521598173897]
This paper trains a small-scale model that can be used repeatedly to build heat maps for the traveling salesman problem (TSP).
Heat maps are fed into a reinforcement learning approach (Monte Carlo tree search) to guide the search for high-quality solutions.
Experimental results show that this new approach clearly outperforms existing machine-learning-based TSP algorithms.
arXiv Detail & Related papers (2020-12-19T11:06:30Z) - Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning, including deep, convolutional and recurrent neural networks, reinforcement learning, normalizing flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z) - Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
arXiv Detail & Related papers (2020-02-20T10:50:58Z)
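Several of the entries above quantify sampling efficiency through the integrated autocorrelation time of an observable such as the topological charge (as referenced in the LeapfrogLayers entry). Below is a minimal, hedged estimator sketch; the AR(1) test chain and the truncate-at-first-non-positive-lag rule are illustrative choices, not the windowing procedure of any particular paper.
```python
# Simple estimator of the integrated autocorrelation time tau_int of a
# 1-D chain of measurements (e.g., topological charge per configuration).
import numpy as np

def integrated_autocorr_time(q):
    """Return tau_int = 1/2 + sum of normalized autocorrelations."""
    q = np.asarray(q, dtype=float) - np.mean(q)
    n = len(q)
    var = q @ q / n
    tau = 0.5
    for t in range(1, n // 2):
        rho = (q[:-t] @ q[t:]) / ((n - t) * var)  # autocorrelation at lag t
        if rho <= 0:      # simple truncation heuristic, not Wolff windowing
            break
        tau += rho
    return tau

# Toy usage: an AR(1) chain with coefficient 0.9 has rho(t) = 0.9**t,
# so tau_int = 0.5 + 0.9/0.1 = 9.5, which the estimate should approach.
rng = np.random.default_rng(1)
chain = np.zeros(5000)
for i in range(1, len(chain)):
    chain[i] = 0.9 * chain[i - 1] + rng.normal()
print(integrated_autocorr_time(chain))
```
A larger tau_int means fewer effectively independent samples per unit of compute, which is exactly the cost the main paper aims to reduce.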
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.