Generating configurations of increasing lattice size with machine learning and the inverse renormalization group
- URL: http://arxiv.org/abs/2405.16288v1
- Date: Sat, 25 May 2024 16:00:37 GMT
- Title: Generating configurations of increasing lattice size with machine learning and the inverse renormalization group
- Authors: Dimitrios Bachtis
- Abstract summary: Inverse renormalization group methods enable the iterative generation of configurations for increasing lattice size without the critical slowing down effect.
We present applications in models of statistical mechanics, lattice field theory, and disordered systems.
We highlight the case of the three-dimensional Edwards-Anderson spin glass, where the inverse renormalization group can be employed to construct configurations for lattice volumes that have not yet been accessed by dedicated supercomputers.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We review recent developments of machine learning algorithms pertinent to the inverse renormalization group, which was originally established as a generative numerical method by Ron-Swendsen-Brandt via the implementation of compatible Monte Carlo simulations. Inverse renormalization group methods enable the iterative generation of configurations for increasing lattice size without the critical slowing down effect. We discuss the construction of inverse renormalization group transformations with the use of convolutional neural networks and present applications in models of statistical mechanics, lattice field theory, and disordered systems. We highlight the case of the three-dimensional Edwards-Anderson spin glass, where the inverse renormalization group can be employed to construct configurations for lattice volumes that have not yet been accessed by dedicated supercomputers.
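The core mechanic lends itself to a compact illustration. Below is a minimal sketch, assuming a hypothetical PyTorch architecture of my own choosing (not the authors' published network): a transposed convolution proposes a lattice of twice the linear size from a coarse Ising configuration, and iterating the step grows the lattice without re-running Monte Carlo at each size.

```python
# A minimal sketch (hypothetical architecture, not the authors' network):
# a transposed convolution maps an L x L Ising configuration to a 2L x 2L
# proposal; iterating the step generates configurations of increasing size.
import torch
import torch.nn as nn

class InverseRGStep(nn.Module):
    """Propose a 2L x 2L spin configuration from an L x L one."""
    def __init__(self, hidden=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(1, hidden, kernel_size=2, stride=2),  # L -> 2L
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),          # keep 2L
        )

    def forward(self, coarse):
        # Probability that each fine-lattice spin is +1.
        return torch.sigmoid(self.net(coarse))

step = InverseRGStep()  # untrained: shapes and sampling logic only
config = torch.randint(0, 2, (1, 1, 16, 16)).float() * 2 - 1  # +/-1 spins
for _ in range(2):  # 16 -> 32 -> 64
    config = torch.bernoulli(step(config)) * 2 - 1
print(config.shape)  # torch.Size([1, 1, 64, 64])
```

In practice such a step would be trained so that renormalizing its output reproduces the statistics of the input ensemble; the sketch only fixes the shapes and sampling logic.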
Related papers
- Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach [14.536819369925398]
We design a deep neural model outputting factorized unimodular matrices and train it in a self-supervised manner by penalizing non-orthogonal lattice bases.
arXiv Detail & Related papers (2023-11-14T13:54:35Z)
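One plausible reading of the self-supervised penalty above (an assumption on my part, not the paper's exact loss) is the classical orthogonality defect of a lattice basis, which equals 1 exactly when the basis is orthogonal:

```python
# A hedged sketch of an orthogonality penalty for a lattice basis B:
# the product of column norms divided by |det B| (orthogonality defect).
import numpy as np

def orthogonality_defect(B):
    """Equals 1 iff the columns of B are orthogonal; larger means more skewed."""
    col_norms = np.linalg.norm(B, axis=0)
    return np.prod(col_norms) / abs(np.linalg.det(B))

B = np.array([[1.0, 0.9],
              [0.0, 0.5]])          # skewed 2D lattice basis
print(orthogonality_defect(B))      # > 1, so this basis would be penalized
```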
- Inverse renormalization group of spin glasses [0.0]
We propose inverse renormalization group transformations to construct approximate configurations for lattice volumes that have not yet been accessed by supercomputers or large-scale simulations in the study of spin glasses.
We employ machine learning algorithms to construct rescaled lattices up to $V'=128^3$, which we utilize to extract two critical exponents.
arXiv Detail & Related papers (2023-10-19T10:35:41Z)
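As a toy version of the exponent extraction above, with entirely hypothetical numbers: if the slope of a dimensionless observable at criticality scales as s_L ~ L^{1/nu}, then two lattices related by the rescaling factor b give nu directly.

```python
# Finite-size-scaling illustration (hypothetical slopes, not the paper's data):
# nu = ln(b) / ln(s_{bL} / s_L) for lattices related by rescaling factor b.
import numpy as np

b = 2.0
slope_L, slope_2L = 1.31, 2.05   # hypothetical slopes at sizes L and 2L
nu = np.log(b) / np.log(slope_2L / slope_L)
print(f"estimated nu = {nu:.2f}")
```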
- The Decimation Scheme for Symmetric Matrix Factorization [0.0]
Matrix factorization is an inference problem that has acquired importance due to its vast range of applications.
We study this extensive-rank problem, extending the alternative 'decimation' procedure that we recently introduced.
We introduce a simple algorithm based on a ground state search that implements decimation and performs matrix factorization.
arXiv Detail & Related papers (2023-07-31T10:53:45Z)
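A hedged sketch of what a decimation-style factorization can look like (my illustration; the paper's ground-state search is replaced here by a leading-eigenvector step): peel off one rank-one component of a symmetric matrix at a time.

```python
# Decimation-style sketch: repeatedly find the dominant eigenpair (standing in
# for a "ground state" search) and subtract its rank-one component.
import numpy as np

def decimate(M, rank):
    """Approximate symmetric M as a sum of `rank` rank-one terms."""
    factors, residual = [], M.copy()
    for _ in range(rank):
        vals, vecs = np.linalg.eigh(residual)
        i = np.argmax(np.abs(vals))                       # dominant eigenpair
        factors.append(vals[i] * np.outer(vecs[:, i], vecs[:, i]))
        residual = residual - factors[-1]                 # decimate it away
    return sum(factors)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
M = X @ X.T                                # rank-2 symmetric matrix
print(np.allclose(decimate(M, 2), M))      # True: two steps recover M
```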
- Geometric Clifford Algebra Networks [53.456211342585824]
We propose Geometric Clifford Algebra Networks (GCANs) for modeling dynamical systems.
GCANs are based on symmetry group transformations using geometric (Clifford) algebras.
arXiv Detail & Related papers (2023-02-13T18:48:33Z)
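The geometric-algebra primitive underlying GCANs can be shown in a toy form (my own minimal example, far simpler than the paper's layers): a rotor in Cl(2,0) acting on a vector through the sandwich product R v R~.

```python
# Toy rotor in the geometric algebra Cl(2,0): R = cos(t/2) - sin(t/2) e12
# rotates a vector v via R v R~. Expanding the sandwich product by hand gives
# the closed-form coefficients used below.
import numpy as np

def rotor_apply(theta, v):
    """Rotate 2D vector v by theta using even-grade multivector (a, b*e12)."""
    a, b = np.cos(theta / 2), -np.sin(theta / 2)   # rotor components
    x, y = v
    xr = (a * a - b * b) * x + 2 * a * b * y       # e1 coefficient of R v R~
    yr = (a * a - b * b) * y - 2 * a * b * x       # e2 coefficient of R v R~
    return np.array([xr, yr])

print(rotor_apply(np.pi / 2, np.array([1.0, 0.0])))  # ~ [0, 1]
```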
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
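The structural point above, that an explicit Runge-Kutta method is itself a short recurrence whose coefficients could be treated as weights, fits in a few lines (a generic sketch of mine, with the Butcher tableau fixed to classical RK4 rather than learned):

```python
# One explicit Runge-Kutta step as a recurrence over stages; the tableau
# (A, b, c) plays the role of the trainable weights, fixed here to RK4.
import numpy as np

def rk_step(f, t, y, h, A, b, c):
    """Advance y by one step of the explicit RK method given by (A, b, c)."""
    k = []
    for i in range(len(b)):                              # recurrent stage loop
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b, c = [1 / 6, 1 / 3, 1 / 3, 1 / 6], [0, 0.5, 0.5, 1]

f = lambda t, y: -y                        # dy/dt = -y with y(0) = 1
y, h = 1.0, 0.1
for n in range(10):
    y = rk_step(f, n * h, y, h, A, b, c)
print(y, np.exp(-1.0))                     # close to e^{-1}
```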
- Stochastic normalizing flows as non-equilibrium transformations [62.997667081978825]
We show that normalizing flows provide a route to sample lattice field theories more efficiently than conventional Monte Carlo simulations.
We lay out a strategy to optimize the efficiency of this extended class of generative models and present examples of applications.
arXiv Detail & Related papers (2022-01-21T19:00:18Z)
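A minimal sketch of the reweighting logic that makes flow-based sampling exact (generic, not the paper's non-equilibrium construction): draw independent samples from a tractable model density q, then correct expectation values with weights w = p/q, sidestepping the autocorrelations behind critical slowing down.

```python
# Importance reweighting of flow samples. The "flow" here is a simple affine
# map of standard normals, so the model density q is known in closed form.
import numpy as np

rng = np.random.default_rng(1)

mu, sigma = 0.3, 1.2
z = rng.normal(size=10000)
x = mu + sigma * z                                        # flow output
log_q = (-0.5 * ((x - mu) / sigma) ** 2
         - np.log(sigma) - 0.5 * np.log(2 * np.pi))       # model density

log_p = -0.5 * x ** 2                  # target "action": standard normal

w = np.exp(log_p - log_q)                                 # importance weights
print(np.average(x, weights=w), np.average(x ** 2, weights=w))  # ~0 and ~1
```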
- Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z)
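As a hedged illustration of the "fusion" coupling (a generic form of my choosing, not the paper's exact regularizer): penalize pairwise differences between the Laplacians being inferred, so that related graphs are pushed toward shared structure.

```python
# Generic fusion-style penalty: sum of Frobenius norms of pairwise
# differences between the graph Laplacians inferred jointly.
import numpy as np

def fusion_penalty(laplacians, lam=0.1):
    total = 0.0
    for i in range(len(laplacians)):
        for j in range(i + 1, len(laplacians)):
            total += np.linalg.norm(laplacians[i] - laplacians[j], "fro")
    return lam * total

L1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
L2 = np.array([[2.0, -2.0], [-2.0, 2.0]])
print(fusion_penalty([L1, L2]))   # larger when the topologies disagree
```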
- On the convergence of group-sparse autoencoders [9.393652136001732]
We introduce and study a group-sparse autoencoder that accounts for a variety of generative models.
For clustering models, inputs that result in the same group of active units belong to the same cluster.
In this setting, we theoretically prove the convergence of the network parameters to a neighborhood of the generating matrix.
arXiv Detail & Related papers (2021-02-13T21:17:07Z)
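The group-sparse penalty such models rely on has a standard generic form (shown here as my illustration, not the paper's exact objective): an l2 norm within each group of units, summed across groups, which drives whole groups to activate or go silent together.

```python
# Generic group-sparse penalty: sum over groups of the l2 norm of that
# group's activations (zero groups contribute nothing).
import numpy as np

def group_sparse_penalty(h, groups, lam=0.1):
    return lam * sum(np.linalg.norm(h[g]) for g in groups)

h = np.array([0.0, 0.0, 1.5, -0.2])
groups = [np.array([0, 1]), np.array([2, 3])]   # first group fully inactive
print(group_sparse_penalty(h, groups))
```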
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
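What equivariance means for self-attention can be checked concretely for the simplest discrete group (my toy example; the LieTransformer targets Lie groups and their discrete subgroups): without positional encodings, permuting the input tokens permutes the output identically.

```python
# Permutation equivariance of plain self-attention: f(P x) == P f(x).
import numpy as np

def self_attention(X):
    """Single-head self-attention with identity projections."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ X

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 3))
perm = rng.permutation(5)
print(np.allclose(self_attention(X[perm]), self_attention(X)[perm]))  # True
```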
- Granular Computing: An Augmented Scheme of Degranulation Through a Modified Partition Matrix [86.89353217469754]
Information granules, which form an abstract and efficient characterization of large volumes of numeric data, have been considered the fundamental constructs of Granular Computing.
Previous studies have shown that there is a relationship between the reconstruction error and the performance of the granulation process.
To enhance the quality of degranulation, in this study, we develop an augmented scheme through modifying the partition matrix.
arXiv Detail & Related papers (2020-04-03T03:20:09Z)
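Degranulation in its generic fuzzy-c-means form (a baseline sketch of mine, not the paper's augmented scheme) reconstructs data from prototypes and the partition matrix; the reconstruction error then quantifies the quality of the granulation.

```python
# Generic degranulation: rebuild each datum as a membership-weighted mean of
# the prototypes, using the fuzzified partition matrix U.
import numpy as np

def degranulate(U, V, m=2.0):
    """U: (prototypes x data) memberships, V: (prototypes x dims) centers."""
    W = U ** m                                   # fuzzified memberships
    return (W.T @ V) / W.sum(axis=0, keepdims=True).T

V = np.array([[0.0], [10.0]])                    # two 1D prototypes
U = np.array([[0.9, 0.2],
              [0.1, 0.8]])                       # memberships per datum
print(degranulate(U, V))                         # data pulled toward prototypes
```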
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.