Equivariant Transformer is all you need
- URL: http://arxiv.org/abs/2310.13222v1
- Date: Fri, 20 Oct 2023 01:57:03 GMT
- Title: Equivariant Transformer is all you need
- Authors: Akio Tomiya, Yuki Nagai
- Abstract summary: We introduce symmetry-equivariant attention to self-learning Monte Carlo.
We find that it overcomes the poor acceptance rates of linear models and observe a scaling law for the acceptance rate, analogous to that of large language models built on Transformers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning, and deep learning in particular, has been accelerating
computational physics, where it is used to simulate systems on a lattice. Equivariance
is essential for simulating a physical system because it imposes a strong inductive
bias on the probability distribution described by a machine learning model. This
reduces the risk of erroneous extrapolation that deviates from data symmetries
and physical laws. However, imposing symmetry on the model sometimes leads to a
poor acceptance rate in self-learning Monte Carlo (SLMC). On the other hand, the
attention mechanism used in Transformers such as GPT provides large model capacity.
We introduce symmetry-equivariant attention to SLMC and evaluate it on a
spin-fermion model on a two-dimensional lattice. We find that it overcomes the
poor acceptance rates of linear models and observe a scaling law for the
acceptance rate, analogous to that of large language models built on Transformers.
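For concreteness, the quantity whose scaling is reported here is the Metropolis acceptance probability of SLMC. Below is a minimal sketch of the exact-correction step, assuming scalar values of the target action S and of a learned effective action S_eff (for instance, the equivariant-attention model); the function and argument names are illustrative, not from the paper.

```python
import numpy as np

def slmc_accept(s_old, s_new, s_eff_old, s_eff_new, rng):
    """Metropolis correction step of self-learning Monte Carlo (SLMC).

    A candidate configuration is proposed with the cheap effective model
    S_eff and accepted with probability
        p = min(1, exp(-(S_new - S_old) + (S_eff_new - S_eff_old))),
    which keeps the Markov chain exact with respect to the target action S.
    The better S_eff tracks S, the closer p stays to 1, so the acceptance
    rate directly scores the quality of the learned model.
    """
    log_p = -(s_new - s_old) + (s_eff_new - s_eff_old)
    return rng.random() < np.exp(min(0.0, log_p))

rng = np.random.default_rng(0)
# Toy numbers: the effective action tracks the target action closely,
# so the proposal is accepted with probability ~exp(-0.01).
print(slmc_accept(s_old=1.00, s_new=1.10, s_eff_old=1.02, s_eff_new=1.11, rng=rng))
```

Any improvement in how well the model fits the target action shows up directly in this probability, which is why the acceptance rate is a natural axis for a scaling law.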
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
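As a rough illustration of the MCMC ingredient in such maximum-likelihood training, here is a minimal unadjusted Langevin sampler for a generic energy function; the quadratic energy in the usage line is a stand-in, not the paper's latent-space model.

```python
import numpy as np

def langevin_sample(grad_energy, z0, step=1e-2, n_steps=100, rng=None):
    """Unadjusted Langevin dynamics:
        z <- z - (step / 2) * grad E(z) + sqrt(step) * noise.
    Short-run chains like this are a common way to draw approximate
    samples from an energy-based prior during training."""
    rng = rng or np.random.default_rng()
    z = np.array(z0, dtype=float)
    for _ in range(n_steps):
        z = z - 0.5 * step * grad_energy(z) + np.sqrt(step) * rng.standard_normal(z.shape)
    return z

# Stand-in energy E(z) = ||z||^2 / 2, whose gradient is z; samples approach N(0, I).
z = langevin_sample(lambda z: z, z0=np.zeros(2), rng=np.random.default_rng(1))
print(z)
```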
arXiv Detail & Related papers (2024-09-05T18:14:22Z) - Probing the effects of broken symmetries in machine learning [0.0]
We show that non-symmetric models can learn symmetries from data, and that doing so can even be beneficial for the accuracy of the model.
We focus specifically on physical observables that are likely to be affected -- directly or indirectly -- by symmetry breaking, finding negligible consequences when the model is used in an interpolative, bulk, regime.
arXiv Detail & Related papers (2024-06-25T17:34:09Z) - Similarity Equivariant Graph Neural Networks for Homogenization of Metamaterials [3.6443770850509423]
Soft, porous mechanical metamaterials exhibit pattern transformations that may have important applications in soft robotics, sound reduction and biomedicine.
We develop a machine learning-based approach that scales favorably to serve as a surrogate model.
We show that this network is more accurate and data-efficient than graph neural networks with fewer symmetries.
arXiv Detail & Related papers (2024-04-26T12:30:32Z) - Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
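A minimal sketch of that variation, under illustrative assumptions (a list of per-example gradient functions standing in for a dataset, and a fixed batch size): SGLD injects Gaussian noise scaled to the step size, and without-replacement minibatching sweeps a shuffled permutation of the data instead of resampling it.

```python
import numpy as np

def sgld_epoch_without_replacement(theta, per_example_grads, lr, temperature, rng, batch_size=4):
    """One epoch of stochastic gradient Langevin dynamics (SGLD) with
    without-replacement minibatching: every example is visited exactly
    once per epoch, in a freshly shuffled order."""
    n = len(per_example_grads)
    order = rng.permutation(n)  # without replacement: a shuffle, not i.i.d. draws
    for start in range(0, n, batch_size):
        batch = order[start:start + batch_size]
        grad = np.mean([per_example_grads[i](theta) for i in batch], axis=0)
        noise = rng.standard_normal(theta.shape)
        # Langevin update: gradient step plus noise at the chosen temperature.
        theta = theta - lr * grad + np.sqrt(2.0 * lr * temperature) * noise
    return theta

# Toy quadratic losses l_i(theta) = ||theta - x_i||^2 / 2, so grad_i = theta - x_i.
rng = np.random.default_rng(0)
xs = rng.standard_normal((16, 2))
grads = [lambda t, x=x: t - x for x in xs]
print(sgld_epoch_without_replacement(np.zeros(2), grads, lr=1e-2, temperature=1e-3, rng=rng))
```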
arXiv Detail & Related papers (2023-06-06T09:12:49Z) - FAENet: Frame Averaging Equivariant GNN for Materials Modeling [123.19473575281357]
We introduce a flexible framework relying on stochastic frame averaging (SFA) to make any model E(3)-equivariant or invariant through data transformations.
We prove the validity of our method theoretically and empirically demonstrate its superior accuracy and computational scalability in materials modeling.
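As a simplified sketch of the frame-averaging idea (not FAENet's exact construction), one can canonicalize the input with PCA axes and average a model over the residual sign ambiguity to obtain an approximately E(3)-invariant prediction:

```python
from itertools import product
import numpy as np

def pca_frame_average(model, pos):
    """Make a scalar model approximately E(3)-invariant by frame averaging.

    The frame is built from PCA of the centered positions; because each
    principal axis has an ambiguous sign, the model is averaged over all
    sign choices of the axes."""
    centered = pos - pos.mean(axis=0)
    _, eigvecs = np.linalg.eigh(centered.T @ centered)  # principal axes
    outputs = []
    for signs in product([-1.0, 1.0], repeat=pos.shape[1]):
        basis = eigvecs * np.array(signs)        # one element of the frame
        outputs.append(model(centered @ basis))  # canonicalized input
    return np.mean(outputs)

# Check: a deliberately non-invariant model gives the same output after rotation.
rng = np.random.default_rng(0)
pts = rng.standard_normal((5, 3))
model = lambda x: float(np.sum(x[:, 0] ** 3))
t = np.pi / 3
R = np.array([[np.cos(t), -np.sin(t), 0.0], [np.sin(t), np.cos(t), 0.0], [0.0, 0.0, 1.0]])
print(pca_frame_average(model, pts), pca_frame_average(model, pts @ R.T))
```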
arXiv Detail & Related papers (2023-04-28T21:48:31Z) - Distributional Learning of Variational AutoEncoder: Application to Synthetic Data Generation [0.7614628596146602]
We propose a new approach that expands the model capacity without sacrificing the computational advantages of the VAE framework.
Our VAE model's decoder is composed of an infinite mixture of asymmetric Laplace distributions.
We apply the proposed model to synthetic data generation; in particular, it demonstrates an advantage in easily adjusting the level of data privacy.
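For orientation, the asymmetric Laplace density such a decoder mixes is the distribution whose negative log-likelihood reduces, up to constants, to the pinball (quantile) loss; a minimal sketch, with parameter names chosen here for illustration:

```python
import numpy as np

def asymmetric_laplace_logpdf(y, mu, sigma, tau):
    """Log-density of the asymmetric Laplace distribution in its quantile
    parameterization: the negative log-likelihood is, up to constants, the
    pinball (quantile) loss rho_tau((y - mu) / sigma), which is why this
    distribution is suited to learning full conditional distributions."""
    u = (y - mu) / sigma
    pinball = u * (tau - (u < 0))  # rho_tau(u)
    return np.log(tau * (1 - tau) / sigma) - pinball

# Asymmetry: for tau = 0.9, deviations below mu are penalized more mildly
# than deviations above it, in the ratio (1 - tau) : tau.
print(asymmetric_laplace_logpdf(np.array([-1.0, 1.0]), mu=0.0, sigma=1.0, tau=0.9))
```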
arXiv Detail & Related papers (2023-02-22T11:26:50Z) - Lorentz group equivariant autoencoders [6.858459233149096]
We develop the Lorentz group autoencoder (LGAE), an autoencoder model equivariant with respect to the proper, orthochronous Lorentz group $\mathrm{SO}^{+}(3,1)$, with a latent space living in the representations of the group.
We present our architecture and several experimental results on jets at the LHC and find it outperforms graph and convolutional neural network baseline models on several compression, reconstruction, and anomaly detection metrics.
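The symmetry in question can be made concrete with the Minkowski inner product, which is what an $\mathrm{SO}^{+}(3,1)$-equivariant model must respect; a small sketch, independent of the LGAE architecture itself:

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+, -, -, -)

def minkowski_dot(p, q):
    """Lorentz-invariant inner product of four-vectors (E, px, py, pz)."""
    return p @ ETA @ q

def boost_x(rapidity):
    """A proper, orthochronous Lorentz transformation: a boost along x."""
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = ch
    L[0, 1] = L[1, 0] = -sh
    return L

p = np.array([5.0, 1.0, 2.0, 3.0])
L = boost_x(0.7)
# The invariant mass squared is unchanged under the boost; this is the kind
# of quantity an equivariant latent space preserves for invariant observables.
print(minkowski_dot(p, p), minkowski_dot(L @ p, L @ p))
```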
arXiv Detail & Related papers (2022-12-14T17:19:46Z) - Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws abide by symmetry, which is a vital inductive bias accounting for model generalization.
Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z) - The Lie Derivative for Measuring Learned Equivariance [84.29366874540217]
We study the equivariance properties of hundreds of pretrained models, spanning CNNs, transformers, and Mixer architectures.
We find that many violations of equivariance can be linked to spatial aliasing in ubiquitous network layers, such as pointwise non-linearities.
Notably, transformers can be more equivariant than convolutional neural networks after training.
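A simplified, invariance-only sketch of the Lie-derivative idea (the paper's metric also transforms the output to measure full equivariance): differentiate the model's prediction along a one-parameter family of input transformations and read off the violation.

```python
import numpy as np

def lie_derivative_invariance(f, x, flow, eps=1e-3):
    """Finite-difference Lie derivative of f along a one-parameter group
    of input transformations flow(x, t). For an exactly invariant f the
    result is zero; its magnitude measures the local symmetry violation."""
    return (f(flow(x, eps)) - f(flow(x, -eps))) / (2 * eps)

def rotate(x, t):
    """Toy flow: planar rotation of a 2D point by angle t."""
    c, s = np.cos(t), np.sin(t)
    return x @ np.array([[c, -s], [s, c]]).T

x = np.array([1.0, 2.0])
f_invariant = lambda x: float(x @ x)  # |x|^2 is rotation-invariant
f_broken = lambda x: float(x[0])      # a coordinate is not
print(lie_derivative_invariance(f_invariant, x, rotate))  # ~0
print(lie_derivative_invariance(f_broken, x, rotate))     # ~ -x[1] = -2
```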
arXiv Detail & Related papers (2022-10-06T15:20:55Z) - Automated Dissipation Control for Turbulence Simulation with Shell Models [1.675857332621569]
The application of machine learning (ML) techniques, especially neural networks, has seen tremendous success at processing images and language.
In this work we construct a strongly simplified representation of turbulence by using the Gledzer-Ohkitani-Yamada shell model.
We propose an approach that aims to reconstruct statistical properties of turbulence such as the self-similar inertial-range scaling.
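A minimal sketch of how such a statistical property can be checked, assuming shell wavenumbers k_n = 2^n and r.m.s. shell amplitudes from a simulation; the synthetic data below simply has the expected Kolmogorov slope built in, it is not a GOY simulation.

```python
import numpy as np

def inertial_range_exponent(k, u_rms, lo, hi):
    """Fit the self-similar inertial-range scaling |u_n| ~ k_n^alpha by
    linear regression in log-log space over shells lo..hi; Kolmogorov
    phenomenology predicts alpha close to -1/3 for shell models."""
    slope, _ = np.polyfit(np.log(k[lo:hi]), np.log(u_rms[lo:hi]), 1)
    return slope

n = np.arange(20)
k = 2.0 ** n
u = k ** (-1.0 / 3.0)  # synthetic amplitudes with the expected scaling
print(inertial_range_exponent(k, u, 4, 16))  # ~ -0.333
```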
arXiv Detail & Related papers (2022-01-07T15:03:52Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
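A generic illustration of the scalarization/vectorization pattern (not EVFN's specific basis): compute rotation-invariant scalars from the input vectors, pass them through an arbitrary network, and use the outputs as coefficients on the original vectors so the result transforms equivariantly.

```python
import numpy as np

def equivariant_vector_layer(vectors, mlp):
    """Equivariance via scalarization followed by vectorization.

    vectors: (m, d) array of input vectors. Rotation-invariant scalars
    (the Gram matrix) go through an arbitrary function mlp, whose outputs
    are used as coefficients on the original vectors, so the final output
    rotates exactly as the inputs do."""
    gram = vectors @ vectors.T   # invariant scalars
    coeffs = mlp(gram.ravel())   # (m,) coefficients from any network
    return coeffs @ vectors      # equivariant linear combination

# Equivariance check under a random orthogonal Q:
# layer(v @ Q.T) equals layer(v) @ Q.T.
rng = np.random.default_rng(0)
v = rng.standard_normal((3, 3))
mlp = lambda s: np.tanh(s[:3])  # stand-in for a learned network
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
print(equivariant_vector_layer(v @ Q.T, mlp))
print(equivariant_vector_layer(v, mlp) @ Q.T)
```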
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.