Emergent field theories from neural networks
- URL: http://arxiv.org/abs/2411.08138v1
- Date: Tue, 12 Nov 2024 19:30:18 GMT
- Title: Emergent field theories from neural networks
- Authors: Vitaly Vanchurin
- Abstract summary: We show that the Hamilton-Jacobi equations for position and momentum variables correspond to the equations governing the activation dynamics of non-trainable variables and the learning dynamics of trainable variables.
The duality is then applied to model various field theories using the activation and learning dynamics of neural networks.
- Score: 0.0
- Abstract: We establish a duality relation between Hamiltonian systems and neural network-based learning systems. We show that the Hamilton-Jacobi equations for position and momentum variables correspond to the equations governing the activation dynamics of non-trainable variables and the learning dynamics of trainable variables. The duality is then applied to model various field theories using the activation and learning dynamics of neural networks. For Klein-Gordon fields, the corresponding weight tensor is symmetric, while for Dirac fields, the weight tensor must contain an anti-symmetric tensor factor. The dynamical components of the weight and bias tensors correspond, respectively, to the temporal and spatial components of the gauge field.
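For orientation, the objects in the claimed duality can be written out (standard equations, in our notation; the exact identification is made in the paper). The Hamilton-Jacobi equation and its characteristic equations for position and momentum are

```latex
\partial_t S + H\!\left(q, \nabla_q S, t\right) = 0, \qquad
\dot{q} = \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial q}, \qquad p = \nabla_q S,
```

and the abstract's claim is that, under the duality, the position equation takes the form of the activation dynamics of the non-trainable variables, while the momentum equation takes the form of the learning dynamics of the trainable variables.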
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
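The defining property of such layers can be phrased as a unit test: applying a lattice symmetry before the layer must agree with applying it after. A minimal sketch of that check (generic layer and a 90-degree rotation standing in for a lattice symmetry; LENN internals are not reproduced here):

```python
import numpy as np

def rotate90(field):
    # One lattice symmetry of a square grid: 90-degree rotation.
    return np.rot90(field)

def is_equivariant(layer, field, g=rotate90, tol=1e-12):
    # Equivariance: layer(g(x)) == g(layer(x)) for every symmetry g.
    return np.allclose(layer(g(field)), g(layer(field)), atol=tol)

def average_layer(field):
    # A symmetric nearest-neighbour stencil (periodic boundaries) is
    # equivariant under the square lattice's rotations and reflections.
    return 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                   + np.roll(field, 1, 1) + np.roll(field, -1, 1))

print(is_equivariant(average_layer, np.random.rand(16, 16)))  # True
```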
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases [66.61789780666727]
We show how second-order continuity can be incorporated into GNNs while preserving the equivariance property.
We also offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states.
Our model yields a significant improvement over the state-of-the-art baselines.
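Concretely, second-order continuity means the model predicts accelerations and states are advanced with a second-order integrator rather than a first-order one. A generic sketch (the GNN is abstracted to a function acc(x); this is not SEGNO's actual code):

```python
import numpy as np

def velocity_verlet_step(x, v, acc, dt):
    # Second-order update driven by a learned acceleration field.
    a = acc(x)
    x_new = x + v * dt + 0.5 * a * dt ** 2
    v_new = v + 0.5 * (a + acc(x_new)) * dt
    return x_new, v_new

acc = lambda x: -x                    # stand-in for a learned model
x, v = np.array([1.0]), np.array([0.0])
for _ in range(100):
    x, v = velocity_verlet_step(x, v, acc, dt=0.05)
```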
arXiv Detail & Related papers (2023-08-25T07:15:58Z)
- Pseudo-Hamiltonian neural networks for learning partial differential equations [0.0]
Pseudo-Hamiltonian neural networks (PHNN) were recently introduced for learning dynamical systems that can be modelled by ordinary differential equations.
In this paper, we extend the method to partial differential equations.
The resulting model comprises up to three neural networks, modelling terms representing conservation, dissipation, and external forces, together with discrete convolution operators that can either be learned or given as input.
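Read literally, the predicted time derivative is a sum of three separately parametrized terms. A hedged PyTorch-style sketch of that layout (module names and sizes are ours, not the authors'):

```python
import torch
import torch.nn as nn

class PseudoHamiltonianPDE(nn.Module):
    # du/dt ~ conservation term + dissipation term + external-force term,
    # each with its own small network; the convolution plays the role of
    # a (possibly learned) discretized spatial differential operator.
    def __init__(self, channels=1):
        super().__init__()
        self.diff_op = nn.Conv1d(channels, channels, 3, padding=1, bias=False)
        def block():
            return nn.Sequential(nn.Conv1d(channels, 16, 3, padding=1),
                                 nn.Tanh(), nn.Conv1d(16, channels, 1))
        self.conserve, self.dissipate, self.force = block(), block(), block()

    def forward(self, u):
        return self.diff_op(self.conserve(u)) + self.dissipate(u) + self.force(u)

model = PseudoHamiltonianPDE()
u = torch.randn(8, 1, 64)   # batch of 1D fields on a 64-point grid
dudt = model(u)             # predicted time derivative of each field
```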
arXiv Detail & Related papers (2023-04-27T17:46:00Z)
- Hamiltonian Neural Networks with Automatic Symmetry Detection [0.0]
Hamiltonian neural networks (HNN) have been introduced to incorporate prior physical knowledge.
We enhance HNN with a Lie algebra framework to detect and embed symmetries in the neural network.
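For reference, the core HNN recipe that the symmetry detection builds on: learn a scalar H(q, p) and read off the dynamics from its symplectic gradient. A minimal sketch with autograd (the standard HNN construction; the paper's Lie-algebra symmetry detection is not shown):

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    def __init__(self, dim=1):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(2 * dim, 64), nn.Tanh(),
                               nn.Linear(64, 1))

    def time_derivative(self, qp):
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
        qp = qp.requires_grad_(True)
        H = self.h(qp).sum()
        dHdq, dHdp = torch.autograd.grad(H, qp, create_graph=True)[0].chunk(2, -1)
        return torch.cat([dHdp, -dHdq], dim=-1)

model = HNN()
qp = torch.randn(32, 2)             # batch of (q, p) pairs
qp_dot = model.time_derivative(qp)  # fit this to observed derivatives
```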
arXiv Detail & Related papers (2023-01-19T07:34:57Z)
- Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws obey symmetries, which provide a vital inductive bias for model generalization.
Our model achieves, on average, over a 3% improvement in contact prediction accuracy across 8 scenarios on Physion and 2x lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z)
- Interrelation of equivariant Gaussian processes and convolutional neural networks [77.34726150561087]
There is currently a promising new trend in machine learning (ML) based on the relationship between neural networks (NNs) and Gaussian processes (GPs).
In this work we establish a relationship between the many-channel limit of CNNs equivariant with respect to the two-dimensional Euclidean group, with vector-valued neuron activations, and the corresponding independently introduced equivariant Gaussian processes.
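The NN-GP link can be seen numerically: at large width, a randomly initialized network defines a Gaussian process whose kernel is the average of products of outputs over initializations. A generic sketch for a plain one-hidden-layer network (the paper's equivariant CNN/GP construction is considerably more structured):

```python
import numpy as np

def empirical_gp_kernel(x1, x2, width=4096, n_samples=200, seed=0):
    # Monte Carlo estimate of K(x1, x2) = E[f(x1) f(x2)] over random
    # Gaussian initializations of a one-hidden-layer tanh network.
    rng = np.random.default_rng(seed)
    d, acc = x1.shape[0], 0.0
    for _ in range(n_samples):
        W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(width, d))
        a = rng.normal(0.0, 1.0 / np.sqrt(width), size=width)
        acc += (a @ np.tanh(W @ x1)) * (a @ np.tanh(W @ x2))
    return acc / n_samples

x1, x2 = np.ones(8), np.linspace(-1.0, 1.0, 8)
print(empirical_gp_kernel(x1, x2))
```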
arXiv Detail & Related papers (2022-09-17T17:02:35Z)
- A duality connecting neural network and cosmological dynamics [0.0]
We show that the dynamics of neural networks trained with gradient descent and the dynamics of scalar fields in a flat, vacuum energy dominated Universe are structurally related.
This duality provides a framework for synergies between the two systems and for understanding and explaining neural network dynamics.
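A schematic of the structural relation, in our notation (the paper's precise dictionary may differ): a scalar field rolling in a vacuum-energy-dominated universe feels Hubble friction, which has the same shape as the continuum limit of gradient descent with momentum on a loss L,

```latex
\ddot{\phi} + 3H\,\dot{\phi} + \frac{\partial V}{\partial \phi} = 0
\quad\longleftrightarrow\quad
\ddot{\theta} + \mu\,\dot{\theta} + \frac{\partial L}{\partial \theta} = 0,
```

with the Hubble rate H playing the role of the friction coefficient mu and the potential V that of the loss.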
arXiv Detail & Related papers (2022-02-22T19:00:01Z)
- Towards a theory of quantum gravity from neural networks [0.0]
We show that the non-equilibrium dynamics of trainable variables can be described by the Madelung equations.
We argue that the Lorentz symmetries and curved space-time can emerge from the interplay between entropy production and entropy destruction due to learning.
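For context, the Madelung form referred to: writing the wavefunction as \psi = \sqrt{\rho}\, e^{iS/\hbar} turns the Schrödinger equation into a continuity equation plus a Hamilton-Jacobi equation with an extra quantum-potential term,

```latex
\partial_t \rho + \nabla \cdot \left( \rho\, \frac{\nabla S}{m} \right) = 0, \qquad
\partial_t S + \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = 0.
```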
arXiv Detail & Related papers (2021-10-28T12:39:01Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newtonian mechanics systems with both fully and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Lagrangian Neural Network with Differential Symmetries and Relational Inductive Bias [5.017136256232997]
We present a momentum-conserving Lagrangian neural network (MCLNN) that learns the Lagrangian of a system.
We also show that the model can generalize to systems of arbitrary size.
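The underlying Lagrangian-network recipe: learn a scalar L(q, qdot) and recover accelerations from the Euler-Lagrange equations by automatic differentiation. A minimal sketch (the standard construction; MCLNN's momentum-conserving inductive bias is not reproduced):

```python
import torch
from torch.autograd.functional import hessian, jacobian

def lagrangian(q, qdot):
    # Stand-in for a learned network: unit-mass harmonic oscillator, L = T - V.
    return 0.5 * (qdot ** 2).sum() - 0.5 * (q ** 2).sum()

def acceleration(q, qdot):
    # Euler-Lagrange: qddot = (d2L/dqdot2)^{-1} (dL/dq - (d2L/dqdot dq) qdot).
    n = q.shape[0]
    H = hessian(lagrangian, (q, qdot))
    dL_dq = jacobian(lagrangian, (q, qdot))[0]
    A = H[1][1].reshape(n, n)   # d2L / dqdot dqdot
    B = H[1][0].reshape(n, n)   # d2L / dqdot dq
    return torch.linalg.solve(A, dL_dq - B @ qdot)

q, qdot = torch.tensor([1.0]), torch.tensor([0.0])
print(acceleration(q, qdot))    # ~ -q for the oscillator
```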
arXiv Detail & Related papers (2021-10-07T08:49:57Z)
- Lorentz Group Equivariant Neural Network for Particle Physics [58.56031187968692]
We present a neural network architecture that is fully equivariant with respect to transformations under the Lorentz group.
For classification tasks in particle physics, we demonstrate that such an equivariant architecture leads to drastically simpler models that have relatively few learnable parameters.
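One standard way to obtain exact Lorentz symmetry (used here only as an illustration, not necessarily this paper's exact featurization) is to feed the model Minkowski inner products of the particle four-momenta, which are invariant under every Lorentz transformation:

```python
import numpy as np

def minkowski_dots(p):
    # Pairwise inner products p_i . p_j with signature (+, -, -, -).
    eta = np.diag([1.0, -1.0, -1.0, -1.0])
    return p @ eta @ p.T

p = np.random.rand(4, 4)   # four particles' four-momenta (E, px, py, pz)
print(minkowski_dots(p))   # a Lorentz-invariant 4x4 feature matrix
```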
arXiv Detail & Related papers (2020-06-08T17:54:43Z)