Exact conservation laws for neural network integrators of dynamical
systems
- URL: http://arxiv.org/abs/2209.11661v2
- Date: Sun, 14 May 2023 12:36:33 GMT
- Title: Exact conservation laws for neural network integrators of dynamical
systems
- Authors: Eike Hermann Müller
- Abstract summary: We present an approach which uses Noether's Theorem to inherently incorporate conservation laws into the architecture of the neural network.
We demonstrate that this leads to better predictions for three model systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The solution of time-dependent differential equations with neural networks
has attracted a lot of attention recently. The central idea is to learn the
laws that govern the evolution of the solution from data, which might be
polluted with random noise. However, in contrast to other machine learning
applications, usually a lot is known about the system at hand. For example, for
many dynamical systems physical quantities such as energy or (angular) momentum
are exactly conserved. Hence, the neural network has to learn these
conservation laws from data and they will only be satisfied approximately due
to finite training time and random noise. In this paper we present an
alternative approach which uses Noether's Theorem to inherently incorporate
conservation laws into the architecture of the neural network. We demonstrate
that this leads to better predictions for three model systems: the motion of a
non-relativistic particle in a three-dimensional Newtonian gravitational
potential, the motion of a massive relativistic particle in the Schwarzschild
metric and a system of two interacting particles in four dimensions.
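The abstract does not spell out the construction, but the underlying mechanism can be illustrated with a minimal sketch: if a learned, separable Hamiltonian depends on position q and momentum p only through the rotation invariants |q|^2 and |p|^2, rotational symmetry makes angular momentum a Noether charge of the dynamics, and a Störmer-Verlet step then preserves it exactly for any choice of network weights. The sketch below (JAX) illustrates this general principle only; the separable ansatz and all function names are assumptions, not the paper's architecture.

```python
# Minimal sketch (illustrative, not the paper's implementation): a learned
# Hamiltonian that depends only on the rotation invariants |q|^2 and |p|^2,
# integrated with a Stormer-Verlet step so that angular momentum q x p is
# conserved exactly rather than only approximately learned from data.
import jax
import jax.numpy as jnp

def init_scalar_mlp(key, hidden=16):
    """Parameters of a tiny MLP mapping one scalar invariant to a scalar."""
    k1, k2 = jax.random.split(key)
    return {
        "w1": 0.1 * jax.random.normal(k1, (hidden,)),
        "b1": jnp.zeros(hidden),
        "w2": 0.1 * jax.random.normal(k2, (hidden,)),
        "b2": 0.0,
    }

def mlp_scalar(params, x):
    """Scalar-to-scalar MLP with one tanh hidden layer."""
    h = jnp.tanh(x * params["w1"] + params["b1"])
    return jnp.dot(h, params["w2"]) + params["b2"]

def hamiltonian(params, q, p):
    """Separable H = T(|p|^2) + V(|q|^2): invariant under rotations of (q, p),
    so angular momentum L = q x p is the associated Noether charge."""
    return mlp_scalar(params["T"], p @ p) + mlp_scalar(params["V"], q @ q)

@jax.jit
def leapfrog_step(params, q, p, dt=1e-2):
    """Stormer-Verlet update using autodiff gradients of the learned H."""
    grad_q = jax.grad(hamiltonian, argnums=1)   # dH/dq, parallel to q here
    grad_p = jax.grad(hamiltonian, argnums=2)   # dH/dp, parallel to p here
    p = p - 0.5 * dt * grad_q(params, q, p)     # kick: leaves q x p unchanged
    q = q + dt * grad_p(params, q, p)           # drift: leaves q x p unchanged
    p = p - 0.5 * dt * grad_q(params, q, p)     # kick
    return q, p

# Usage: roll out a trajectory and check that q x p does not drift.
key = jax.random.PRNGKey(0)
kT, kV = jax.random.split(key)
params = {"T": init_scalar_mlp(kT), "V": init_scalar_mlp(kV)}
q = jnp.array([1.0, 0.0, 0.0])
p = jnp.array([0.0, 1.0, 0.0])
L0 = jnp.cross(q, p)
for _ in range(1000):
    q, p = leapfrog_step(params, q, p)
print("drift in angular momentum:", jnp.linalg.norm(jnp.cross(q, p) - L0))
```

The conservation is structural: each kick changes p only along q and each drift changes q only along p, so the cross product q x p is unchanged sub-step by sub-step, independent of training quality or noise in the data.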
Related papers
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z) - Equivariant Graph Neural Operator for Modeling 3D Dynamics [148.98826858078556]
We propose the Equivariant Graph Neural Operator (EGNO) to directly model dynamics as trajectories instead of just next-step prediction.
EGNO explicitly learns the temporal evolution of 3D dynamics where we formulate the dynamics as a function over time and learn neural operators to approximate it.
Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods.
arXiv Detail & Related papers (2024-01-19T21:50:32Z) - Neural network approach to quasiparticle dispersions in doped
antiferromagnets [0.0]
We study the ability of neural quantum states to represent the bosonic and fermionic $t-J$ model on different 1D and 2D lattices.
We present a method to calculate dispersion relations from the neural network state representation.
arXiv Detail & Related papers (2023-10-12T17:59:33Z) - Discovering Symbolic Laws Directly from Trajectories with Hamiltonian
Graph Neural Networks [5.824034325431987]
We present a Hamiltonian graph neural network (HGNN) that learns the dynamics of systems directly from their trajectory.
We demonstrate the performance of HGNN on n-spring, n-pendulum, gravitational, and binary Lennard-Jones systems.
arXiv Detail & Related papers (2023-07-11T14:43:25Z) - Learning Neural Constitutive Laws From Motion Observations for
Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw), which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z) - Machine learning one-dimensional spinless trapped fermionic systems with
neural-network quantum states [1.6606527887256322]
We compute the ground-state properties of fully polarized, trapped, one-dimensional fermionic systems interacting through a Gaussian potential.
We use an antisymmetric artificial neural network, or neural quantum state, as an ansatz for the wavefunction.
We find very different ground states depending on the sign of the interaction.
arXiv Detail & Related papers (2023-04-10T17:36:52Z) - Discrete Lagrangian Neural Networks with Automatic Symmetry Discovery [3.06483729892265]
We introduce a framework to learn a discrete Lagrangian along with its symmetry group from discrete observations of motions.
The learning process does not restrict the form of the Lagrangian, does not require velocity or momentum observations or predictions and incorporates a cost term.
arXiv Detail & Related papers (2022-11-20T00:46:33Z) - Unifying physical systems' inductive biases in neural ODE using dynamics
constraints [0.0]
We provide a simple method that can be applied not just to energy-conserving systems but also to dissipative systems.
The proposed method does not require changing the neural network architecture and could form the basis for validating a novel idea.
arXiv Detail & Related papers (2022-08-03T14:33:35Z) - Physics Informed RNN-DCT Networks for Time-Dependent Partial
Differential Equations [62.81701992551728]
We present a physics-informed framework for solving time-dependent partial differential equations.
Our model utilizes discrete cosine transforms to encode spatial frequencies and recurrent neural networks to process the time evolution.
We show experimental results on the Taylor-Green vortex solution to the Navier-Stokes equations.
arXiv Detail & Related papers (2022-02-24T20:46:52Z) - Neural-Network Quantum States for Periodic Systems in Continuous Space [66.03977113919439]
We introduce a family of neural quantum states for the simulation of strongly interacting systems in the presence of periodicity.
For one-dimensional systems we find very precise estimations of the ground-state energies and the radial distribution functions of the particles.
In two dimensions we obtain good estimations of the ground-state energies, comparable to results obtained from more conventional methods.
arXiv Detail & Related papers (2021-12-22T15:27:30Z) - Parsimonious neural networks learn interpretable physical laws [77.34726150561087]
We propose parsimonious neural networks (PNNs) that combine neural networks with evolutionary optimization to find models that balance accuracy with parsimony.
The power and versatility of the approach is demonstrated by developing models for classical mechanics and for predicting the melting temperature of materials from fundamental properties.
arXiv Detail & Related papers (2020-05-08T16:15:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.