Constants of motion network
- URL: http://arxiv.org/abs/2208.10387v2
- Date: Tue, 23 Aug 2022 00:52:04 GMT
- Title: Constants of motion network
- Authors: Muhammad Firmansyah Kasim, Yi Heng Lim
- Abstract summary: We present a neural network that can simultaneously learn the dynamics of the system and the constants of motion from data.
By exploiting the discovered constants of motion, it can produce better predictions on dynamics.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The beauty of physics is that there is usually a conserved quantity in an
always-changing system, known as the constant of motion. Finding the constant
of motion is important in understanding the dynamics of the system, but
typically requires mathematical proficiency and manual analytical work. In this
paper, we present a neural network that can simultaneously learn the dynamics
of the system and the constants of motion from data. By exploiting the
discovered constants of motion, it can produce better predictions on dynamics
and can work on a wider range of systems than Hamiltonian-based neural
networks. In addition, the training progress of our method can be used as an
indication of the number of constants of motion in a system, which could be
useful in studying a novel physical system.
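The abstract's central idea lends itself to a compact sketch: learn a candidate time-derivative of the state together with a set of scalar constants of motion, then constrain the predicted dynamics so those constants cannot change. The PyTorch code below is a minimal, hypothetical illustration of that idea under stated assumptions, not the authors' implementation; the class name ConstantsOfMotionNet, the network sizes, and the null-space projection via QR decomposition are illustrative choices rather than details taken from the paper.

```python
# Hypothetical sketch (illustrative only, not the authors' released code): one network
# outputs a candidate time-derivative of the state together with a set of scalar
# constants of motion, and the candidate derivative is projected onto the null space of
# the constants' Jacobian so that every learned constant is conserved along trajectories.
import torch
import torch.nn as nn


class ConstantsOfMotionNet(nn.Module):
    """Hypothetical joint model of dynamics and constants of motion."""

    def __init__(self, state_dim: int, n_constants: int, hidden: int = 64):
        super().__init__()
        self.state_dim = state_dim
        self.n_constants = n_constants
        # Shared trunk with two heads: candidate dynamics (state_dim) and constants (n_constants).
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim + n_constants),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        """Return ds/dt constrained so the learned constants of motion stay fixed."""
        s = s.detach().requires_grad_(True)            # (batch, state_dim)
        out = self.net(s)
        sdot_raw = out[..., : self.state_dim]          # candidate dynamics
        c = out[..., self.state_dim:]                  # learned constants of motion

        # Jacobian dc/ds, one gradient per constant (create_graph so training can backprop).
        grads = [
            torch.autograd.grad(c[..., i].sum(), s, create_graph=True)[0]
            for i in range(self.n_constants)
        ]
        J = torch.stack(grads, dim=-2)                 # (batch, n_constants, state_dim)

        # Orthonormal basis Q of the row space of J, then remove the component of the
        # candidate dynamics lying in that space, so dc/dt = J @ sdot = 0 by construction.
        Q, _ = torch.linalg.qr(J.transpose(-2, -1))    # (batch, state_dim, n_constants)
        proj = Q @ (Q.transpose(-2, -1) @ sdot_raw.unsqueeze(-1))
        return sdot_raw - proj.squeeze(-1)


# Example: fit the constrained dynamics to (state, ds/dt) pairs from trajectory data.
model = ConstantsOfMotionNet(state_dim=4, n_constants=2)
s_batch = torch.randn(32, 4)
sdot_batch = torch.randn(32, 4)                        # e.g. finite-difference estimates
loss = ((model(s_batch) - sdot_batch) ** 2).mean()
loss.backward()
```

By construction, the projection removes any component of the candidate derivative that would change a learned constant, which is one concrete way the discovered constants of motion can sharpen dynamics predictions; the paper's actual architecture, losses, and training procedure may differ from this sketch.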
Related papers
- Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z) - Unifying physical systems' inductive biases in neural ODE using dynamics constraints [0.0]
We provide a simple method that could be applied to not just energy-conserving systems, but also dissipative systems.
The proposed method does not require changing the neural network architecture and could form the basis to validate a novel idea.
arXiv Detail & Related papers (2022-08-03T14:33:35Z) - Learning Trajectories of Hamiltonian Systems with Neural Networks [81.38804205212425]
We propose to enhance Hamiltonian neural networks with an estimation of a continuous-time trajectory of the modeled system.
We demonstrate that the proposed integration scheme works well for HNNs, especially with low sampling rates and noisy, irregular observations.
arXiv Detail & Related papers (2022-04-11T13:25:45Z) - Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z) - Port-Hamiltonian Neural Networks for Learning Explicit Time-Dependent Dynamical Systems [2.6084034060847894]
Accurately learning the temporal behavior of dynamical systems requires models with well-chosen learning biases.
Recent innovations embed the Hamiltonian and Lagrangian formalisms into neural networks.
We show that the proposed port-Hamiltonian neural network can efficiently learn the dynamics of nonlinear physical systems of practical interest.
arXiv Detail & Related papers (2021-07-16T17:31:54Z) - Learning Dynamical Systems from Noisy Sensor Measurements using Multiple Shooting [11.771843031752269]
We introduce a generic and scalable method to learn latent representations of indirectly observed dynamical systems.
We achieve state-of-the-art performance on systems observed directly from raw images.
arXiv Detail & Related papers (2021-06-22T12:30:18Z) - Deep Learning of Quantum Many-Body Dynamics via Random Driving [0.0]
We show the power of deep learning to predict the dynamics of a quantum many-body system.
We show the network is able to extrapolate the dynamics to times longer than those it has been trained on.
arXiv Detail & Related papers (2021-05-01T22:46:42Z) - Measuring and modeling the motor system with machine learning [117.44028458220427]
The utility of machine learning in understanding the motor system promises a revolution in how we collect, measure, and analyze data.
We discuss the growing use of machine learning: from pose estimation, kinematic analyses, dimensionality reduction, and closed-loop feedback, to its use in understanding neural correlates and untangling sensorimotor systems.
arXiv Detail & Related papers (2021-03-22T12:42:16Z) - Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z) - Learning Continuous System Dynamics from Irregularly-Sampled Partial Observations [33.63818978256567]
We present LG-ODE, a latent ordinary differential equation generative model for modeling multi-agent dynamic systems with a known graph structure.
It can simultaneously learn the embedding of high dimensional trajectories and infer continuous latent system dynamics.
Our model employs a novel encoder parameterized by a graph neural network that can infer initial states in an unsupervised way.
arXiv Detail & Related papers (2020-11-08T01:02:22Z) - Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)