Unifying physical systems' inductive biases in neural ODE using dynamics constraints
- URL: http://arxiv.org/abs/2208.02632v1
- Date: Wed, 3 Aug 2022 14:33:35 GMT
- Title: Unifying physical systems' inductive biases in neural ODE using dynamics constraints
- Authors: Yi Heng Lim, Muhammad Firmansyah Kasim
- Abstract summary: We provide a simple method that could be applied to not just energy-conserving systems, but also dissipative systems.
The proposed method does not require changing the neural network architecture and could serve as a basis for validating novel ideas.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conservation of energy is at the core of many physical phenomena and
dynamical systems. There have been a significant number of works in the past
few years aimed at predicting the trajectory of motion of dynamical systems
using neural networks while adhering to the law of conservation of energy. Most
of these works are inspired by classical mechanics such as Hamiltonian and
Lagrangian mechanics as well as Neural Ordinary Differential Equations. While
these works have been shown to work well in their respective domains,
there is a lack of a unifying method that is more generally applicable without
requiring significant changes to the neural network architectures. In this
work, we aim to address this issue by providing a simple method that could be
applied to not just energy-conserving systems, but also dissipative systems, by
including a different inductive bias in different cases in the form of a
regularisation term in the loss function. The proposed method does not require
changing the neural network architecture and could serve as a basis for
validating novel ideas, showing promise for accelerating research in this
direction.
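The abstract describes the method only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the general idea: an unmodified MLP models dx/dt, and a regularisation term added to the trajectory-matching loss penalises the energy rate dE/dt implied by the learned vector field (pushed towards zero for conservative systems, or towards non-positive values for dissipative ones). The pendulum-style energy function, the RK4 rollout, the weight lam, and the exact penalty forms are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch (not the authors' code): a plain neural ODE whose
# training loss adds a regularisation term penalising violations of the expected
# energy behaviour along the predicted trajectory. The energy function, RK4
# rollout, and penalty forms below are illustrative assumptions.
import torch
import torch.nn as nn

class Dynamics(nn.Module):
    """Unmodified MLP approximating dx/dt = f(x); no special architecture."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return self.net(x)

def rk4_step(f, x, dt):
    """One classical Runge-Kutta step for the autonomous ODE dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def energy(x):
    """Assumed known energy of the state x = (q, p); here a pendulum-like E."""
    q, p = x[..., 0], x[..., 1]
    return 0.5 * p ** 2 + (1.0 - torch.cos(q))

def loss_fn(f, x0, x_true, dt, lam=1.0, dissipative=False):
    """Trajectory-matching loss plus a dynamics-constraint regulariser."""
    preds, x = [], x0
    for _ in range(x_true.shape[0]):
        x = rk4_step(f, x, dt)
        preds.append(x)
    pred = torch.stack(preds)                    # (T, batch, dim)
    data_loss = ((pred - x_true) ** 2).mean()

    # dE/dt implied by the learned vector field: grad E(x) . f(x)
    x_flat = pred.reshape(-1, pred.shape[-1]).detach().requires_grad_(True)
    dE_dx = torch.autograd.grad(energy(x_flat).sum(), x_flat, create_graph=True)[0]
    dE_dt = (dE_dx * f(x_flat)).sum(dim=-1)

    if dissipative:
        reg = torch.relu(dE_dt).pow(2).mean()    # energy must not increase
    else:
        reg = dE_dt.pow(2).mean()                # energy must stay constant
    return data_loss + lam * reg
```

Because the constraint lives entirely in the loss, the same network and training loop can be reused for conservative and dissipative systems by switching the penalty.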
Related papers
- TANGO: Time-Reversal Latent GraphODE for Multi-Agent Dynamical Systems [43.39754726042369]
We propose a simple yet effective self-supervised regularization term as a soft constraint that aligns the forward and backward trajectories predicted by a continuous graph neural network-based ordinary differential equation (GraphODE).
It effectively imposes time-reversal symmetry to enable more accurate model predictions across a wider range of dynamical systems under classical mechanics.
Experimental results on a variety of physical systems demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2023-10-10T08:52:16Z)
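The entry above describes a soft constraint that aligns forward and backward trajectories. Below is a generic, hypothetical sketch of such a time-reversal consistency penalty in PyTorch; it is not TANGO's exact formulation (which operates on a latent GraphODE over interacting agents), and the Euler rollout and mean-squared mismatch are assumptions for illustration.

```python
# Hypothetical sketch of a time-reversal consistency penalty (not TANGO's exact
# formulation): roll the model forward, roll it backward from the end state with
# the negated vector field, and penalise the mismatch as a soft constraint.
import torch

def euler_step(f, x, dt):
    return x + dt * f(x)

def time_reversal_penalty(f, x0, dt, steps):
    fwd = [x0]
    for _ in range(steps):
        fwd.append(euler_step(f, fwd[-1], dt))                 # forward in time

    bwd = [fwd[-1]]
    for _ in range(steps):
        bwd.append(euler_step(lambda x: -f(x), bwd[-1], dt))   # backward in time

    fwd = torch.stack(fwd)             # (steps+1, batch, dim)
    bwd = torch.stack(bwd[::-1])       # reverse so indices align with fwd
    return ((fwd - bwd) ** 2).mean()   # added to the data loss with some weight
```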
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Gradient-Enhanced Physics-Informed Neural Networks for Power Systems Operational Support [36.96271320953622]
This paper introduces a machine learning method to approximate the behavior of power systems dynamics in near real time.
The proposed framework is based on gradient-enhanced physics-informed neural networks (gPINNs) and encodes the underlying physical laws governing power systems.
arXiv Detail & Related papers (2022-06-21T17:56:55Z)
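Gradient-enhanced PINNs augment the usual residual loss with the gradient of that residual with respect to the inputs. The sketch below illustrates this on a toy equation u'(x) + u(x) = 0, not on the power-system model of the entry above; the network size, collocation points, and weight w_grad are assumptions, and boundary-condition terms are omitted.

```python
# Generic gPINN-style loss on a toy ODE u'(x) + u(x) = 0 (not the power-system
# model from the entry above): penalise the residual AND its gradient w.r.t. x.
# Boundary/initial-condition terms are omitted for brevity.
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def gpinn_loss(x, w_grad=0.1):
    x = x.clone().requires_grad_(True)
    u = u_net(x)
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = du_dx + u                                    # r(x) = u'(x) + u(x)
    dr_dx = torch.autograd.grad(residual.sum(), x, create_graph=True)[0]
    # standard PINN term plus the gradient-enhanced term
    return (residual ** 2).mean() + w_grad * (dr_dx ** 2).mean()

# Example: evaluate the loss on 50 collocation points in [0, 1]
loss = gpinn_loss(torch.linspace(0.0, 1.0, 50).unsqueeze(-1))
```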
- Physics Informed RNN-DCT Networks for Time-Dependent Partial Differential Equations [62.81701992551728]
We present a physics-informed framework for solving time-dependent partial differential equations.
Our model utilizes discrete cosine transforms to encode spatial information and recurrent neural networks to model the time evolution.
We show experimental results on the Taylor-Green vortex solution to the Navier-Stokes equations.
arXiv Detail & Related papers (2022-02-24T20:46:52Z)
- Decimation technique for open quantum systems: a case study with driven-dissipative bosonic chains [62.997667081978825]
Unavoidable coupling of quantum systems to external degrees of freedom leads to dissipative (non-unitary) dynamics.
We introduce a method to deal with these systems based on the calculation of (dissipative) lattice Green's function.
We illustrate the power of this method with several examples of driven-dissipative bosonic chains of increasing complexity.
arXiv Detail & Related papers (2022-02-15T19:00:09Z)
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Port-Hamiltonian Neural Networks for Learning Explicit Time-Dependent Dynamical Systems [2.6084034060847894]
Accurately learning the temporal behavior of dynamical systems requires models with well-chosen learning biases.
Recent innovations embed the Hamiltonian and Lagrangian formalisms into neural networks.
We show that the proposed port-Hamiltonian neural network can efficiently learn the dynamics of nonlinear physical systems of practical interest.
arXiv Detail & Related papers (2021-07-16T17:31:54Z)
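The port-Hamiltonian form referenced above is dx/dt = (J - R) ∇H(x) + G u, with J skew-symmetric and R positive semi-definite. The following is a hypothetical minimal parameterisation of that structure in PyTorch, not the paper's code: H is a learned scalar network, and J, R, G are learned matrices constrained to the required form by construction.

```python
# Hypothetical minimal port-Hamiltonian parameterisation (not the paper's code):
# dx/dt = (J - R) grad H(x) + G u, with J skew-symmetric and R positive
# semi-definite by construction.
import torch
import torch.nn as nn

class PortHamiltonian(nn.Module):
    def __init__(self, dim, in_dim, hidden=64):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))         # scalar Hamiltonian
        self.A = nn.Parameter(torch.randn(dim, dim) * 0.1)   # builds J = A - A^T
        self.B = nn.Parameter(torch.randn(dim, dim) * 0.1)   # builds R = B B^T
        self.G = nn.Parameter(torch.randn(dim, in_dim) * 0.1)

    def forward(self, x, u):
        if not x.requires_grad:          # leaf input: enable grad to get grad H
            x = x.requires_grad_(True)
        dH_dx = torch.autograd.grad(self.H(x).sum(), x, create_graph=True)[0]
        J = self.A - self.A.T            # skew-symmetric interconnection
        R = self.B @ self.B.T            # positive semi-definite dissipation
        return dH_dx @ (J - R).T + u @ self.G.T
```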
- Forced Variational Integrator Networks for Prediction and Control of Mechanical Systems [7.538482310185133]
We show that the forced variational integrator network (FVIN) architecture allows us to accurately account for energy dissipation and external forcing.
This can result in highly data-efficient model-based control and accurate prediction on real non-conservative systems.
arXiv Detail & Related papers (2021-06-05T21:39:09Z)
- Approximation Bounds for Random Neural Networks and Reservoir Systems [8.143750358586072]
This work studies approximation based on single-hidden-layer feedforward and recurrent neural networks with randomly generated internal weights.
In particular, this proves that echo state networks with randomly generated weights are capable of approximating a wide class of dynamical systems arbitrarily well.
arXiv Detail & Related papers (2020-02-14T09:43:28Z)
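For context on the last entry: an echo state network keeps randomly generated, fixed internal weights and trains only a linear readout. The sketch below is a generic, hypothetical ESN with a ridge-regression readout; the reservoir size, spectral radius, and regularisation strength are arbitrary illustrative choices, not the constructions analysed in the paper.

```python
# Generic echo state network sketch (illustrative, not the paper's construction):
# internal weights are random and fixed; only the linear readout is trained,
# here by ridge regression.
import torch

def run_reservoir(u, n_res=200, rho=0.9, seed=0):
    """u: (T, in_dim) input sequence -> (T, n_res) reservoir states."""
    g = torch.Generator().manual_seed(seed)
    W_in = torch.randn(n_res, u.shape[1], generator=g) * 0.5
    W = torch.randn(n_res, n_res, generator=g)
    W *= rho / torch.linalg.eigvals(W).abs().max()   # scale spectral radius
    x, states = torch.zeros(n_res), []
    for u_t in u:
        x = torch.tanh(W @ x + W_in @ u_t)
        states.append(x)
    return torch.stack(states)

def train_readout(states, targets, reg=1e-4):
    """Ridge regression: W_out = (X^T X + reg I)^{-1} X^T Y."""
    X, Y = states, targets
    A = X.T @ X + reg * torch.eye(X.shape[1])
    return torch.linalg.solve(A, X.T @ Y)            # (n_res, out_dim)
```

Predictions are then obtained as states @ W_out.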
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.