Benchmarking Energy-Conserving Neural Networks for Learning Dynamics from Data
- URL: http://arxiv.org/abs/2012.02334v6
- Date: Fri, 28 Apr 2023 21:26:45 GMT
- Title: Benchmarking Energy-Conserving Neural Networks for Learning Dynamics from Data
- Authors: Yaofeng Desmond Zhong, Biswadip Dey, Amit Chakraborty
- Abstract summary: We survey ten recently proposed energy-conserving neural network models, including HNN, LNN, DeLaN, SymODEN, CHNN, CLNN and their variants.
We point out the possibility of leveraging some of these energy-conserving models to design energy-based controllers.
- Score: 9.811643357656196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The last few years have witnessed an increased interest in incorporating
physics-informed inductive bias in deep learning frameworks. In particular, a
growing volume of literature has been exploring ways to enforce energy
conservation while using neural networks for learning dynamics from observed
time-series data. In this work, we survey ten recently proposed
energy-conserving neural network models, including HNN, LNN, DeLaN, SymODEN,
CHNN, CLNN and their variants. We provide a compact derivation of the theory
behind these models and explain their similarities and differences. Their
performance is compared on four physical systems. We point out the possibility of
leveraging some of these energy-conserving models to design energy-based
controllers.
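To make the core idea shared by the HNN-style models concrete, here is a minimal sketch in JAX (not the authors' code): an MLP approximates the scalar energy H(q, p), and the state derivatives are read off from its symplectic gradient.
```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(2, 64, 64, 1)):
    """Random MLP parameters; input is the state (q, p), output a scalar."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def hamiltonian(params, state):
    """Learned scalar energy H(q, p)."""
    h = state
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b).squeeze()

def dynamics(params, state):
    """Symplectic gradient: dq/dt = dH/dp, dp/dt = -dH/dq."""
    dH = jax.grad(hamiltonian, argnums=1)(params, state)
    dHdq, dHdp = jnp.split(dH, 2)
    return jnp.concatenate([dHdp, -dHdq])

def loss(params, states, state_dots):
    """Match the symplectic vector field to observed time derivatives."""
    pred = jax.vmap(lambda s: dynamics(params, s))(states)
    return jnp.mean((pred - state_dots) ** 2)
```
Because the learned vector field is a symplectic gradient by construction, the continuous-time flow conserves the learned H exactly; the surveyed variants differ mainly in the choice of coordinates (constrained Cartesian vs. generalized) and in whether the Hamiltonian or the Lagrangian is parameterized.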
Related papers
- Deep State Space Recurrent Neural Networks for Time Series Forecasting [0.0]
This paper introduces a novel neural network framework that blends the principles of econometric state space models with the dynamic capabilities of recurrent neural networks (RNNs).
According to the results, TKANs, inspired by Kolmogorov-Arnold Networks (KANs) and LSTM, demonstrate promising outcomes.
arXiv Detail & Related papers (2024-07-21T17:59:27Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Neural Operators Meet Energy-based Theory: Operator Learning for
Hamiltonian and Dissipative PDEs [35.70739067374375]
This paper proposes Energy-consistent Neural Operators (ENOs) for learning solution operators of partial differential equations.
ENOs follow the energy conservation or dissipation law inferred from observed solution trajectories.
For training, we introduce a novel penalty function inspired by energy-based physical theory, in which the energy functional is modeled by another DNN.
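The exact penalty is specified in the paper; as a hedged sketch of the general mechanism, one can penalize the drift of a learned energy along observed trajectories, computed by the chain rule (`energy_fn` and its parameters are illustrative names):
```python
import jax
import jax.numpy as jnp

def energy_drift(E_params, energy_fn, u, u_dot):
    """dE/dt along a trajectory via the chain rule: <dE/du, du/dt>."""
    dEdu = jax.grad(energy_fn, argnums=1)(E_params, u)
    return jnp.vdot(dEdu, u_dot)

def energy_penalty(E_params, energy_fn, us, u_dots, dissipative=False):
    """Penalize nonzero drift for conservative systems, or positive drift
    for dissipative ones. An assumed form, not the paper's exact penalty."""
    drift = jax.vmap(lambda u, ud: energy_drift(E_params, energy_fn, u, ud))(us, u_dots)
    if dissipative:
        return jnp.mean(jnp.maximum(drift, 0.0) ** 2)  # energy must not grow
    return jnp.mean(drift ** 2)                        # energy must be constant
```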
arXiv Detail & Related papers (2024-02-14T08:50:14Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
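ConCerNet's published loss is more elaborate; the sketch below only illustrates the contrastive intuition: a learned scalar g should be constant within each trajectory (a candidate conserved quantity) yet vary across trajectories, so it does not collapse to a trivial constant. The function g and the weighting are assumptions.
```python
import jax
import jax.numpy as jnp

def conservation_contrastive_loss(params, g, trajectories):
    """trajectories: array of shape (num_traj, num_steps, state_dim).
    Pull g(x) together within each trajectory and push trajectory-level
    means apart. A sketch of the idea, not ConCerNet's loss."""
    vals = jax.vmap(jax.vmap(lambda x: g(params, x)))(trajectories)
    within = jnp.mean(jnp.var(vals, axis=1))   # small when g is conserved
    across = jnp.var(jnp.mean(vals, axis=1))   # large when g discriminates
    return within - 0.1 * across
```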
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Unravelling the Performance of Physics-informed Graph Neural Networks
for Dynamical Systems [5.787429262238507]
We evaluate the performance of graph neural networks (GNNs) and their variants with explicit constraints and different architectures.
Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance.
All the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulate large-scale realistic systems.
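As a hedged illustration of the kinetic/potential decoupling bias (the names T and V are hypothetical; this is not the paper's GNN architecture): with a decoupled Lagrangian L = T(q_dot) - V(q), the mixed term of the Euler-Lagrange equations vanishes, leaving a particularly simple expression for the acceleration.
```python
import jax
import jax.numpy as jnp

def acceleration(T_params, V_params, T, V, q, q_dot):
    """With L = T(q_dot) - V(q), the Euler-Lagrange equations reduce to
    (d2T/dq_dot2) @ a = -dV/dq, since grad_q grad_q_dot L = 0."""
    M = jax.hessian(T, argnums=1)(T_params, q_dot)   # learned mass matrix
    f = -jax.grad(V, argnums=1)(V_params, q)         # learned conservative force
    return jnp.linalg.solve(M, f)

# Illustrative energies: quadratic kinetic term, pendulum-like potential.
T = lambda params, qd: 0.5 * params["m"] * jnp.sum(qd ** 2)
V = lambda params, q: jnp.sum(jnp.cos(q))
a = acceleration({"m": 2.0}, {}, T, V, jnp.array([0.3]), jnp.array([0.1]))
```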
arXiv Detail & Related papers (2022-11-10T12:29:30Z) - Learning Trajectories of Hamiltonian Systems with Neural Networks [81.38804205212425]
We propose to enhance Hamiltonian neural networks with an estimate of the continuous-time trajectory of the modeled system.
We demonstrate that the proposed integration scheme works well for HNNs, especially with low sampling rates and noisy, irregular observations.
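The paper's continuous-time estimation scheme is its own contribution; for orientation, here is a standard symplectic leapfrog step of the kind commonly paired with learned separable Hamiltonians (an illustration, not the proposed method):
```python
import jax
import jax.numpy as jnp

def leapfrog(grad_H, state, dt, steps):
    """Symplectic leapfrog for separable H(q, p) = T(p) + V(q).
    grad_H maps (q, p) to (dH/dq, dH/dp)."""
    q, p = state
    for _ in range(steps):
        dHdq, _ = grad_H((q, p))
        p = p - 0.5 * dt * dHdq      # half kick
        _, dHdp = grad_H((q, p))
        q = q + dt * dHdp            # drift
        dHdq, _ = grad_H((q, p))
        p = p - 0.5 * dt * dHdq      # half kick
    return q, p

# Harmonic oscillator H = (q^2 + p^2)/2; a learned H would be used instead.
grad_H = jax.grad(lambda s: 0.5 * (s[0] ** 2 + s[1] ** 2))
q, p = leapfrog(grad_H, (jnp.array(1.0), jnp.array(0.0)), 0.01, 100)
```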
arXiv Detail & Related papers (2022-04-11T13:25:45Z) - Learning Neural Hamiltonian Dynamics: A Methodological Overview [109.40968389896639]
Hamiltonian dynamics endows neural networks with accurate long-term prediction, interpretability, and data-efficient learning.
We systematically survey recently proposed Hamiltonian neural network models, with a special emphasis on methodologies.
arXiv Detail & Related papers (2022-02-28T22:54:39Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressiveness afforded by AI models.
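EINN's precise architecture is described in the paper; a generic, hedged sketch of coupling a mechanistic epidemic model with a neural component is to make the contact rate of an SIR model a learned function of time (`beta_net` and its parameters are illustrative, not the EINN design):
```python
import jax.numpy as jnp
from jax.experimental.ode import odeint

def make_rhs(beta_net):
    def rhs(state, t, params):
        """SIR dynamics with a neural, time-varying contact rate beta(t)."""
        S, I, R = state
        beta = jnp.exp(beta_net(params, jnp.array([t])))  # exp keeps beta > 0
        gamma = 0.1                                        # fixed recovery rate
        return jnp.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])
    return rhs

def predict(params, beta_net, state0, ts):
    return odeint(make_rhs(beta_net), state0, ts, params)

# Illustrative beta network: a single affine map of time.
beta_net = lambda params, t: (params["w"] * t + params["b"]).squeeze()
params = {"w": jnp.array(0.0), "b": jnp.log(0.3)}
traj = predict(params, beta_net, jnp.array([0.99, 0.01, 0.0]),
               jnp.linspace(0.0, 50.0, 100))
```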
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - On Energy-Based Models with Overparametrized Shallow Neural Networks [44.74000986284978]
Energy-based models (EBMs) are a powerful framework for generative modeling.
In this work we focus on shallow neural networks.
We show that models trained in the so-called "active" regime provide a statistical advantage over their associated "lazy" or kernel regime.
arXiv Detail & Related papers (2021-04-15T15:34:58Z) - Thermodynamics-based Artificial Neural Networks for constitutive
modeling [0.0]
We propose a new class of data-driven, physics-based neural networks for modeling strain-rate-independent processes at the material point level.
The two basic principles of thermodynamics are encoded in the network's architecture by taking advantage of automatic differentiation.
We demonstrate the wide applicability of TANNs for modeling elasto-plastic materials with strain hardening and strain softening.
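A hedged sketch of the encoding trick (the names psi and z are illustrative, not the paper's code): the first law enters by deriving stress from a learned free-energy potential through automatic differentiation, and the second law as a non-negativity condition on the dissipation rate that can be penalized during training.
```python
import jax
import jax.numpy as jnp

def stress(psi_params, psi, strain, z):
    """First law via autodiff: stress = d(psi)/d(strain) for a learned
    free-energy network psi(strain, internal variables z)."""
    return jax.grad(psi, argnums=1)(psi_params, strain, z)

def dissipation_rate(psi_params, psi, strain, z, z_dot):
    """Second law: D = -(d psi / d z) . z_dot must be non-negative; training
    can penalize max(-D, 0)^2. An illustrative encoding, not the paper's code."""
    dpsi_dz = jax.grad(psi, argnums=2)(psi_params, strain, z)
    return -jnp.vdot(dpsi_dz, z_dot)
```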
arXiv Detail & Related papers (2020-05-25T15:56:34Z)