Artificial neural network as a universal model of nonlinear dynamical
systems
- URL: http://arxiv.org/abs/2104.05402v1
- Date: Sat, 6 Mar 2021 16:02:41 GMT
- Title: Artificial neural network as a universal model of nonlinear dynamical
systems
- Authors: Pavel V. Kuptsov, Anna V. Kuptsova, Nataliya V. Stankevich
- Abstract summary: The map is built as an artificial neural network whose weights encode a modeled system.
We consider the Lorenz system, the Roessler system and the Hindmarsh-Rose neuron.
High similarity is observed for visual images of attractors, power spectra, bifurcation diagrams and Lyapunov exponents.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We suggest a universal map capable of recovering the behavior of a wide
range of dynamical systems given by ODEs. The map is built as an artificial neural
network whose weights encode a modeled system. We assume that ODEs are known
and prepare training datasets using the equations directly without computing
numerical time series. Parameter variations are taken into account in the
course of training so that the network model captures bifurcation scenarios of
the modeled system. The theoretical benefit of this approach is that the
universal model admits common mathematical methods of analysis without the
need to develop a unique theory for each particular set of dynamical
equations. From the practical point of view, the developed method can be
considered as an alternative numerical method for solving dynamical ODEs,
suitable for running on contemporary neural-network-specific hardware. We
consider the Lorenz system, the Roessler system and the Hindmarsh-Rose
neuron. For these three examples the network model is created and its
dynamics are compared with ordinary numerical solutions. High similarity is
observed for visual images of attractors, power spectra, bifurcation
diagrams and Lyapunov exponents.
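
The abstract does not spell out the training pipeline, but its central idea, preparing the training set directly from the known equations (with a varied control parameter) rather than from simulated time series, can be sketched as follows. This minimal Python sketch uses the Lorenz system as the modeled ODE; the sampling ranges, the step size dt, the choice of rho as the varied parameter and the single Runge-Kutta step used as the target are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lorenz_rhs(state, sigma, rho, beta):
    """Right-hand side of the Lorenz system."""
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

def rk4_step(f, state, dt, *params):
    """One classical Runge-Kutta step; a stand-in for whatever discretization the paper uses."""
    k1 = f(state, *params)
    k2 = f(state + 0.5 * dt * k1, *params)
    k3 = f(state + 0.5 * dt * k2, *params)
    k4 = f(state + dt * k3, *params)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def make_training_pairs(n_samples=100_000, dt=0.01, seed=0):
    """Build (input, target) pairs straight from the equations: random phase-space
    points plus a varied control parameter, with no numerical time series involved."""
    rng = np.random.default_rng(seed)
    # Illustrative sampling box roughly covering the Lorenz attractor region.
    states = rng.uniform([-25.0, -25.0, 0.0], [25.0, 25.0, 50.0], size=(n_samples, 3))
    rhos = rng.uniform(20.0, 30.0, size=n_samples)   # parameter variation during training
    sigma, beta = 10.0, 8.0 / 3.0                    # kept fixed for simplicity
    inputs = np.column_stack([states, rhos])         # the network sees state + parameter
    targets = np.array([rk4_step(lorenz_rhs, s, dt, sigma, r, beta)
                        for s, r in zip(states, rhos)])
    return inputs, targets

if __name__ == "__main__":
    X, Y = make_training_pairs(n_samples=1000)
    print(X.shape, Y.shape)   # (1000, 4) (1000, 3)
```

A feed-forward network fitted to such (state, parameter) to next-state pairs then plays the role of the universal map; iterating it and comparing the result with ordinary numerical solutions is how the attractors, power spectra, bifurcation diagrams and Lyapunov exponents mentioned above are juxtaposed.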
Related papers
- Learning Governing Equations of Unobserved States in Dynamical Systems [0.0]
We employ a hybrid neural ODE structure to learn governing equations of partially-observed dynamical systems.
We demonstrate that the method is capable of successfully learning the true underlying governing equations of unobserved states within these systems.
arXiv Detail & Related papers (2024-04-29T10:28:14Z) - Systematic construction of continuous-time neural networks for linear dynamical systems [0.0]
We discuss a systematic approach to constructing neural architectures for modeling a subclass of dynamical systems.
We use a variant of continuous-time neural networks in which the output of each neuron evolves continuously as a solution of a first-order or second-order Ordinary Differential Equation (ODE).
Instead of deriving the network architecture and parameters from data, we propose a gradient-free algorithm to compute sparse architecture and network parameters directly from the given LTI system.
arXiv Detail & Related papers (2024-03-24T16:16:41Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Foundational Inference Models for Dynamical Systems [5.549794481031468]
We offer a fresh perspective on the classical problem of imputing missing time series data, whose underlying dynamics are assumed to be determined by ODEs.
We propose a novel supervised learning framework for zero-shot time series imputation, through parametric functions satisfying some (hidden) ODEs.
We empirically demonstrate that one and the same (pretrained) recognition model can perform zero-shot imputation across 63 distinct time series with missing values.
arXiv Detail & Related papers (2024-02-12T11:48:54Z) - Stretched and measured neural predictions of complex network dynamics [2.1024950052120417]
Data-driven approximations of differential equations present a promising alternative to traditional methods for uncovering a model of dynamical systems.
Neural networks are a machine learning tool recently employed for studying dynamics; they can be used for data-driven solution finding or discovery of differential equations.
We show that extending the model's generalizability beyond traditional statistical learning theory limits is feasible.
arXiv Detail & Related papers (2023-01-12T09:44:59Z) - Experimental study of Neural ODE training with adaptive solver for
dynamical systems modeling [72.84259710412293]
Some ODE solvers, called adaptive, can adapt their evaluation strategy to the complexity of the problem at hand.
This paper describes a simple set of experiments to show why adaptive solvers cannot be seamlessly leveraged as a black box for dynamical systems modelling (a minimal illustration of this adaptivity is sketched after this list).
arXiv Detail & Related papers (2022-11-13T17:48:04Z) - On the balance between the training time and interpretability of neural
ODE for time series modelling [77.34726150561087]
The paper shows that modern neural ODEs cannot be reduced to simpler models for time-series modelling applications.
The complexity of neural ODEs is comparable to, or exceeds, that of conventional time-series modelling tools.
We propose a new view on time-series modelling using combined neural networks and an ODE system approach.
arXiv Detail & Related papers (2022-06-07T13:49:40Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Constructing Neural Network-Based Models for Simulating Dynamical
Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z) - Cubature Kalman Filter Based Training of Hybrid Differential Equation
Recurrent Neural Network Physiological Dynamic Models [13.637931956861758]
We show how we can approximate missing ordinary differential equations with known ODEs using a neural network approximation.
Results indicate that this RBSE approach to training the NN parameters yields better outcomes (measurement/state estimation accuracy) than training the neural network with backpropagation.
arXiv Detail & Related papers (2021-10-12T15:38:13Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
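
As a brief aside on the adaptive-solver entry above: the adaptivity in question is easy to observe with an off-the-shelf integrator, since the number of right-hand-side evaluations changes as the error tolerance is tightened. The snippet below is a generic illustration using SciPy's solve_ivp on the Lorenz system and makes no claim about the experiments reported in that paper.

```python
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

y0 = [1.0, 1.0, 1.0]
for rtol in (1e-3, 1e-6, 1e-9):
    sol = solve_ivp(lorenz, (0.0, 50.0), y0, method="RK45", rtol=rtol, atol=1e-12)
    # nfev grows as the tolerance tightens: the adaptive solver changes its step
    # size, and hence its evaluation strategy, to keep the local error in check.
    print(f"rtol={rtol:g}: {sol.nfev} RHS evaluations, {len(sol.t)} time points")
```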
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.