Learning Flow Functions from Data with Applications to Nonlinear
Oscillators
- URL: http://arxiv.org/abs/2303.16656v2
- Date: Tue, 11 Apr 2023 13:07:21 GMT
- Authors: Miguel Aguiar, Amritam Das and Karl H. Johansson
- Abstract summary: We show that learning the flow function is equivalent to learning the input-to-state map of a discrete-time dynamical system.
This motivates the use of an RNN together with encoder and decoder networks which map the state of the system to the hidden state of the RNN and back.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We describe a recurrent neural network (RNN) based architecture to learn the
flow function of a causal, time-invariant and continuous-time control system
from trajectory data. By restricting the class of control inputs to piecewise
constant functions, we show that learning the flow function is equivalent to
learning the input-to-state map of a discrete-time dynamical system. This
motivates the use of an RNN together with encoder and decoder networks which
map the state of the system to the hidden state of the RNN and back. We show
that the proposed architecture is able to approximate the flow function by
exploiting the system's causality and time-invariance. The output of the
learned flow function model can be queried at any time instant. We
experimentally validate the proposed method using models of the Van der Pol and
FitzHugh-Nagumo oscillators. In both cases, the results demonstrate that the
architecture is able to closely reproduce the trajectories of these two
systems. For the Van der Pol oscillator, we further show that the trained model
generalises to the system's response with a prolonged prediction time horizon
as well as control inputs outside the training distribution. For the
FitzHugh-Nagumo oscillator, we show that the model accurately captures the
input-dependent phenomenon of excitability.
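The architecture described in the abstract can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the dimensions, weight shapes, and the choice to feed each segment's duration to the RNN alongside its constant input value are assumptions, and the weights are random rather than trained. It shows the data flow only: an encoder maps the initial state to the RNN hidden state, one RNN step is taken per piecewise-constant input segment, and a decoder maps back to the state space.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    # Random weights stand in for trained parameters (illustration only).
    return 0.1 * rng.standard_normal((n_in, n_out))

n_x, n_h, n_u = 2, 16, 1                 # state, hidden, input dims (assumed)
W_enc = dense(n_x, n_h)                  # encoder: system state -> hidden state
W_hh = dense(n_h, n_h)                   # RNN recurrence
W_vh = dense(n_u + 1, n_h)               # segment input value plus its duration
W_dec = dense(n_h, n_x)                  # decoder: hidden state -> system state

def flow(x0, u_segments, taus):
    """Approximate flow under a piecewise-constant input.

    x0         -- initial state, shape (n_x,)
    u_segments -- constant input value on each segment, shape (N, n_u)
    taus       -- duration of each segment, shape (N,)
    Returns the predicted state at the end of each segment.
    """
    h = np.tanh(x0 @ W_enc)              # encode the initial condition
    xs = []
    for u, tau in zip(u_segments, taus):
        v = np.concatenate([u, [tau]])   # one discrete-time "input" token
        h = np.tanh(h @ W_hh + v @ W_vh) # one RNN step per input segment
        xs.append(h @ W_dec)             # decode back to state space
    return np.array(xs)

x0 = np.array([1.0, 0.0])
u = np.zeros((5, 1))                     # five zero-input segments
xs = flow(x0, u, np.full(5, 0.1))
print(xs.shape)                          # (5, 2): one state per segment end
```

Passing the segment duration as an RNN input is one way to realise the abstract's claim that the learned flow model can be queried at any time instant: shrinking the final segment's duration queries the trajectory part-way through a step.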
Related papers
- KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter [49.85369344101118]
We introduce KFD-NeRF, a novel dynamic neural radiance field integrated with an efficient and high-quality motion reconstruction framework based on Kalman filtering.
Our key idea is to model the dynamic radiance field as a dynamic system whose temporally varying states are estimated based on two sources of knowledge: observations and predictions.
Our KFD-NeRF demonstrates similar or even superior performance at comparable computational cost, and achieves state-of-the-art view synthesis performance with thorough training.
arXiv Detail & Related papers (2024-07-18T05:48:24Z)
- From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems [20.006163951844357]
We propose a simulation-free framework for training neural ordinary differential equations (NODEs).
We employ Fourier analysis to estimate temporal and potentially high-order spatial gradients from noisy observational data.
Our approach outperforms state-of-the-art methods in terms of training time, dynamics prediction, and robustness.
arXiv Detail & Related papers (2024-05-19T13:15:23Z)
- Modeling Unknown Stochastic Dynamical System via Autoencoder [3.8769921482808116]
We present a numerical method to learn an accurate predictive model for an unknown dynamical system from its trajectory data.
It employs the idea of an autoencoder to identify the unobserved latent random variables.
It is also applicable to systems driven by non-Gaussian noises.
arXiv Detail & Related papers (2023-12-15T18:19:22Z)
- Forecasting subcritical cylinder wakes with Fourier Neural Operators [58.68996255635669]
We apply a state-of-the-art operator learning technique to forecast the temporal evolution of experimentally measured velocity fields.
We find that FNOs are capable of accurately predicting the evolution of experimental velocity fields throughout the range of Reynolds numbers tested.
arXiv Detail & Related papers (2023-01-19T20:04:36Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- Space-Time Graph Neural Networks [104.55175325870195]
We introduce the space-time graph neural network (ST-GNN) to jointly process the underlying space-time topology of time-varying network data.
Our analysis shows that small variations in the network topology and time evolution of a system do not significantly affect the performance of ST-GNNs.
arXiv Detail & Related papers (2021-10-06T16:08:44Z)
- Supervised DKRC with Images for Offline System Identification [77.34726150561087]
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns the basis functions of this representation via supervised learning.
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- Reconstructing a dynamical system and forecasting time series by self-consistent deep learning [4.947248396489835]
We introduce a self-consistent deep-learning framework for a noisy deterministic time series.
It provides unsupervised filtering, state-space reconstruction, identification of the underlying differential equations and forecasting.
arXiv Detail & Related papers (2021-08-04T06:10:58Z)
- Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies [15.2292571922932]
We propose a novel architecture for recurrent neural networks.
Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations.
Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks.
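The time-discretized second-order ODE behind this architecture can be sketched as follows. This is a toy, untrained illustration: the dimensions, random weights, and parameter values are assumptions, not the paper's settings. Each hidden unit behaves like a driven, damped oscillator, with a velocity variable `z` carried alongside the state `y`.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 8, 1                        # hidden and input dimensions (assumed)
W = 0.1 * rng.standard_normal((d, d))
Wz = 0.1 * rng.standard_normal((d, d))
V = 0.1 * rng.standard_normal((m, d))
b = np.zeros(d)
dt, gamma, eps = 0.1, 1.0, 1.0     # step size, frequency and damping terms

def cornn_step(y, z, u):
    # Discretization of the second-order ODE
    #   y'' = tanh(W y + Wz y' + V u + b) - gamma * y - eps * y',
    # written in first-order form with velocity z = y'.
    z = z + dt * (np.tanh(y @ W + z @ Wz + u @ V + b) - gamma * y - eps * z)
    y = y + dt * z
    return y, z

y = z = np.zeros(d)
for t in range(200):
    u = np.array([np.sin(0.1 * t)])  # a sinusoidal driving input
    y, z = cornn_step(y, z, u)
print(np.isfinite(y).all())          # the damped dynamics keep y bounded
```

The bounded nonlinearity and damping terms are what make the hidden state (and, in the paper's analysis, the gradients) stay well behaved over long sequences.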
arXiv Detail & Related papers (2020-10-02T12:35:04Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
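The "networks of linear first-order dynamical systems" idea can be illustrated with a toy cell. This is a hedged sketch under assumed dimensions and random weights, not the paper's trained networks: each unit is a leaky linear system whose effective time constant and equilibrium are modulated by a nonlinear, state- and input-dependent gate.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 4, 1                        # hidden and input dimensions (assumed)
tau = np.ones(d)                   # base time constants of the linear systems
A = rng.standard_normal(d)         # per-unit bias (equilibrium) vector
Wf = 0.5 * rng.standard_normal((d + m, d))

def ltc_step(x, u, dt=0.05):
    # Each unit follows dx/dt = -(1/tau + f) * x + f * A, where the
    # sigmoid gate f depends on the current state and input, so the
    # effective time constant 1/(1/tau + f) varies over time ("liquid").
    f = 1.0 / (1.0 + np.exp(-(np.concatenate([x, u]) @ Wf)))
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

x = np.zeros(d)
for t in range(100):
    x = ltc_step(x, np.array([np.sin(0.05 * t)]))
print(np.abs(x).max() <= np.abs(A).max())  # state stays within the bias range
```

Because the gate satisfies 0 < f < 1 and the drift always pulls each unit toward a point between 0 and its bias A_i, the state remains bounded regardless of the input, which matches the stability claim in the summary above.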
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.