Extended dynamic mode decomposition with dictionary learning using
neural ordinary differential equations
- URL: http://arxiv.org/abs/2110.01450v1
- Date: Fri, 1 Oct 2021 06:56:14 GMT
- Title: Extended dynamic mode decomposition with dictionary learning using
neural ordinary differential equations
- Authors: Hiroaki Terao, Sho Shirasaka and Hideyuki Suzuki
- Abstract summary: We propose an algorithm to perform extended dynamic mode decomposition using NODEs.
We show the superiority of the parameter efficiency of the proposed method through numerical experiments.
- Score: 0.8701566919381223
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nonlinear phenomena can be analyzed via linear techniques using
operator-theoretic approaches. Data-driven methods such as the extended
dynamic mode decomposition (EDMD) and its variants, which approximate the
Koopman operator associated with nonlinear phenomena, have been developing
rapidly by incorporating machine learning methods. Neural ordinary
differential equations (NODEs), neural networks equipped with a continuum
of layers, offer high parameter and memory efficiency. In
this paper, we propose an algorithm to perform EDMD using NODEs. NODEs are used
to find a parameter-efficient dictionary which provides a good
finite-dimensional approximation of the Koopman operator. We show the
superiority of the parameter efficiency of the proposed method through
numerical experiments.
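To make the EDMD step concrete, the following is a minimal sketch in Python (NumPy only) of EDMD with a small hand-picked monomial dictionary. The paper's contribution is to replace such a hand-picked dictionary with one parameterized and learned via a NODE; the function names and the toy system below are illustrative and are not taken from the paper.

import numpy as np

def dictionary(x):
    # Hand-picked observables for a 2-D state x = (x1, x2):
    # [1, x1, x2, x1*x2, x1**2, x2**2]. The paper would instead learn
    # this lifting with a NODE.
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

def edmd(X, Y):
    # Fit K such that dictionary(y) ~= K @ dictionary(x) for snapshot pairs (x, y),
    # i.e. a finite-dimensional least-squares approximation of the Koopman operator.
    Psi_X = np.stack([dictionary(x) for x in X])   # shape (m, n_obs)
    Psi_Y = np.stack([dictionary(y) for y in Y])   # shape (m, n_obs)
    A, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)
    return A.T                                     # K, shape (n_obs, n_obs)

# Toy discrete-time nonlinear system x_{k+1} = f(x_k) (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
Y = np.stack([np.array([0.9 * x1, 0.5 * x2 + x1**2]) for x1, x2 in X])

K = edmd(X, Y)
print(np.round(np.linalg.eigvals(K), 3))           # approximate Koopman eigenvalues

The quality of the resulting finite-dimensional approximation depends entirely on how well the dictionary spans a Koopman-invariant subspace, which is what motivates learning the dictionary rather than fixing it by hand.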
Related papers
- On the relationship between Koopman operator approximations and neural ordinary differential equations for data-driven time-evolution predictions [0.0]
We show that extended dynamic mode decomposition with dictionary learning (EDMD-DL) is equivalent to a neural network representation of the nonlinear discrete-time flow map on the state space.
We implement several variations of neural ordinary differential equations (ODEs) and EDMD-DL, developed by combining different aspects of their respective model structures and training procedures.
We evaluate these methods using numerical experiments on chaotic dynamics in the Lorenz system and a nine-mode model of turbulent shear flow.
arXiv Detail & Related papers (2024-11-20T00:18:46Z) - Accelerating Fractional PINNs using Operational Matrices of Derivative [0.24578723416255746]
This paper presents a novel operational matrix method to accelerate the training of fractional Physics-Informed Neural Networks (fPINNs)
Our approach involves a non-uniform discretization of the fractional Caputo operator, facilitating swift computation of fractional derivatives within Caputo-type fractional differential problems with $0 < \alpha < 1$.
The effectiveness of our proposed method is validated across diverse differential equations, including Delay Differential Equations (DDEs) and Systems of Differential Algebraic Equations (DAEs).
arXiv Detail & Related papers (2024-01-25T11:00:19Z) - Neural Ordinary Differential Equations for Nonlinear System
Identification [0.9864260997723973]
We present a study comparing the performance of NODEs against neural state-space models and classical linear system identification methods.
Experiments show that NODEs can consistently improve the prediction accuracy by an order of magnitude compared to benchmark methods.
arXiv Detail & Related papers (2022-02-28T22:25:53Z) - Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
arXiv Detail & Related papers (2021-08-19T03:56:49Z) - Incorporating NODE with Pre-trained Neural Differential Operator for
Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions, and it learns the mapping from the trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can approximate the ground-truth derivatives well by properly tuning the complexity of the library.
arXiv Detail & Related papers (2021-06-08T08:04:47Z) - Discovery of Nonlinear Dynamical Systems using a Runge-Kutta Inspired
Dictionary-based Sparse Regression Approach [9.36739413306697]
We blend machine learning and dictionary-based learning with numerical analysis tools to discover governing differential equations.
We obtain interpretable and parsimonious models that tend to generalize better beyond the sampling regime.
We discuss its extension to governing equations containing rational nonlinearities that typically appear in biological networks.
arXiv Detail & Related papers (2021-05-11T08:46:51Z) - Estimating Koopman operators for nonlinear dynamical systems: a
nonparametric approach [77.77696851397539]
The Koopman operator is a mathematical tool that allows for a linear description of non-linear systems.
In this paper, we capture their core essence as a dual version of the same framework, incorporating them into the kernel framework.
We establish a strong link between kernel methods and Koopman operators, leading to the estimation of the latter through kernel functions.
arXiv Detail & Related papers (2021-03-25T11:08:26Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical gains of stochastic regularization, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
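Several entries above, like the abstract itself, view a NODE as a learned continuous-time vector field whose integral over one sampling interval acts as a discrete-time flow map. Below is a minimal sketch of that view in Python using PyTorch and the torchdiffeq package; the class and function names are illustrative assumptions and are not taken from any of the papers listed.

import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class VectorField(nn.Module):
    # A small MLP parameterizing dx/dt = f(x); the signature f(t, x) is
    # what torchdiffeq's odeint expects.
    def __init__(self, dim: int, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, width), nn.Tanh(),
                                 nn.Linear(width, dim))

    def forward(self, t, x):
        return self.net(x)

def flow_map(f: nn.Module, x: torch.Tensor, dt: float = 0.1) -> torch.Tensor:
    # Integrate dx/dt = f(t, x) from t = 0 to t = dt; the result is the
    # one-step (discrete-time) flow map taking x_k to x_{k+1}.
    t = torch.tensor([0.0, dt])
    return odeint(f, x, t)[-1]

f = VectorField(dim=2)
x0 = torch.randn(16, 2)      # a batch of states
x1 = flow_map(f, x0)         # one-step predictions; f is trained so these match data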