Realization Theory Of Recurrent Neural ODEs Using Polynomial System
Embeddings
- URL: http://arxiv.org/abs/2205.11989v1
- Date: Tue, 24 May 2022 11:36:18 GMT
- Title: Realization Theory Of Recurrent Neural ODEs Using Polynomial System
Embeddings
- Authors: Martin Gonzalez, Thibault Defourneau, Hatem Hajri, Mihaly Petreczky
- Abstract summary: We show that neural ODE analogs of recurrent (ODE-RNN) and Long Short-Term Memory (ODE-LSTM) networks can be algorithmically embedded into the class of polynomial systems.
This embedding preserves input-output behavior and can be extended to other neural DE architectures.
We then use realization theory of polynomial systems to provide necessary conditions for an input-output map to be realizable by an ODE-LSTM and sufficient conditions for minimality of such systems.
- Score: 0.802904964931021
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we show that neural ODE analogs of recurrent (ODE-RNN) and Long
Short-Term Memory (ODE-LSTM) networks can be algorithmically embedded into
the class of polynomial systems. This embedding preserves input-output behavior
and can suitably be extended to other neural DE architectures. We then use
realization theory of polynomial systems to provide necessary conditions for an
input-output map to be realizable by an ODE-LSTM and sufficient conditions for
minimality of such systems. These results represent the first steps towards
realization theory of recurrent neural ODE architectures, which is expected to
be useful for model reduction and learning algorithm analysis of recurrent
neural ODEs.
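The embedding step can be illustrated concretely. Below is a minimal numpy sketch (not the paper's construction) of the standard polynomialization trick for a tanh-activated neural ODE: augmenting the state with z = tanh(x) yields a system whose right-hand side is polynomial in the augmented state while the input-output behavior is preserved. All dimensions, weights, and the Euler integrator are illustrative assumptions.

```python
# A minimal sketch of polynomialization: a tanh nonlinearity is eliminated
# by augmenting the state with z = tanh(x), since
# dz/dt = (1 - z**2) * dx/dt is polynomial in (x, z).
# Dimensions, weights, and the Euler integrator are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 3
W = rng.standard_normal((n, n)) * 0.5
b = rng.standard_normal(n)

def f_original(x, u):
    # Original neural ODE vector field: dx/dt = W tanh(x) + b u
    return W @ np.tanh(x) + b * u

def f_polynomial(state, u):
    # Embedded system over (x, z) with z = tanh(x); the right-hand side
    # is polynomial: dx/dt = W z + b u, dz/dt = (1 - z^2) * (W z + b u).
    x, z = state[:n], state[n:]
    dx = W @ z + b * u
    return np.concatenate([dx, (1.0 - z**2) * dx])

# Forward-Euler simulation of both systems from matching initial states.
dt, T = 1e-3, 2.0
x = rng.standard_normal(n) * 0.1
s = np.concatenate([x, np.tanh(x)])
for k in range(int(T / dt)):
    u = np.sin(0.5 * k * dt)          # an arbitrary test input
    x = x + dt * f_original(x, u)
    s = s + dt * f_polynomial(s, u)

print(np.max(np.abs(x - s[:n])))      # agreement up to integration error
```

The same device applies gate-by-gate inside an LSTM-style cell, since the sigmoid also satisfies a polynomial differential relation, σ' = σ(1 - σ).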
Related papers
- Embedding Capabilities of Neural ODEs [0.0]
We study input-output relations of neural ODEs using dynamical systems theory.
We prove several results about the exact embedding of maps in different neural ODE architectures in low and high dimension.
arXiv Detail & Related papers (2023-08-02T15:16:34Z) - On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with formally defined semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z) - A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
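For context, the POD compression step mentioned above reduces a space-time snapshot matrix to a few modes plus temporal coefficients. A minimal sketch with synthetic data follows; the snapshot construction and mode count r are illustrative assumptions, not the paper's setup.

```python
# A minimal sketch of POD: compress a snapshot matrix (space x time)
# to a few modes plus temporal coefficients via a thin SVD.
import numpy as np

n_space, n_time, r = 2000, 400, 5
t = np.linspace(0.0, 1.0, n_time)
x = np.linspace(0.0, 1.0, n_space)[:, None]
snapshots = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * t) \
          + 0.3 * np.sin(6 * np.pi * x) * np.sin(4 * np.pi * t)

# Columns of U are spatial POD modes; S*Vt gives the temporal
# coefficients that a deep network would then learn to predict.
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
modes = U[:, :r]                       # (n_space, r)
coeffs = S[:r, None] * Vt[:r]          # (r, n_time)

reconstruction = modes @ coeffs
print(np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots))
```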
arXiv Detail & Related papers (2023-01-24T08:39:20Z) - Neural Generalized Ordinary Differential Equations with Layer-varying
Parameters [1.3691539554014036]
We show that the layer-varying Neural-GODE is more flexible and general than the standard Neural-ODE.
The Neural-GODE enjoys the computational and memory benefits while performing comparably to ResNets in prediction accuracy.
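A minimal sketch of what "layer-varying" means in practice: the weights of the vector field depend on the depth variable t, and the constant-parameter Neural-ODE is recovered as a special case. The linear-in-t parameterization below is an illustrative assumption, not the paper's exact scheme.

```python
# A minimal sketch of a layer-varying vector field: the weights are a
# function of depth/time t rather than constants.
import numpy as np

rng = np.random.default_rng(1)
d = 4
W0, W1 = rng.standard_normal((2, d, d)) * 0.3

def f(t, h):
    # Time-dependent parameters W(t) = W0 + t * W1 generalize the
    # constant-parameter Neural-ODE (recovered when W1 = 0).
    return np.tanh((W0 + t * W1) @ h)

h, dt = rng.standard_normal(d), 0.01
for k in range(100):                   # integrate depth t over [0, 1]
    h = h + dt * f(k * dt, h)
print(h)
```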
arXiv Detail & Related papers (2022-09-21T20:02:28Z) - Accelerating Neural ODEs Using Model Order Reduction [0.0]
We show that mathematical model order reduction methods can be used for compressing and accelerating Neural ODEs.
We implement our novel compression method by developing Neural ODEs that integrate the necessary subspace-projection operations as layers of the neural network.
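The compression idea can be sketched as follows: choose a low-dimensional basis V and integrate the projected dynamics V^T f(V z) instead of the full system. The random orthonormal basis below stands in for one found by a model-order-reduction method, and all sizes are illustrative assumptions.

```python
# A minimal sketch of subspace projection as a layer: the full state x is
# replaced by reduced coordinates z with x ~ V z, and the reduced vector
# field is V^T f(V z).
import numpy as np

rng = np.random.default_rng(2)
n, r = 64, 8
W = rng.standard_normal((n, n)) / np.sqrt(n)
V, _ = np.linalg.qr(rng.standard_normal((n, r)))   # orthonormal basis

def f(x):
    return np.tanh(W @ x)              # full neural ODE right-hand side

def f_reduced(z):
    return V.T @ f(V @ z)              # projected dynamics, r-dimensional

x0 = rng.standard_normal(n)
z = V.T @ x0
for _ in range(200):                   # Euler in the cheap reduced space
    z = z + 0.01 * f_reduced(z)
print((V @ z)[:5])                     # lift back to the full space
```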
arXiv Detail & Related papers (2021-05-28T19:27:09Z) - Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how the variability in the space of solvers can improve neural ODE performance.
We show that the right choice of solver parameterization can significantly affect neural ODE models in terms of robustness to adversarial attacks.
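One concrete instance of a parameterized solver space is the one-parameter family of explicit second-order Runge-Kutta methods sketched below (alpha = 0.5 is the midpoint rule, alpha = 1.0 is Heun's method); the paper's actual parameterization may differ.

```python
# A minimal sketch of a solver family: one-parameter explicit
# second-order Runge-Kutta methods, searched over by varying alpha.
import numpy as np

def rk2_step(f, t, y, h, alpha):
    k1 = f(t, y)
    k2 = f(t + alpha * h, y + alpha * h * k1)
    return y + h * ((1 - 1 / (2 * alpha)) * k1 + (1 / (2 * alpha)) * k2)

f = lambda t, y: -y                    # a toy linear ODE, y' = -y
for alpha in (0.5, 1.0):
    y, h = np.array([1.0]), 0.1
    for k in range(10):                # integrate to t = 1
        y = rk2_step(f, k * h, y, h, alpha)
    print(alpha, y, np.exp(-1.0))      # both land close to exp(-1)
```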
arXiv Detail & Related papers (2021-03-15T17:26:34Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
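The measure-based representation can be sketched for a single wide hidden layer: the network output is an expectation over a distribution on neuron parameters, approximated here by Monte Carlo sampling. The Gaussian parameter measure and ReLU feature map are illustrative assumptions, not the paper's construction.

```python
# A minimal sketch of the mean-field view of one hidden layer: the output
# f(x) = E_{(a, w) ~ rho}[a * relu(w . x)] is an expectation over a
# parameter measure, approximated by averaging many sampled neurons.
import numpy as np

rng = np.random.default_rng(7)
m, d = 100000, 3                       # many neurons ~ continuous limit
W = rng.standard_normal((m, d))        # samples from the parameter measure
a = rng.standard_normal(m)

def f(x):
    # Monte Carlo estimate of the mean-field network output at x
    return np.mean(a * np.maximum(W @ x, 0.0))

print(f(np.ones(d)))
```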
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized Structural Equation Models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
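The min-max formulation can be illustrated on a toy instance: enforce the moment condition E[(Y - f(X)) g(X)] = 0 through a regularized game, with simple gradient descent-ascent. Linear players stand in for the neural networks, and the data-generating process and step sizes are illustrative assumptions.

```python
# A minimal sketch of the min-max estimation idea:
#   min_theta max_w  E[(Y - f_theta(X)) g_w(X)] - 0.5 * E[g_w(X)^2]
# solved by gradient descent-ascent with linear players.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
X = rng.standard_normal(n)
Y = 2.0 * X + rng.standard_normal(n)   # true structural slope: 2.0

theta, w = 0.0, 0.0                    # f(x) = theta * x, g(x) = w * x
lr = 0.05
for _ in range(2000):
    residual = Y - theta * X
    grad_theta = np.mean(-X * w * X)                     # descend in theta
    grad_w = np.mean(residual * X) - w * np.mean(X * X)  # ascend in w
    theta -= lr * grad_theta
    w += lr * grad_w
print(theta)                           # converges near the true slope 2.0
```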
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
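The stabilizing mechanism can be sketched directly: if the weight matrix evolves by W' = W A with A skew-symmetric, W(t) stays on the orthogonal group O(d). The fixed generator A and the Euler step with QR re-projection below are illustrative assumptions, not the paper's integrator.

```python
# A minimal sketch of the nested-flow idea: the weights W(t) follow a
# matrix flow on (near) O(d), while the main flow on h is driven by W(t).
import numpy as np

rng = np.random.default_rng(4)
d = 4
B = rng.standard_normal((d, d))
A = B - B.T                            # skew-symmetric generator
W = np.eye(d)                          # start on the orthogonal group
h = rng.standard_normal(d)

dt = 0.01
for _ in range(100):
    W = W + dt * (W @ A)               # matrix flow step
    Q, R = np.linalg.qr(W)             # re-project to keep orthogonality
    W = Q * np.sign(np.diag(R))
    h = h + dt * np.tanh(W @ h)        # main flow driven by W(t)

print(np.max(np.abs(W.T @ W - np.eye(d))))   # ~ 0: W stays orthogonal
```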
arXiv Detail & Related papers (2020-06-19T22:05:19Z) - Go with the Flow: Adaptive Control for Neural ODEs [10.265713480189484]
We describe a new module called neurally controlled ODE (N-CODE) designed to improve the expressivity of NODEs.
N-CODE modules are dynamic variables governed by a trainable map from initial or current activation state.
A single module is sufficient for learning a distribution on non-autonomous flows that adaptively drive neural representations.
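A minimal sketch of the control pattern described above: control variables evolve jointly with the activations, driven by a map from the current state. In N-CODE that map is trainable; the fixed random weights and Euler scheme below are illustrative assumptions.

```python
# A minimal sketch of a neurally controlled ODE: control variables theta
# and activations h evolve as one coupled system.
import numpy as np

rng = np.random.default_rng(5)
d, m = 4, 2
Wh = rng.standard_normal((d, d + m)) * 0.3   # drives the activations
Wc = rng.standard_normal((m, d)) * 0.3       # control map from the state

h, theta = rng.standard_normal(d), np.zeros(m)
dt = 0.01
for _ in range(100):
    dh = np.tanh(Wh @ np.concatenate([h, theta]))
    dtheta = np.tanh(Wc @ h)           # control adapts to the activations
    h, theta = h + dt * dh, theta + dt * dtheta
print(h, theta)
```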
arXiv Detail & Related papers (2020-06-16T22:21:15Z) - Neural Ordinary Differential Equation based Recurrent Neural Network
Model [0.7233897166339269]
Neural ordinary differential equations are a promising new member of the neural network family.
This paper explores the strength of the ordinary differential equation (ODE) with a new extension.
Two new ODE-based RNN models (GRU-ODE and LSTM-ODE) can compute the hidden state and cell state at any point of time using an ODE solver.
Experiments show that these new ODE based RNN models require less training time than Latent ODEs and conventional Neural ODEs.
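The pattern these models share can be sketched as follows: between irregularly spaced observations the hidden state follows an ODE, so it is defined at any query time, and each observation triggers a gated discrete update. The tiny GRU-style cell, weights, and Euler integrator below are illustrative assumptions, not the paper's exact cells.

```python
# A minimal sketch of the ODE-based RNN pattern: continuous hidden-state
# dynamics between observations, gated updates at observation times.
import numpy as np

rng = np.random.default_rng(6)
d = 4
Wf = rng.standard_normal((d, d)) * 0.3        # continuous dynamics
Wz, Wr = rng.standard_normal((2, d, d + 1)) * 0.3

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def evolve(h, t0, t1, steps=20):
    # Integrate dh/dt = tanh(Wf h) from t0 to t1 with Euler steps,
    # giving a hidden state at any query time between observations.
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * np.tanh(Wf @ h)
    return h

h, t_prev = np.zeros(d), 0.0
for t_obs, x_obs in [(0.3, 1.0), (0.9, -0.5), (2.0, 0.2)]:  # irregular times
    h = evolve(h, t_prev, t_obs)
    hx = np.concatenate([h, [x_obs]])
    z, c = sigmoid(Wz @ hx), np.tanh(Wr @ hx)
    h = (1 - z) * h + z * c            # gated update at the observation
    t_prev = t_obs
print(h)
```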
arXiv Detail & Related papers (2020-05-20T01:02:29Z)