dynoNet: a neural network architecture for learning dynamical systems
- URL: http://arxiv.org/abs/2006.02250v2
- Date: Tue, 20 Apr 2021 07:58:57 GMT
- Title: dynoNet: a neural network architecture for learning dynamical systems
- Authors: Marco Forgione, Dario Piga
- Abstract summary: This paper introduces a network architecture, called dynoNet, utilizing linear dynamical operators as elementary building blocks.
The back-propagation behavior of the linear dynamical operator with respect to both its parameters and its input sequence is defined.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a network architecture, called dynoNet, utilizing
linear dynamical operators as elementary building blocks. Owing to the
dynamical nature of these blocks, dynoNet networks are tailored for sequence
modeling and system identification purposes. The back-propagation behavior of
the linear dynamical operator with respect to both its parameters and its input
sequence is defined. This enables end-to-end training of structured networks
containing linear dynamical operators and other differentiable units,
exploiting existing deep learning software. Examples show the effectiveness of
the proposed approach on well-known system identification benchmarks.
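To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of a SISO linear dynamical operator realized as a learnable difference equation. It is not the authors' implementation: plain autograd through the unrolled recursion stands in for the closed-form back-propagation rules the paper derives, and all names are illustrative.

```python
import torch
import torch.nn as nn

class LinearDynamicalOperator(nn.Module):
    """Hypothetical sketch of a SISO transfer-function block G(q) = B(q)/A(q),
    realized as the difference equation
        y[t] = b[0]*u[t] + ... + b[nb-1]*u[t-nb+1]
             - a[0]*y[t-1] - ... - a[na-1]*y[t-na]
    with learnable coefficients b, a (leading denominator coefficient fixed to 1).
    """
    def __init__(self, n_b: int = 3, n_a: int = 2):
        super().__init__()
        self.b = nn.Parameter(0.01 * torch.randn(n_b))  # numerator coefficients
        self.a = nn.Parameter(0.01 * torch.randn(n_a))  # denominator coefficients

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: 1-D input sequence of length T; returns the filtered sequence.
        y = []
        for t in range(u.shape[0]):
            acc = sum(self.b[i] * u[t - i] for i in range(len(self.b)) if t - i >= 0)
            acc = acc - sum(self.a[j] * y[t - 1 - j] for j in range(len(self.a)) if t - 1 - j >= 0)
            y.append(acc)
        # Autograd differentiates through the unrolled recursion; the paper
        # instead derives closed-form forward/backward rules for efficiency.
        return torch.stack(y)

# A dynoNet-style composition: dynamics block followed by a static nonlinearity,
# trainable end to end with any standard PyTorch optimizer.
G = LinearDynamicalOperator(n_b=3, n_a=2)
F = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
u = torch.randn(200)
y_hat = F(G(u).unsqueeze(-1)).squeeze(-1)
```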
Related papers
- OS-net: Orbitally Stable Neural Networks
We introduce OS-net, a new family of neural network architectures specifically designed for periodic dynamical data.
We derive conditions on the network weights to ensure stability of the resulting dynamics.
We demonstrate the efficacy of our approach by applying OS-net to discover the dynamics underlying the Rössler and Sprott's systems.
arXiv Detail & Related papers (2023-09-26T10:40:04Z)
- How neural networks learn to classify chaotic time series
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Learning Linear Embeddings for Non-Linear Network Dynamics with Koopman Message Passing
We present a novel approach based on Koopman operator theory and message passing networks.
We find a linear representation for the dynamical system which is globally valid at any time step.
The linearisations found by our method produce predictions on a suite of network dynamics problems that are several orders of magnitude better than current state-of-the-art techniques.
arXiv Detail & Related papers (2023-05-15T23:00:25Z)
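As a rough sketch of the Koopman-lifting idea this entry builds on (assumed names; the paper's message-passing encoder is replaced here by a plain MLP): learn a lifting phi and a linear operator K so that one step of the nonlinear dynamics becomes linear in the lifted coordinates.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: learn a lifting phi and a linear Koopman matrix K such
# that phi(x_{t+1}) ≈ K @ phi(x_t).
state_dim, lift_dim = 3, 32
phi = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, lift_dim))
K = nn.Linear(lift_dim, lift_dim, bias=False)  # linear dynamics in lifted space

def koopman_loss(x_t: torch.Tensor, x_next: torch.Tensor) -> torch.Tensor:
    # One-step prediction error measured in the lifted coordinates.
    # (A practical loss adds a reconstruction term so phi cannot collapse to zero.)
    return ((K(phi(x_t)) - phi(x_next)) ** 2).mean()

x_t, x_next = torch.randn(128, state_dim), torch.randn(128, state_dim)
loss = koopman_loss(x_t, x_next)
loss.backward()  # trains phi and K jointly
```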
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
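For context, a common SIREN-style sinusoidal layer is sketched below (hypothetical names; the paper's simplified parametrization may differ): the frequency scale omega_0 acts as the knob on the kernel bandwidth the summary mentions.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Common sinusoidal-network layer: x -> sin(omega_0 * (W x + b)).
    Larger omega_0 widens the effective (NTK) bandwidth of the network;
    the exact parametrization in the cited paper may differ."""
    def __init__(self, in_dim: int, out_dim: int, omega_0: float = 30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.omega_0 = omega_0
        # Uniform init keeping pre-activations well-scaled (SIREN-style heuristic).
        bound = math.sqrt(6.0 / in_dim) / omega_0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.omega_0 * self.linear(x))

net = nn.Sequential(SineLayer(1, 64), SineLayer(64, 64), nn.Linear(64, 1))
y = net(torch.linspace(-1, 1, 100).unsqueeze(-1))  # smooth, band-limited fit
```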
- Vanilla Feedforward Neural Networks as a Discretization of Dynamical Systems
In this paper, we go back to the classical network structure and prove that vanilla feedforward networks can also be viewed as a numerical discretization of dynamical systems.
Our results could provide a new perspective for understanding the approximation properties of feedforward neural networks.
arXiv Detail & Related papers (2022-09-22T10:32:08Z)
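The standard reasoning this result extends, in a short sketch (illustrative, not the paper's construction): a residual update x + h*f(x) is one forward-Euler step of the ODE x' = f(x), so a stack of such layers integrates a learned vector field.

```python
import torch
import torch.nn as nn

class EulerBlock(nn.Module):
    """One forward-Euler step x_{k+1} = x_k + h * f(x_k) of the ODE x' = f(x).
    Illustrative sketch of the network-as-discretization reading; the cited
    paper's point is that even plain, non-residual layers admit such a view."""
    def __init__(self, dim: int, h: float = 0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
        self.h = h

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.h * self.f(x)

net = nn.Sequential(*[EulerBlock(8) for _ in range(10)])  # 10 Euler steps
x_T = net(torch.randn(4, 8))  # approximate state after integrating for T = 10 * h
```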
- Constructing Neural Network-Based Models for Simulating Dynamical Systems
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Supervised DKRC with Images for Offline System Identification
Modern dynamical systems are becoming increasingly non-linear and complex.
There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
Our approach learns the required Koopman basis functions in a supervised manner.
arXiv Detail & Related papers (2021-09-06T04:39:06Z)
- Deep learning with transfer functions: new applications in system identification
This paper presents a linear dynamical operator endowed with a well-defined and efficient back-propagation behavior for automatic derivatives computation.
The operator enables end-to-end training of structured networks containing linear transfer functions and other differentiable units.
arXiv Detail & Related papers (2021-04-20T08:58:55Z)
- Neural networks adapting to datasets: learning network size and topology
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
- Input-to-State Representation in linear reservoirs dynamics
Reservoir computing is a popular approach to designing recurrent neural networks.
However, the working principle of these networks is not fully understood.
We propose a novel analysis of the dynamics of such networks.
arXiv Detail & Related papers (2020-03-24T00:14:25Z)
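For context, a minimal linear reservoir (echo state network) in NumPy, with hypothetical names: a fixed random state update plus a least-squares readout. The paper's analysis concerns how such a state encodes the input history.

```python
import numpy as np

# Minimal linear reservoir (echo state network) sketch; all names hypothetical.
rng = np.random.default_rng(0)
n_res, n_in = 100, 1
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stability
W_in = rng.normal(size=(n_res, n_in))

def run_reservoir(u: np.ndarray) -> np.ndarray:
    # Linear state update x[t+1] = W x[t] + W_in u[t]; the state is a
    # fading-memory encoding of the input history.
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = W @ x + W_in @ np.atleast_1d(u_t)
        states.append(x)
    return np.array(states)

u = rng.normal(size=200)
X = run_reservoir(u)
# Readout trained by least squares, e.g. to recover u[t-1] from the state:
w_out, *_ = np.linalg.lstsq(X[1:], u[:-1], rcond=None)
```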