Dissipative Deep Neural Dynamical Systems
- URL: http://arxiv.org/abs/2011.13492v3
- Date: Wed, 8 Jun 2022 16:03:21 GMT
- Title: Dissipative Deep Neural Dynamical Systems
- Authors: Jan Drgona, Soumya Vasisht, Aaron Tuor, Draguna Vrabie
- Abstract summary: We leverage the representation of neural networks as pointwise affine maps to expose their local linear operators.
This allows us to "crack open the black box" of the neural dynamical system's behavior.
We analyze the variance in dynamical behavior and eigenvalue spectra of these local linear operators with varying weight factorizations, activation functions, bias terms, and depths.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we provide sufficient conditions for dissipativity and local
asymptotic stability of discrete-time dynamical systems parametrized by deep
neural networks. We leverage the representation of neural networks as pointwise
affine maps, thus exposing their local linear operators and making them
accessible to classical system analytic and design methods. This allows us to
"crack open the black box" of the neural dynamical system's behavior by
evaluating its dissipativity and estimating its stationary points and
state-space partitioning. We relate the norms of these local linear operators
to the energy stored in the dissipative system with supply rates represented by
their aggregate bias terms. Empirically, we analyze the variance in dynamical
behavior and eigenvalue spectra of these local linear operators with varying
weight factorizations, activation functions, bias terms, and depths.
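The pointwise-affine view described above can be illustrated with a minimal sketch. For a ReLU network, the activation at a given input acts as a fixed diagonal 0/1 mask, so the network collapses locally to an affine map f(x) = A(x)x + c(x), where c(x) is the aggregate bias term and the eigenvalues of A(x) characterize the local dynamics. The two-layer network, random weights, and dimensions below are illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2
n = 3
W1 = rng.standard_normal((n, n)) * 0.5
b1 = rng.standard_normal(n) * 0.1
W2 = rng.standard_normal((n, n)) * 0.5
b2 = rng.standard_normal(n) * 0.1

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_affine(x):
    """Pointwise affine form f(x) = A @ x + c at the point x.
    ReLU acts as a diagonal 0/1 mask D on the pre-activation, so
    A = W2 @ D @ W1 and the aggregate bias is c = W2 @ D @ b1 + b2."""
    pre = W1 @ x + b1
    D = np.diag((pre > 0).astype(float))
    A = W2 @ D @ W1
    c = W2 @ D @ b1 + b2
    return A, c

x = rng.standard_normal(n)
A, c = local_affine(x)
# The affine form reproduces the network exactly at this point.
assert np.allclose(f(x), A @ x + c)

# Eigenvalue spectrum of the local linear operator; a spectral
# radius below 1 indicates local contraction at this point.
eigs = np.linalg.eigvals(A)
print("local spectral radius:", np.max(np.abs(eigs)))
```

Sweeping x over a grid and repeating this computation recovers the state-space partitioning and per-region spectra that the paper analyzes empirically.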
Related papers
- How neural networks learn to classify chaotic time series (2023-06-04)
  We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
  We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
- Constraining Chaos: Enforcing dynamical invariants in the training of recurrent neural networks (2023-04-24)
  We introduce a novel training method for machine-learning-based forecasting of chaotic dynamical systems.
  The training enforces dynamical invariants, such as the Lyapunov exponent spectrum and fractal dimension, in the systems of interest, enabling longer and more stable forecasts when operating with limited data.
- Initial Correlations in Open Quantum Systems: Constructing Linear Dynamical Maps and Master Equations (2022-10-24)
  We show that, for any predetermined initial correlations, one can introduce a linear dynamical map on the space of operators of the open system.
  We demonstrate that this construction leads to a linear, time-local quantum master equation with generalized Lindblad structure.
- Structure-Preserving Learning Using Gaussian Processes and Variational Integrators (2021-12-10)
  We propose combining a variational integrator for the nominal dynamics of a mechanical system with learning of the residual dynamics by Gaussian process regression.
  We extend our approach to systems with known kinematic constraints and provide formal bounds on the prediction uncertainty.
- Locally-symplectic neural networks for learning volume-preserving dynamics (2021-09-19)
  We propose locally-symplectic neural networks (LocSympNets) for learning volume-preserving dynamics.
  The construction of LocSympNets stems from the theorem on the local Hamiltonian description of the vector field of a volume-preserving dynamical system.
- Supervised DKRC with Images for Offline System Identification (2021-09-06)
  Modern dynamical systems are becoming increasingly nonlinear and complex.
  There is a need for a framework to model these systems in a compact and comprehensive representation for prediction and control.
  Our approach learns these basis functions using a supervised learning approach.
- Learning strange attractors with reservoir systems (2021-08-11)
  This paper shows that the celebrated Embedding Theorem of Takens is a particular case of a much more general statement.
  It provides additional tools for the representation, learning, and analysis of chaotic attractors.
- Reconstructing a dynamical system and forecasting time series by self-consistent deep learning (2021-08-04)
  We introduce a self-consistent deep-learning framework for noisy deterministic time series.
  It provides unsupervised filtering, state-space reconstruction, identification of the underlying differential equations, and forecasting.
- Neural Dynamic Mode Decomposition for End-to-End Modeling of Nonlinear Dynamics (2020-12-11)
  We propose a neural dynamic mode decomposition for estimating a lift function based on neural networks.
  With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition.
  Our experiments demonstrate the effectiveness of the proposed method in terms of eigenvalue estimation and forecast performance.
- Liquid Time-constant Networks (2020-06-08)
  We introduce a new class of time-continuous recurrent neural network models.
  Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
  These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
- Input-to-State Representation in linear reservoirs dynamics (2020-03-24)
  Reservoir computing is a popular approach to designing recurrent neural networks.
  The working principle of these networks is not fully understood.
  A novel analysis of the dynamics of such networks is proposed.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.