Linear RNNs Provably Learn Linear Dynamic Systems
- URL: http://arxiv.org/abs/2211.10582v2
- Date: Mon, 23 Oct 2023 02:46:13 GMT
- Title: Linear RNNs Provably Learn Linear Dynamic Systems
- Authors: Lifu Wang, Tianyu Wang, Shengwei Yi, Bo Shen, Bo Hu, Xing Cao
- Abstract summary: We prove the first theoretical guarantee that linear RNNs can learn any stable linear dynamic system.
Our results provide the first theoretical guarantee for learning a linear RNN and demonstrate how the recurrent structure can help learn a dynamic system.
- Score: 14.919504424152292
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We study the learning ability of linear recurrent neural networks with
Gradient Descent. We prove the first theoretical guarantee that linear RNNs can
learn any stable linear dynamic system using a large class of loss functions.
For an arbitrary stable linear system with a parameter $\rho_C$ related to the
transition matrix $C$, we show that, despite the non-convexity of the parameter
optimization loss, if the width of the RNN is large enough (and the required
width in hidden layers does not depend on the length of the input sequence), a
linear RNN can provably learn any stable linear dynamic system with sample and
time complexity polynomial in $\frac{1}{1-\rho_C}$. Our results provide the
first theoretical guarantee for learning a linear RNN and demonstrate how the
recurrent structure can help to learn a dynamic system.
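As a concrete illustration of this setting (not the authors' code), the sketch below fits a wide linear RNN by gradient descent to input-output data generated by a stable linear dynamic system whose transition matrix has spectral radius below one; all dimensions, the teacher system, and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): gradient descent on a wide
# linear RNN fitted to data generated by a stable linear dynamic system.
# All sizes, the teacher system and the hyperparameters are illustrative choices.
import torch

torch.manual_seed(0)
d_in, d_state, d_out, width, T, batch = 2, 4, 1, 64, 50, 32

# Teacher: stable linear system  s_t = C s_{t-1} + B x_t,  y_t = D s_t,
# with spectral radius rho_C < 1 enforced by rescaling C.
C = torch.randn(d_state, d_state)
C = 0.9 * C / torch.linalg.eigvals(C).abs().max()
B = torch.randn(d_state, d_in)
D = torch.randn(d_out, d_state)

def teacher(x):                         # x: (batch, T, d_in)
    s = torch.zeros(x.shape[0], d_state)
    ys = []
    for t in range(x.shape[1]):
        s = s @ C.T + x[:, t] @ B.T
        ys.append(s @ D.T)
    return torch.stack(ys, dim=1)       # (batch, T, d_out)

# Student: linear RNN  h_t = W h_{t-1} + U x_t,  yhat_t = V h_t  (no nonlinearity).
W = torch.nn.Parameter(0.01 * torch.randn(width, width))
U = torch.nn.Parameter(0.10 * torch.randn(width, d_in))
V = torch.nn.Parameter(0.10 * torch.randn(d_out, width))
opt = torch.optim.SGD([W, U, V], lr=1e-2)

for step in range(2001):
    x = torch.randn(batch, T, d_in)
    y = teacher(x)
    h = torch.zeros(batch, width)
    preds = []
    for t in range(T):
        h = h @ W.T + x[:, t] @ U.T
        preds.append(h @ V.T)
    loss = ((torch.stack(preds, dim=1) - y) ** 2).mean()   # quadratic loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(step, loss.item())
```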
Related papers
- Neural Contraction Metrics with Formal Guarantees for Discrete-Time Nonlinear Dynamical Systems [17.905596843865705]
Contraction metrics provide a powerful framework for analyzing stability, robustness, and convergence of various dynamical systems. However, identifying these metrics for complex nonlinear systems remains an open challenge due to the lack of effective tools. This paper develops scalable, verifiable contraction metrics for discrete-time nonlinear systems.
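For reference, the standard discrete-time contraction condition (stated here in its generic textbook form, which may differ in detail from the paper's specific construction) asks for a state-dependent metric that the system's Jacobian shrinks at every step:

```latex
% Discrete-time system x_{k+1} = f(x_k) with Jacobian A(x) = \partial f/\partial x.
% A state-dependent metric M(x) \succ 0 is a contraction metric if, for some
% contraction rate 0 < \alpha < 1,
\[
  A(x)^{\top} \, M\bigl(f(x)\bigr) \, A(x) \;\preceq\; \alpha^{2} M(x)
  \qquad \text{for all } x,
\]
% which implies that the M-weighted distance between nearby trajectories
% decays geometrically, giving stability and robustness certificates.
```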
arXiv Detail & Related papers (2025-04-23T21:27:32Z) - Almost-Linear RNNs Yield Highly Interpretable Symbolic Codes in Dynamical Systems Reconstruction [8.473495734873872]
We introduce Almost-Linear Recurrent Neural Networks (AL-RNNs) which automatically and robustly produce parsimonious PWL representations of Dynamical Systems (DS) from time series data.
AL-RNNs can be efficiently trained with any SOTA algorithm for dynamical systems reconstruction (DSR).
We show that for the Lorenz and Rössler systems, AL-RNNs discover, in a purely data-driven way, the known topologically minimal PWL representations of the corresponding chaotic attractors.
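A minimal sketch of the mechanism described above, assuming the common construction in which only a small subset of the latent units is rectified while the remaining units evolve linearly (the paper's exact parameterization may differ); the function and dimensions below are hypothetical:

```python
# Illustrative almost-linear recurrent update (not the paper's code): only the
# first n_relu of the latent units pass through a ReLU, the rest stay linear,
# so the induced dynamics are piecewise linear with few switching surfaces.
import numpy as np

def al_rnn_step(z, x, A, W, C, n_relu):
    """One update z_t = A z_{t-1} + W phi(z_{t-1}) + C x_t, where phi applies
    ReLU to the first n_relu coordinates of z only."""
    phi = z.copy()
    phi[:n_relu] = np.maximum(phi[:n_relu], 0.0)   # partial nonlinearity
    return A @ z + W @ phi + C @ x

# Example usage with assumed sizes.
n_latent, n_in, n_relu = 8, 2, 2
rng = np.random.default_rng(0)
A = 0.9 * np.eye(n_latent)                          # stable linear part
W = 0.1 * rng.standard_normal((n_latent, n_latent))
C = 0.1 * rng.standard_normal((n_latent, n_in))
z = np.zeros(n_latent)
for t in range(100):
    z = al_rnn_step(z, rng.standard_normal(n_in), A, W, C, n_relu)
print(z)
```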
arXiv Detail & Related papers (2024-10-18T07:44:12Z) - Metric-Entropy Limits on Nonlinear Dynamical System Learning [4.069144210024563]
We show that recurrent neural networks (RNNs) are capable of learning, in a metric-entropy-optimal manner, nonlinear systems that satisfy a Lipschitz property and forget past inputs fast enough.
As the sets of sequence-to-sequence maps we consider are significantly more massive than function classes generally considered in deep neural network approximation theory, a refined metric-entropy characterization is needed.
arXiv Detail & Related papers (2024-07-01T12:57:03Z) - Learning Low Dimensional State Spaces with Overparameterized Recurrent
Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z) - Tractable Dendritic RNNs for Reconstructing Nonlinear Dynamical Systems [7.045072177165241]
We augment a piecewise-linear recurrent neural network (RNN) by a linear spline basis expansion.
We show that this approach retains all the theoretically appealing properties of the simple PLRNN, yet boosts its capacity for approximating arbitrary nonlinear dynamical systems in comparatively low dimensions.
arXiv Detail & Related papers (2022-07-06T09:43:03Z) - Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [172.61581010141978]
Certifiable robustness is a desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios.
We propose a novel solution to strategically manipulate neurons, by "grafting" appropriate levels of linearity.
arXiv Detail & Related papers (2022-06-15T22:42:29Z) - Exploring Linear Feature Disentanglement For Neural Networks [63.20827189693117]
Non-linear activation functions, e.g., Sigmoid, ReLU, and Tanh, have achieved great success in neural networks (NNs).
Due to the complex non-linear characteristics of samples, the objective of those activation functions is to project samples from their original feature space to a linearly separable feature space.
This phenomenon ignites our interest in exploring whether all features need to be transformed by all non-linear functions in current typical NNs.
arXiv Detail & Related papers (2022-03-22T13:09:17Z) - Metric Entropy Limits on Recurrent Neural Network Learning of Linear
Dynamical Systems [0.0]
We show that RNNs can optimally learn - or identify in system-theory parlance - stable LTI systems.
For LTI systems whose input-output relation is characterized through a difference equation, this means that RNNs can learn the difference equation from input-output traces in a metric-entropy optimal manner.
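Written out, the difference-equation form referred to here is the standard one (the orders p and q are properties of the given system):

```latex
% Causal LTI system relating input x_t and output y_t through a difference equation:
\[
  y_t \;=\; \sum_{i=1}^{p} a_i \, y_{t-i} \;+\; \sum_{j=0}^{q} b_j \, x_{t-j},
\]
% so learning the system amounts to identifying the coefficients a_i, b_j
% from observed input-output traces.
```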
arXiv Detail & Related papers (2021-05-06T10:12:30Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
The Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes the Canonical Polyadic (CP) decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
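As a rough illustration of the CP-constrained layer idea (a sketch under assumed notation, not the paper's code), a rank-R unit acting on a matrix-valued, i.e. 2-mode, input never vectorizes it:

```python
# Illustrative rank-R CP-constrained unit (assumed notation, not the paper's code):
# for a 2-mode input X, the pre-activation is  s = sum_{r=1}^{R} a_r^T X b_r,
# so the structural information along both data dimensions is used directly.
import numpy as np

def rank_r_unit(X, A, B, bias):
    """A: (R, I) factor vectors a_r, B: (R, J) factor vectors b_r, X: (I, J)."""
    s = sum(A[r] @ X @ B[r] for r in range(A.shape[0]))
    return 1.0 / (1.0 + np.exp(-(s + bias)))        # sigmoid nonlinearity

# Example with assumed sizes: a 5x7 input array and CP rank R = 3.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 7))
A = rng.standard_normal((3, 5))
B = rng.standard_normal((3, 7))
print(rank_r_unit(X, A, B, bias=0.0))
```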
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - Implicit Bias of Linear RNNs [27.41989861342218]
Linear recurrent neural networks (RNNs) do not perform well on tasks requiring long-term memory.
This paper provides a rigorous explanation of this property in the special case of linear RNNs.
Using recently-developed kernel regime analysis, our main result shows that linear RNNs are functionally equivalent to a certain weighted 1D-convolutional network.
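The stated equivalence can be seen by unrolling a linear RNN (generic notation, not necessarily the paper's):

```latex
% Unrolling a linear RNN  h_t = W h_{t-1} + U x_t,  y_t = c^\top h_t  (with h_0 = 0):
\[
  y_t \;=\; c^{\top} \sum_{k=0}^{t-1} W^{k} U \, x_{t-k}
        \;=\; \sum_{k=0}^{t-1} \kappa_k \, x_{t-k},
  \qquad \kappa_k := c^{\top} W^{k} U,
\]
% i.e. the input-output map is a 1D convolution whose kernel weights \kappa_k
% decay with k whenever the spectral radius of W is below one, which is one way
% to see why plain linear RNNs struggle with long-term memory.
```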
arXiv Detail & Related papers (2021-01-19T19:39:28Z) - LQF: Linear Quadratic Fine-Tuning [114.3840147070712]
We present the first method for linearizing a pre-trained model that achieves comparable performance to non-linear fine-tuning.
LQF consists of simple modifications to the architecture, loss function and optimization typically used for classification.
arXiv Detail & Related papers (2020-12-21T06:40:20Z) - Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)