Learning nonlinear dynamical systems from a single trajectory
- URL: http://arxiv.org/abs/2004.14681v1
- Date: Thu, 30 Apr 2020 10:42:48 GMT
- Title: Learning nonlinear dynamical systems from a single trajectory
- Authors: Dylan J. Foster, Alexander Rakhlin, Tuhin Sarkar
- Abstract summary: We introduce algorithms for learning nonlinear dynamical systems of the form $x_{t+1}=\sigma(\Theta^{\star}x_t)+\varepsilon_t$.
We give an algorithm that recovers the weight matrix $\Theta^{\star}$ from a single trajectory with optimal sample complexity and linear running time.
- Score: 102.60042167341956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce algorithms for learning nonlinear dynamical systems of the form
$x_{t+1}=\sigma(\Theta^{\star}x_t)+\varepsilon_t$, where $\Theta^{\star}$ is a
weight matrix, $\sigma$ is a nonlinear link function, and $\varepsilon_t$ is a
mean-zero noise process. We give an algorithm that recovers the weight matrix
$\Theta^{\star}$ from a single trajectory with optimal sample complexity and
linear running time. The algorithm succeeds under weaker statistical
assumptions than in previous work, and in particular i) does not require a
bound on the spectral norm of the weight matrix $\Theta^{\star}$ (rather, it
depends on a generalization of the spectral radius) and ii) enjoys guarantees
for non-strictly-increasing link functions such as the ReLU. Our analysis has
two key components: i) we give a general recipe whereby global stability for
nonlinear dynamical systems can be used to certify that the state-vector
covariance is well-conditioned, and ii) using these tools, we extend well-known
algorithms for efficiently learning generalized linear models to the dependent
setting.
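As a concrete illustration, here is a minimal Python sketch of the model from the abstract, paired with a GLMtron-style one-pass estimator (one natural reading of "extending well-known GLM learners to the dependent setting"). The step size, stability condition, and estimator details below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Minimal simulation sketch of the model in the abstract,
#   x_{t+1} = sigma(Theta* x_t) + eps_t,  with sigma = ReLU.
# The estimator is a GLMtron-style pass over the trajectory; the paper's
# actual algorithm and tuning may differ.

rng = np.random.default_rng(0)
d, T, eta = 5, 50_000, 0.1

# A stable ground truth (spectral norm < 1) keeps the sketch simple; the
# paper itself only needs a generalized spectral-radius condition.
A = rng.standard_normal((d, d))
Theta_star = 0.9 * A / np.linalg.norm(A, 2)
relu = lambda z: np.maximum(z, 0.0)

# Roll out a single trajectory.
X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = relu(Theta_star @ X[t]) + 0.1 * rng.standard_normal(d)

# GLMtron-style updates: step on the prediction residual without
# differentiating through the (non-smooth) link -- the standard GLM trick.
Theta_hat = np.zeros((d, d))
for t in range(T):
    resid = X[t + 1] - relu(Theta_hat @ X[t])
    Theta_hat += eta * np.outer(resid, X[t])

err = np.linalg.norm(Theta_hat - Theta_star) / np.linalg.norm(Theta_star)
print("relative recovery error:", err)
```

Note the single trajectory: every update uses a dependent sample $(x_t, x_{t+1})$, which is exactly the statistical difficulty the paper's covariance-conditioning analysis addresses.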
Related papers
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Fast and Practical Quantum-Inspired Classical Algorithms for Solving Linear Systems [11.929584800629673]
We propose fast and practical quantum-inspired classical algorithms for solving linear systems.
Our main contribution is the application of the heavy ball momentum method to quantum-inspired classical algorithms for solving linear systems.
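For reference, this is the classical heavy-ball (Polyak momentum) iteration the summary names, sketched here on a dense least-squares system. The quantum-inspired part of the paper replaces dense matrix-vector products with sampling-based access to $A$; that part is not shown, and the conditioning setup is an illustrative assumption.

```python
import numpy as np

# Heavy-ball (Polyak momentum) iteration for a linear system A x = b, run on
# the least-squares objective 0.5 * ||A x - b||^2.

rng = np.random.default_rng(1)
n = 100

# Build A with known extreme singular values so the classical Polyak tuning
# below is exact (an illustrative choice, not from the paper).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.linspace(1.0, 10.0, n)) @ V.T
b = rng.standard_normal(n)

L, mu = 10.0 ** 2, 1.0 ** 2          # extreme eigenvalues of A^T A
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

x_prev = np.zeros(n)
x = np.zeros(n)
for _ in range(300):
    grad = A.T @ (A @ x - b)
    # Gradient step plus momentum from the previous iterate.
    x, x_prev = x - alpha * grad + beta * (x - x_prev), x

print("residual norm:", np.linalg.norm(A @ x - b))
```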
arXiv Detail & Related papers (2023-07-13T08:46:19Z)
- Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
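To make the list-decodable setting concrete: when a majority of samples is corrupted, no single estimate can be reliable, so the learner outputs a short list of candidates, one of which should be close to the true mean. The toy sketch below illustrates only this output format, using plain Lloyd's k-means as a stand-in candidate generator; it is not the paper's algorithm.

```python
import numpy as np

# Toy illustration of list-decodable mean estimation: a 30% minority of
# samples is genuine, the 70% majority is adversarial. We output a LIST of
# candidate means via naive k-means (a stand-in, not the paper's method).

rng = np.random.default_rng(2)
d, n, alpha = 2, 1000, 0.3
true_mean = np.array([5.0, -3.0])

inliers = true_mean + rng.standard_normal((int(alpha * n), d))
# Adversarial majority: two dense fake clusters far from the true mean.
fake1 = np.array([-8.0, 6.0]) + 0.5 * rng.standard_normal((350, d))
fake2 = np.array([0.0, 12.0]) + 0.5 * rng.standard_normal((350, d))
data = np.vstack([inliers, fake1, fake2])

# Lloyd's iterations with k = 4 candidate centers.
k = 4
centers = data[rng.choice(len(data), k, replace=False)]
for _ in range(50):
    labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for j in range(k):
        if np.any(labels == j):
            centers[j] = data[labels == j].mean(axis=0)

print("candidate list:\n", centers)
print("best candidate error:", min(np.linalg.norm(c - true_mean) for c in centers))
```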
arXiv Detail & Related papers (2021-06-16T03:34:14Z)
- Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems [45.17023170054112]
We consider the setting of vector-valued non-linear dynamical systems $X_{t+1} = \phi(A^{*} X_t) + \eta_t$, where $\eta_t$ is unbiased noise and $\phi : \mathbb{R} \to \mathbb{R}$ is a known link function that satisfies a certain expansivity property.
arXiv Detail & Related papers (2021-05-24T22:14:26Z)
- Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$, matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2})$ samples up to a squared logarithmic factor, a number linear in the high dimension $k$ and nearly linear in the matrices' typically low rank $r$.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)
- Truncated Linear Regression in High Dimensions [26.41623833920794]
In truncated linear regression, we are given samples $(A_i, y_i)_i$ whose dependent variable equals $y_i = A_i^{\rm T} \cdot x^* + \eta_i$, where $x^*$ is some fixed unknown vector of interest, and a sample is observed only when $y_i$ falls within some truncation set.
The goal is to recover $x^*$ under some favorable conditions on the $A_i$'s and the noise distribution.
We prove that there exists a computationally and statistically efficient method for recovering $k$-sparse $n$-dimensional vectors $x^*$ from $m$ truncated samples.
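The sketch below simulates this observation model and shows why truncation is statistically nontrivial: ordinary least squares on the surviving samples is biased, since selection depends on $y_i$. The truncation set chosen here is an illustrative assumption; the paper's estimator, which corrects for this bias, is not shown.

```python
import numpy as np

# Truncated-regression observation model: a sample (A_i, y_i) with
# y_i = A_i^T x* + eta_i is revealed ONLY if y_i lies in a truncation set S
# (here S = (0, inf), an illustrative choice).

rng = np.random.default_rng(3)
n, m = 10, 5000
x_star = rng.standard_normal(n)

A = rng.standard_normal((m, n))
y = A @ x_star + rng.standard_normal(m)

keep = y > 0.0                       # truncation: observe only y_i in S
A_obs, y_obs = A[keep], y[keep]

# Naive OLS on the truncated sample is biased, because whether a sample
# survives depends on its dependent variable.
ols = np.linalg.lstsq(A_obs, y_obs, rcond=None)[0]
print("kept fraction:", keep.mean())
print("naive OLS error on truncated data:", np.linalg.norm(ols - x_star))
```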
arXiv Detail & Related papers (2020-07-29T00:31:34Z)
- Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^{\top}]$ has bounded condition number and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is …
arXiv Detail & Related papers (2020-07-16T06:44:44Z)
- Non-asymptotic and Accurate Learning of Nonlinear Dynamical Systems [34.394552166070746]
We study gradient-based algorithms to learn the system dynamics $\theta$ from samples obtained from a single finite trajectory.
Unlike existing work, our bounds are noise-sensitive, which allows for learning ground-truth dynamics with high accuracy and small sample complexity.
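For contrast with the GLMtron-style sketch above, here is a hypothetical sketch of a plain stochastic gradient pass on the one-step squared loss over a single trajectory, the kind of gradient-based scheme this paper analyzes. The tanh link, step size, and SGD (rather than full-gradient) form are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: SGD on L(theta) = sum_t ||x_{t+1} - phi(theta x_t)||^2
# over one trajectory. Unlike the GLMtron-style pass earlier, the chain rule
# here keeps the derivative of the link function.

rng = np.random.default_rng(4)
d, T, lr = 5, 20_000, 0.1
A = rng.standard_normal((d, d))
theta_star = 0.9 * A / np.linalg.norm(A, 2)

phi = np.tanh                        # smooth, strictly increasing link
dphi = lambda z: 1.0 - np.tanh(z) ** 2

# Roll out a single trajectory.
X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = phi(theta_star @ X[t]) + 0.1 * rng.standard_normal(d)

theta = np.zeros((d, d))
for t in range(T):
    z = theta @ X[t]
    resid = phi(z) - X[t + 1]
    theta -= lr * np.outer(resid * dphi(z), X[t])   # SGD step on ||resid||^2

err = np.linalg.norm(theta - theta_star) / np.linalg.norm(theta_star)
print("relative recovery error:", err)
```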
arXiv Detail & Related papers (2020-02-20T02:36:44Z)