Symmetric Linear Dynamical Systems are Learnable from Few Observations
- URL: http://arxiv.org/abs/2512.05337v1
- Date: Fri, 05 Dec 2025 00:33:31 GMT
- Title: Symmetric Linear Dynamical Systems are Learnable from Few Observations
- Authors: Minh Vu, Andrey Y. Lokhov, Marc Vuffray
- Abstract summary: We introduce and analyze a new estimator that achieves a small maximum element-wise error on the recovery of symmetric dynamic matrices. This estimator is based on the method of moments and does not rely on problem-specific regularization.
- Score: 3.7910294046839845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of learning the parameters of an $N$-dimensional stochastic linear dynamical system, under both full and partial observations, from a single trajectory of length $T$. We introduce and analyze a new estimator that achieves a small maximum element-wise error on the recovery of symmetric dynamic matrices using only $T=\mathcal{O}(\log N)$ observations, irrespective of whether the matrix is sparse or dense. This estimator is based on the method of moments and does not rely on problem-specific regularization. This is especially important for applications such as structure discovery.
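The moment-based recovery idea can be illustrated with a generic sketch (this is not the paper's exact estimator, and the dimensions, noise level, and symmetrization step below are illustrative assumptions): for $x_{t+1} = A x_t + w_t$ with symmetric stable $A$, the lag-1 and lag-0 empirical autocovariances satisfy $\Sigma_1 \approx A \Sigma_0$, so $\hat{A} = \hat{\Sigma}_1 \hat{\Sigma}_0^{-1}$, symmetrized, recovers $A$ from a single trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric, stable dynamics: x_{t+1} = A x_t + w_t
N, T, sigma = 10, 5000, 0.1
Q = rng.standard_normal((N, N))
A = (Q + Q.T) / 2
A *= 0.9 / np.max(np.abs(np.linalg.eigvalsh(A)))  # spectral radius 0.9

# Simulate one trajectory of length T
x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):
    x = A @ x + sigma * rng.standard_normal(N)
    X[t] = x

# Empirical lag-0 and lag-1 autocovariances (the "moments")
S0 = X[:-1].T @ X[:-1] / (T - 1)
S1 = X[1:].T @ X[:-1] / (T - 1)

# Moment estimate A_hat = S1 S0^{-1}, then project onto symmetric matrices
A_hat = np.linalg.solve(S0, S1.T).T  # S0 is symmetric, so solve(S0, S1.T).T = S1 @ inv(S0)
A_sym = (A_hat + A_hat.T) / 2

# Maximum element-wise recovery error
err = np.max(np.abs(A_sym - A))
```

Note that no sparsity assumption or regularizer appears anywhere in the sketch, matching the abstract's point that the estimator is regularization-free.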
Related papers
- Joint learning of a network of linear dynamical systems via total variation penalization [4.6390064640459]
We consider the problem of joint estimation of the parameters of $m$ linear dynamical systems. We derive non-asymptotic bounds on the mean squared error which hold with high probability.
arXiv Detail & Related papers (2025-11-24T04:07:46Z) - Explicit Discovery of Nonlinear Symmetries from Dynamic Data [50.20526548924647]
LieNLSD is the first method capable of determining the number of infinitesimal generators with nonlinear terms and their explicit expressions. LieNLSD shows qualitative advantages over existing methods and improves the long-rollout accuracy of neural PDE solvers by over 20%.
arXiv Detail & Related papers (2025-10-02T09:54:08Z) - The Sample Complexity of Online Reinforcement Learning: A Multi-model Perspective [55.15192437680943]
We study the sample complexity of online reinforcement learning in the general setting of nonlinear dynamical systems with continuous state and action spaces. Our algorithm achieves a policy regret of $\mathcal{O}(N\epsilon^2 + \ln(m(\epsilon)/\epsilon^2))$, where $N$ is the time horizon. In the special case where the dynamics are parametrized by a compact and real-valued set of parameters, we prove a policy regret of $\mathcal{O}(\sqrt{\cdots})$
arXiv Detail & Related papers (2025-01-27T10:01:28Z) - Learning Linear Dynamics from Bilinear Observations [8.238163867581848]
We consider the problem of learning a realization of a partially observed dynamical system with linear state transitions and bilinear observations.
Under very mild assumptions on the process and measurement noises, we provide a finite time analysis for learning the unknown dynamics matrices.
arXiv Detail & Related papers (2024-09-24T23:11:47Z) - From Spectral Theorem to Statistical Independence with Application to
System Identification [11.98319841778396]
We provide the first quantitative handle on the decay rate of finite powers of the state transition matrix $\|A^k\|$.
It is shown that when a stable dynamical system has only one distinct eigenvalue and a discrepancy of $n-1$, $\|A\|$ has a dependence on $n$, and the resulting dynamics are inseparable.
We show that the element-wise error is essentially a variant of the well-known Littlewood-Offord problem.
arXiv Detail & Related papers (2023-10-16T15:40:43Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Single Trajectory Nonparametric Learning of Nonlinear Dynamics [8.438421942654292]
Given a single trajectory of a dynamical system, we analyze the performance of the nonparametric least squares estimator (LSE).
We leverage recently developed information-theoretic methods to establish the optimality of the LSE for nonparametric hypothesis classes.
We specialize our results to a number of scenarios of practical interest, such as Lipschitz dynamics, generalized linear models, and dynamics described by functions in certain classes of Reproducing Kernel Hilbert Spaces (RKHS).
arXiv Detail & Related papers (2022-02-16T19:38:54Z) - Minimax Estimation of Linear Functions of Eigenvectors in the Face of
Small Eigen-Gaps [95.62172085878132]
Eigenvector perturbation analysis plays a vital role in various statistical data science applications.
We develop a suite of statistical theory that characterizes the perturbation of arbitrary linear functions of an unknown eigenvector.
In order to mitigate a non-negligible bias issue inherent to the natural "plug-in" estimator, we develop de-biased estimators.
arXiv Detail & Related papers (2021-04-07T17:55:10Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Information-Theoretic Limits for the Matrix Tensor Product [8.206394018475708]
This paper studies a high-dimensional inference problem involving the matrix tensor product of random matrices.
On the technical side, this paper introduces some new techniques for the analysis of high-dimensional matrix-preserving signals.
arXiv Detail & Related papers (2020-05-22T17:03:48Z) - Finite-time Identification of Stable Linear Systems: Optimality of the
Least-Squares Estimator [79.3239137440876]
We present a new finite-time analysis of the estimation error of the Ordinary Least Squares (OLS) estimator for stable linear time-invariant systems.
We characterize the number of observed samples sufficient for the OLS estimator to be $(\varepsilon,\delta)$-PAC, i.e., to yield an estimation error less than $\varepsilon$ with probability at least $1-\delta$.
arXiv Detail & Related papers (2020-03-17T20:59:17Z)
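The $(\varepsilon,\delta)$-PAC notion in the last entry can be checked empirically with a minimal sketch (the toy system, horizon, and thresholds below are illustrative assumptions, not taken from the paper): run the OLS estimator on repeated independent trajectories and count how often the maximum element-wise error stays below $\varepsilon$.

```python
import numpy as np

def ols_estimate(X):
    """Ordinary least squares: regress x_{t+1} on x_t over one trajectory."""
    # lstsq solves min ||X[:-1] B - X[1:]||, so B approximates A^T
    return np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

rng = np.random.default_rng(1)
A = np.diag([0.5, -0.3, 0.2])       # stable LTI system: spectral radius < 1
N, T, eps, trials = 3, 2000, 0.1, 50

hits = 0
for _ in range(trials):
    x = np.zeros(N)
    X = np.empty((T, N))
    for t in range(T):
        x = A @ x + rng.standard_normal(N)
        X[t] = x
    A_hat = ols_estimate(X)
    hits += np.max(np.abs(A_hat - A)) < eps

# Empirical estimate of P(error < eps); (eps, delta)-PAC asks this to exceed 1 - delta
frac = hits / trials
```

For a horizon this long relative to the noise, nearly every trial should land inside the $\varepsilon$-ball, so `frac` serves as a Monte Carlo check of the $1-\delta$ coverage claim.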
This list is automatically generated from the titles and abstracts of the papers in this site.