High Effort, Low Gain: Fundamental Limits of Active Learning for Linear Dynamical Systems
- URL: http://arxiv.org/abs/2509.11907v1
- Date: Mon, 15 Sep 2025 13:29:24 GMT
- Title: High Effort, Low Gain: Fundamental Limits of Active Learning for Linear Dynamical Systems
- Authors: Nicolas Chatzikiriakos, Kevin Jamieson, Andrea Iannelli
- Abstract summary: We consider the problem of identifying an unknown linear dynamical system given a finite hypothesis class. We present sample complexity lower bounds that capture the choice of the selected excitation input. We propose an active learning algorithm that sequentially excites the system optimally with respect to the current estimate.
- Score: 1.530715277464342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we consider the problem of identifying an unknown linear dynamical system given a finite hypothesis class. In particular, we analyze the effect of the excitation input on the sample complexity of identifying the true system with high probability. To this end, we present sample complexity lower bounds that capture the choice of the selected excitation input. The sample complexity lower bound gives rise to a system theoretic condition to determine the potential benefit of experiment design. Informed by the analysis of the sample complexity lower bound, we propose a persistent excitation (PE) condition tailored to the considered setting, which we then use to establish sample complexity upper bounds. Notably, the PE condition is weaker than in the case of an infinite hypothesis class and allows analyzing different excitation inputs modularly. Crucially, the lower and upper bounds share the same dependency on key problem parameters. Finally, we leverage these insights to propose an active learning algorithm that sequentially excites the system optimally with respect to the current estimate, and provide sample complexity guarantees for the presented algorithm. Concluding simulations showcase the effectiveness of the proposed algorithm.
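The abstract's core loop (maintain a finite hypothesis set, score each candidate against observed transitions, and choose the next excitation input to separate the current estimate from its closest rival) can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the dynamics $x_{t+1} = A x_t + u_t + w_t$, the candidate matrices, and the SVD-based input rule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite hypothesis class for x_{t+1} = A x_t + u_t + w_t.
# All matrices and constants below are illustrative choices.
candidates = [
    np.array([[0.9, 0.1], [0.0, 0.8]]),
    np.array([[0.7, 0.3], [0.1, 0.6]]),
    np.array([[0.9, 0.0], [0.2, 0.8]]),
]
A_true = candidates[1]   # ground truth, unknown to the learner
sigma, T = 0.1, 300      # noise std and experiment length

x = np.zeros(2)
sse = np.zeros(len(candidates))  # cumulative squared one-step prediction error

for t in range(T):
    order = np.argsort(sse)
    best, rival = int(order[0]), int(order[1])
    # Active input design heuristic: steer the next state toward the direction
    # in which the current estimate and its closest rival disagree most,
    # i.e. the top right-singular vector of (A_best - A_rival).
    _, _, Vt = np.linalg.svd(candidates[best] - candidates[rival])
    u = Vt[0] - candidates[best] @ x
    x_next = A_true @ x + u + sigma * rng.standard_normal(2)
    # Score every hypothesis by its one-step prediction error.
    for i, A in enumerate(candidates):
        sse[i] += np.sum((x_next - (A @ x + u)) ** 2)
    x = x_next

print("identified index:", int(np.argmin(sse)))
```

Because wrong hypotheses accumulate an extra prediction error of roughly $\|(A_i - A_{\mathrm{true}})x_t\|^2$ per step, and the input keeps the state pointed at discriminating directions, the lowest-error candidate converges to the true system well before the noise fluctuations can mask the gap.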
Related papers
- Are Randomized Quantum Linear Systems Solvers Practical? [0.0]
Randomized quantum algorithms have been proposed in the context of quantum simulation and quantum linear algebra. We provide explicit bounds on all relevant parameters that control the total error for a randomized quantum linear systems solver. Our work serves as a bridge between theoretical algorithmic proposals and efficient hardware implementations.
arXiv Detail & Related papers (2025-10-15T17:12:55Z) - Unlocking Symbol-Level Precoding Efficiency Through Tensor Equivariant Neural Network [84.22115118596741]
We propose an end-to-end deep learning (DL) framework with low inference complexity for symbol-level precoding. We show that the proposed framework captures substantial performance gains of optimal SLP, while achieving an approximately 80-times speedup over conventional methods.
arXiv Detail & Related papers (2025-10-02T15:15:50Z) - The Sample Complexity of Online Reinforcement Learning: A Multi-model Perspective [55.15192437680943]
We study the sample complexity of online reinforcement learning in the general setting of nonlinear dynamical systems with continuous state and action spaces. Our algorithm achieves a policy regret of $\mathcal{O}(N\epsilon^2 + \ln(m(\epsilon))/\epsilon^2)$, where $N$ is the time horizon. In the special case where the dynamics are parametrized by a compact and real-valued set of parameters, we prove a policy regret of $\mathcal{O}(\sqrt{\dots}$
arXiv Detail & Related papers (2025-01-27T10:01:28Z) - Sample Complexity Bounds for Linear System Identification from a Finite Set [0.0]
We use the maximum likelihood estimator to identify the true system.<n>We leverage tools from information theory to provide a lower bound to the sample complexity.<n>The derived sample complexity bounds are analyzed analytically and numerically.
arXiv Detail & Related papers (2024-09-17T12:52:16Z) - A least-square method for non-asymptotic identification in linear switching control [17.938732931331064]
It is known that the underlying partially-observed linear dynamical system lies within a finite collection of known candidate models.
We characterize the finite-time sample complexity of this problem by leveraging recent advances in the non-asymptotic analysis of linear least-square methods.
We propose a data-driven switching strategy that identifies the unknown parameters of the underlying system.
arXiv Detail & Related papers (2024-04-11T20:55:38Z) - Fast Shapley Value Estimation: A Unified Approach [71.92014859992263]
We propose a straightforward and efficient Shapley estimator, SimSHAP, by eliminating redundant techniques.
In our analysis of existing approaches, we observe that estimators can be unified as a linear transformation of randomly summed values from feature subsets.
Our experiments validate the effectiveness of our SimSHAP, which significantly accelerates the computation of accurate Shapley values.
arXiv Detail & Related papers (2023-11-02T06:09:24Z) - Faster Stochastic Variance Reduction Methods for Compositional MiniMax Optimization [50.10952609321302]
Compositional minimax optimization is a pivotal challenge across various machine learning domains.
Current methods of compositional minimax optimization are plagued by sub-optimal complexities or heavy reliance on sizable batch sizes.
This paper introduces a novel method, called Nested STOchastic Recursive Momentum (NSTORM), which can achieve the optimal sample complexity of $O(\kappa^3/\epsilon^3)$.
arXiv Detail & Related papers (2023-08-18T14:57:21Z) - Bayesian sequential design of computer experiments for quantile set inversion [0.0]
We consider an unknown multivariate function representing a system, such as a complex numerical simulator. Our objective is to estimate the set of deterministic inputs leading to outputs whose probability is less than a given threshold.
arXiv Detail & Related papers (2022-11-02T10:14:05Z) - Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Expectation propagation on the diluted Bayesian classifier [0.0]
We introduce a statistical mechanics inspired strategy that addresses the problem of sparse feature selection in the context of binary classification.
A computational scheme known as expectation propagation (EP) is used to train a continuous-weights perceptron learning a classification rule.
EP is a robust and competitive algorithm in terms of variable selection properties, estimation accuracy and computational complexity.
arXiv Detail & Related papers (2020-09-20T23:59:44Z) - Beyond Worst-Case Analysis in Stochastic Approximation: Moment Estimation Improves Instance Complexity [58.70807593332932]
We study the oracle complexity of gradient-based methods for stochastic approximation problems. We focus on instance-dependent complexity instead of worst-case complexity.
Our proposed algorithm and its analysis provide a theoretical justification for the success of moment estimation.
arXiv Detail & Related papers (2020-06-08T09:25:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.