Learning Expected Reward for Switched Linear Control Systems: A
Non-Asymptotic View
- URL: http://arxiv.org/abs/2006.08105v1
- Date: Mon, 15 Jun 2020 03:13:17 GMT
- Title: Learning Expected Reward for Switched Linear Control Systems: A
Non-Asymptotic View
- Authors: Muhammad Abdullah Naeem, Miroslav Pajic
- Abstract summary: We show the existence of an invariant ergodic measure for switched linear dynamical systems.
We derive non-asymptotic bounds for learning the expected reward.
- Score: 10.051309746913512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we show the existence of an invariant ergodic measure for
switched linear dynamical systems (SLDSs) under a norm-stability assumption on the
system dynamics in some unbounded subset of $\mathbb{R}^{n}$. Consequently, given a
stationary Markov control policy, we derive non-asymptotic bounds for learning the
expected reward (w.r.t. the invariant ergodic measure the closed-loop system
mixes to) from time averages using Birkhoff's Ergodic Theorem. The presented
results provide a foundation for the non-asymptotic analysis of average-reward
optimal control of SLDSs. Finally, we illustrate the theoretical results in two
case studies.
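As a concrete illustration of the time-averaging idea, the following minimal sketch simulates a switched linear system under a fixed stationary switching distribution and estimates the expected reward by a Birkhoff time average along one trajectory. The mode matrices, noise level, and reward function are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical SLDS: x_{t+1} = A_{s_t} x_t + w_t, with mode s_t drawn i.i.d.
# from a stationary switching distribution. Matrices and reward are
# illustrative, not from the paper.
rng = np.random.default_rng(0)
A = [np.array([[0.6, 0.2], [0.0, 0.5]]),   # mode 0 (Schur stable)
     np.array([[0.4, -0.3], [0.1, 0.7]])]  # mode 1 (Schur stable)
mode_probs = [0.5, 0.5]                    # stationary switching distribution

def reward(x):
    # Illustrative bounded reward; the paper allows more general rewards.
    return np.exp(-x @ x)

def time_average_reward(T, x0=np.zeros(2)):
    """Estimate the expected reward by a Birkhoff time average over one path."""
    x, total = x0.copy(), 0.0
    for _ in range(T):
        s = rng.choice(2, p=mode_probs)    # sample the active mode
        w = 0.1 * rng.standard_normal(2)   # process noise
        x = A[s] @ x + w                   # switched linear update
        total += reward(x)
    return total / T

# Longer horizons should concentrate around the expected reward under the
# invariant measure; the paper quantifies this rate non-asymptotically.
print([round(time_average_reward(T), 4) for T in (10**2, 10**4, 10**5)])
```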
Related papers
- Stochastic Approximation with Unbounded Markovian Noise: A General-Purpose Theorem [7.443139252028032]
We consider average-reward reinforcement learning with an unbounded state space and reward function.
Recent works have studied this problem in the actor-critic framework.
We study Temporal Difference (TD) learning with linear function approximation.
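Since this entry centers on TD learning with linear function approximation in the average-reward setting, here is a minimal sketch of differential TD(0) on a toy Markov chain; the chain, features, and step sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Differential (average-reward) TD(0) with linear function approximation on a
# hypothetical 5-state Markov chain.
rng = np.random.default_rng(0)
n_states, d = 5, 3
P = rng.dirichlet(np.ones(n_states), size=n_states)  # random transition matrix
r = rng.uniform(0, 1, size=n_states)                 # per-state reward
phi = rng.standard_normal((n_states, d))             # linear features

theta = np.zeros(d)   # weights of the differential value function
avg_r = 0.0           # running estimate of the average reward
s = 0
for t in range(1, 200_000):
    s_next = rng.choice(n_states, p=P[s])
    # TD error in the average-reward setting uses r - avg_r, not a discount.
    delta = r[s] - avg_r + phi[s_next] @ theta - phi[s] @ theta
    alpha = 0.5 / np.sqrt(t)        # diminishing step size
    theta += alpha * delta * phi[s]
    avg_r += alpha * delta          # coupled average-reward update
    s = s_next

print("estimated average reward:", avg_r)
```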
arXiv Detail & Related papers (2024-10-29T03:40:53Z)
- Method-of-Moments Inference for GLMs and Doubly Robust Functionals under Proportional Asymptotics [30.324051162373973]
We consider the estimation of regression coefficients and the signal-to-noise ratio in high-dimensional Generalized Linear Models (GLMs).
We derive Consistent and Asymptotically Normal (CAN) estimators of our targets of inference.
We complement our theoretical results with numerical experiments and comparisons with existing literature.
arXiv Detail & Related papers (2024-08-12T12:43:30Z)
- A finite-sample generalization bound for stable LPV systems [0.0]
We derive a PAC bound for stable continuous-time linear parameter-varying (LPV) systems.
Our bound depends on the H2 norm of the chosen class of LPV systems, but not on the time interval over which the signals are considered.
arXiv Detail & Related papers (2024-05-16T12:42:36Z)
- BayesDAG: Gradient-Based Posterior Inference for Causal Discovery [30.027520859604955]
We introduce a scalable causal discovery framework based on a combination of Markov Chain Monte Carlo and Variational Inference.
Our approach directly samples DAGs from the posterior without requiring any DAG regularization.
We derive a novel equivalence to permutation-based DAG learning, which opens up the possibility of using any relaxed estimator defined over permutations.
arXiv Detail & Related papers (2023-07-26T02:34:13Z)
- Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects, e.g., identifiability and the properties of statistical estimation, remain obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
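The estimation problem behind this entry admits a compact illustration: with equally spaced, error-free samples of one trajectory, the one-step map can be fit by least squares and a matrix logarithm recovers the generator. The system and spacing below are hypothetical; the paper's identifiability condition is what licenses the final log step.

```python
import numpy as np
from scipy.linalg import expm, logm

# Recover A in x' = A x from equally spaced, error-free samples of one
# trajectory. System and spacing are illustrative placeholders.
A_true = np.array([[-0.5, 1.0], [-1.0, -0.5]])
dt, n_obs = 0.1, 50
x0 = np.array([1.0, 0.5])

# Generate the trajectory: x_{k+1} = expm(A*dt) x_k.
M_true = expm(A_true * dt)
X = [x0]
for _ in range(n_obs - 1):
    X.append(M_true @ X[-1])
X = np.array(X)

# Least squares for the one-step map M, then a matrix log recovers A.
# (logm picks the principal branch; identifiability fails when eigenvalues
# alias across branches, which the paper's sufficient condition rules out.)
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)  # B = M^T
A_hat = np.real(logm(B.T)) / dt
print(np.round(A_hat, 4))  # should match A_true
```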
arXiv Detail & Related papers (2022-10-12T06:46:38Z)
- Structure-Preserving Learning Using Gaussian Processes and Variational Integrators [62.31425348954686]
We propose combining a variational integrator for the nominal dynamics of a mechanical system with Gaussian process regression for learning the residual dynamics.
We extend our approach to systems with known kinematic constraints and provide formal bounds on the prediction uncertainty.
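In the spirit of this approach, the following hypothetical sketch integrates a pendulum's nominal dynamics with a variational (symplectic Euler) integrator and fits the residual force with Gaussian process regression, whose predictive standard deviation supplies an uncertainty estimate; the system, kernel, and data are illustrative placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
h, g, c = 0.05, 9.81, 0.4        # step size, gravity, unmodeled damping

def true_force(q, p):
    return -g * np.sin(q) - c * p   # gravity plus unknown damping

def nominal_force(q, p):
    return -g * np.sin(q)           # what the nominal model knows

# Collect residual-force training data along a true trajectory.
q, p, Xs, ys = 0.8, 0.0, [], []
for _ in range(400):
    Xs.append([q, p]); ys.append(true_force(q, p) - nominal_force(q, p))
    p = p + h * true_force(q, p)    # symplectic Euler on the true system
    q = q + h * p

gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-6).fit(Xs, ys)

# Roll out: nominal variational integrator plus GP residual correction.
q, p = 0.8, 0.0
for _ in range(5):
    resid, std = gp.predict([[q, p]], return_std=True)  # std = uncertainty
    p = p + h * (nominal_force(q, p) + resid[0])
    q = q + h * p
    print(f"q={q:+.3f}  p={p:+.3f}  residual std={std[0]:.2e}")
```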
arXiv Detail & Related papers (2021-12-10T11:09:29Z)
- On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study the properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD setting.
We observe the double descent phenomenon both theoretically and empirically.
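A minimal sketch of the setting, under illustrative assumptions (data, feature widths, and constant step size are not from the paper): random ReLU features trained by SGD, with the feature count swept through the interpolation threshold where the double-descent peak is expected.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 5

def make_data(n):
    X = rng.standard_normal((n, d))
    y = np.sin(X @ np.ones(d)) + 0.1 * rng.standard_normal(n)
    return X, y

Xtr, ytr = make_data(n_train)
Xte, yte = make_data(n_test)

def rf_features(X, W):
    # ReLU random features, scaled so feature norms stay O(1) in width.
    return np.maximum(X @ W, 0.0) / np.sqrt(W.shape[1])

for n_feat in (20, 100, 2000):   # under-, near-, over-parameterized
    W = rng.standard_normal((d, n_feat))
    Ztr, Zte = rf_features(Xtr, W), rf_features(Xte, W)
    theta = np.zeros(n_feat)
    for epoch in range(200):                 # constant step-size SGD
        for i in rng.permutation(n_train):
            g = (Ztr[i] @ theta - ytr[i]) * Ztr[i]
            theta -= 0.1 * g
    test_mse = np.mean((Zte @ theta - yte) ** 2)
    print(f"features={n_feat:5d}  test MSE={test_mse:.3f}")
```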
arXiv Detail & Related papers (2021-10-13T17:47:39Z)
- Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable method for semi-implicit variational inference (SIVI).
Our method turns SIVI's evidence lower bound into a rigorous objective amenable to gradient-based optimization.
arXiv Detail & Related papers (2021-01-15T11:39:09Z)
- Neural Stochastic Contraction Metrics for Learning-based Control and Estimation [13.751135823626493]
The NSCM framework allows autonomous agents to approximate optimal stable control and estimation policies in real-time.
It outperforms existing nonlinear control and estimation techniques including the state-dependent Riccati equation, iterative LQR, the EKF, and the neural contraction metric (NCM).
arXiv Detail & Related papers (2020-11-06T03:04:42Z)
- A maximum-entropy approach to off-policy evaluation in average-reward MDPs [54.967872716145656]
This work focuses on off-policy evaluation (OPE) with function approximation in infinite-horizon undiscounted Markov decision processes (MDPs).
We provide the first finite-sample OPE error bound, extending existing results beyond the episodic and discounted cases.
We show that this results in an exponential-family distribution whose sufficient statistics are the features, paralleling maximum-entropy approaches in supervised learning.
arXiv Detail & Related papers (2020-06-17T18:13:37Z)
- On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative, and in particular dissipative, Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
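As a hedged illustration of the idea, the sketch below discretizes the damped Hamiltonian system $\dot q = p$, $\dot p = -\nabla f(q) - \gamma p$ with a conformal symplectic Euler step and uses it as an optimizer; the objective and constants are hypothetical, not the paper's examples.

```python
import numpy as np

# Conformal (dissipative) symplectic Euler applied to optimization.
def grad_f(q):                       # gradient of a toy quadratic objective
    return np.array([2.0, 10.0]) * q

q = np.array([3.0, -2.0])
p = np.zeros(2)
h, gamma = 0.1, 1.0
for k in range(200):
    p = np.exp(-gamma * h) * p       # exact flow of the dissipative part
    p = p - h * grad_f(q)            # symplectic Euler kick
    q = q + h * p                    # symplectic Euler drift
print("q after 200 steps:", q)       # converges to the minimizer at 0
```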
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.