Reinforcement Learning Policies in Continuous-Time Linear Systems
- URL: http://arxiv.org/abs/2109.07630v3
- Date: Wed, 7 Jun 2023 23:36:25 GMT
- Title: Reinforcement Learning Policies in Continuous-Time Linear Systems
- Authors: Mohamad Kazem Shirani Faradonbeh, Mohamad Sadegh Shirani Faradonbeh
- Abstract summary: We present online policies that learn optimal actions fast by carefully randomizing the parameter estimates.
We prove sharp stability results for inexact system dynamics and tightly specify the infinitesimal regret caused by sub-optimal actions.
Our analysis sheds light on fundamental challenges in continuous-time reinforcement learning and suggests a useful cornerstone for similar problems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Linear dynamical systems that obey stochastic differential equations are
canonical models. While optimal control of known systems has a rich literature,
the problem is technically hard under model uncertainty and there are hardly
any results. We initiate study of this problem and aim to learn (and
simultaneously deploy) optimal actions for minimizing a quadratic cost
function. Indeed, this work is the first that comprehensively addresses the
crucial challenge of balancing exploration versus exploitation in
continuous-time systems. We present online policies that learn optimal actions
fast by carefully randomizing the parameter estimates, and establish their
performance guarantees: a regret bound that grows with square-root of time
multiplied by the number of parameters. Implementation of the policy for a
flight-control task demonstrates its efficacy. Further, we prove sharp
stability results for inexact system dynamics and tightly specify the
infinitesimal regret caused by sub-optimal actions. To obtain the results, we
conduct a novel eigenvalue-sensitivity analysis for matrix perturbation,
establish upper-bounds for comparative ratios of stochastic integrals, and
introduce the new method of policy differentiation. Our analysis sheds light on
fundamental challenges in continuous-time reinforcement learning and suggests a
useful cornerstone for similar problems.
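The core mechanism described above, randomizing the parameter estimates before computing the certainty-equivalent feedback, can be sketched as follows. This is a toy illustration under an assumed two-state system estimate (`A_hat`, `B_hat`) and perturbation scale, not the authors' implementation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)

def randomized_ce_gain(A_hat, B_hat, Q, R, scale=0.1):
    """Perturb the current parameter estimates, solve the continuous-time
    algebraic Riccati equation for the perturbed model, and return the
    certainty-equivalent feedback gain (applied as u = -K x)."""
    A_r = A_hat + scale * rng.standard_normal(A_hat.shape)
    B_r = B_hat + scale * rng.standard_normal(B_hat.shape)
    P = solve_continuous_are(A_r, B_r, Q, R)
    K = np.linalg.solve(R, B_r.T @ P)
    return K

# hypothetical 2-state, 1-input system estimate and quadratic cost weights
A_hat = np.array([[0.0, 1.0], [-1.0, -0.5]])
B_hat = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K = randomized_ce_gain(A_hat, B_hat, Q, R)
```

The randomization is what drives exploration: acting on a perturbed model keeps the closed-loop data informative enough for the estimates to converge.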
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to the coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z)
- Stochastic Reinforcement Learning with Stability Guarantees for Control of Unknown Nonlinear Systems [6.571209126567701]
We propose a reinforcement learning algorithm that stabilizes the system by learning a local linear representation of the dynamics.
We demonstrate the effectiveness of our algorithm on several challenging high-dimensional dynamical systems.
arXiv Detail & Related papers (2024-09-12T20:07:54Z)
- Sublinear Regret for a Class of Continuous-Time Linear-Quadratic Reinforcement Learning Problems [10.404992912881601]
We study reinforcement learning for a class of continuous-time linear-quadratic (LQ) control problems for diffusions.
We apply a model-free approach that relies neither on knowledge of model parameters nor on their estimations, and devise an actor-critic algorithm to learn the optimal policy parameter directly.
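A hypothetical sketch of the model-free idea: optimize the feedback gain directly from rollout costs, here via a two-point zeroth-order gradient estimate rather than the paper's actual actor-critic construction. The system matrices, cost weights, and step sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def lqr_cost(K, A, B, Q, R, x0, dt=0.01, T=500):
    """Euler rollout of the closed loop x' = (A - B K) x, accumulating the
    quadratic cost. A and B only define the simulator; the learner never
    reads them directly."""
    x, c = x0.copy(), 0.0
    for _ in range(T):
        u = -K @ x
        c += (x @ Q @ x + u @ R @ u) * dt
        x = x + (A @ x + B @ u) * dt
    return c

def pg_step(K, cost_fn, lr=1e-3, sigma=0.05):
    """Two-point zeroth-order policy-gradient step on the gain K."""
    U = rng.standard_normal(K.shape)
    g = (cost_fn(K + sigma * U) - cost_fn(K - sigma * U)) / (2 * sigma) * U
    return K - lr * g

# toy 2-state, 1-input system with a stabilizing initial gain
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
x0 = np.array([1.0, 0.0])
cost = lambda K: lqr_cost(K, A, B, Q, R, x0)
K = np.array([[0.5, 0.5]])
for _ in range(20):
    K = pg_step(K, cost)
```

Because only rollout costs are queried, this kind of update needs neither the model parameters nor their estimates, which is the defining feature of the model-free setting above.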
arXiv Detail & Related papers (2024-07-24T12:26:21Z)
- Learning Optimal Deterministic Policies with Stochastic Policy Gradients [62.81324245896716]
Policy gradient (PG) methods are successful approaches to deal with continuous reinforcement learning (RL) problems.
In common practice, convergence (hyper)policies are learned only to deploy their deterministic version.
We show how to tune the exploration level used for learning to optimize the trade-off between the sample complexity and the performance of the deployed deterministic policy.
arXiv Detail & Related papers (2024-05-03T16:45:15Z)
- Risk-Sensitive Stochastic Optimal Control as Rao-Blackwellized Markovian Score Climbing [3.9410617513331863]
Optimal control of stochastic dynamical systems is a crucial challenge in sequential decision-making.
Control-as-inference approaches have had considerable success, providing a viable risk-sensitive framework to address the exploration-exploitation dilemma.
This paper introduces a novel perspective by framing risk-sensitive control as Markovian score climbing under samples drawn from a conditional particle filter.
arXiv Detail & Related papers (2023-12-21T16:34:03Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing constraint violations in safe reinforcement learning tasks.
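A toy sketch of the log-barrier idea on a scalar problem: the objective, constraint, barrier weight `t`, and step-size rule below are illustrative assumptions, not the paper's LBSGD specification. The key feature is a step size that shrinks as the iterate approaches the constraint boundary, so every iterate stays feasible:

```python
import numpy as np

def lbsgd_step(x, grad_f, g, grad_g, eta, t=10.0):
    """One log-barrier gradient step on f(x) - (1/t) * log(-g(x)),
    valid while g(x) < 0. The step size is capped so the update cannot
    overshoot the constraint boundary."""
    barrier_grad = grad_f(x) + grad_g(x) / (t * (-g(x)))
    step = min(eta, -g(x) / (2 * np.linalg.norm(barrier_grad) + 1e-12))
    return x - step * barrier_grad

# toy: minimize f(x) = x^2 subject to g(x) = x - 1 < 0, feasible start
x = 0.5
for _ in range(200):
    x = lbsgd_step(np.array([x]), lambda z: 2 * z,
                   lambda z: float(z[0] - 1.0),
                   lambda z: np.ones_like(z), eta=0.1)[0]
print(x)
```

The iterate converges near the barrier-shifted optimum just below zero while never crossing the boundary at 1, which is the "safety during learning" property the summary above refers to.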
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- Regret Analysis of Certainty Equivalence Policies in Continuous-Time Linear-Quadratic Systems [0.0]
This work studies theoretical performance guarantees of a ubiquitous reinforcement learning policy for controlling the canonical linear-quadratic system model.
We establish square-root-of-time regret bounds, indicating that the randomized certainty-equivalent policy learns optimal control actions quickly from a single state trajectory.
arXiv Detail & Related papers (2022-06-09T11:47:36Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Reinforcement Learning with Fast Stabilization in Linear Dynamical Systems [91.43582419264763]
We study model-based reinforcement learning (RL) in unknown stabilizable linear dynamical systems.
We propose an algorithm that certifies fast stabilization of the underlying system by effectively exploring the environment.
We show that the proposed algorithm attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret after $T$ time steps of agent-environment interaction.
arXiv Detail & Related papers (2020-07-23T23:06:40Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.