Sample-efficient Safe Learning for Online Nonlinear Control with Control
Barrier Functions
- URL: http://arxiv.org/abs/2207.14419v1
- Date: Fri, 29 Jul 2022 00:54:35 GMT
- Title: Sample-efficient Safe Learning for Online Nonlinear Control with Control
Barrier Functions
- Authors: Wenhao Luo, Wen Sun and Ashish Kapoor
- Abstract summary: Reinforcement Learning and continuous nonlinear control have been successfully deployed in multiple domains of complicated sequential decision-making tasks.
Given the exploration nature of the learning process and the presence of model uncertainty, it is challenging to apply them to safety-critical control tasks.
We propose a \emph{provably} sample-efficient episodic safe learning framework for online control tasks.
- Score: 35.9713619595494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement Learning (RL) and continuous nonlinear control have been
successfully deployed in multiple domains of complicated sequential
decision-making tasks. However, given the exploration nature of the learning
process and the presence of model uncertainty, it is challenging to apply them
to safety-critical control tasks due to the lack of safety guarantee. On the
other hand, while combining control-theoretical approaches with learning
algorithms has shown promise in safe RL applications, the sample efficiency of
safe data collection process for control is not well addressed. In this paper,
we propose a \emph{provably} sample efficient episodic safe learning framework
for online control tasks that leverages safe exploration and exploitation in an
unknown, nonlinear dynamical system. In particular, the framework 1) extends
control barrier functions (CBFs) in a stochastic setting to achieve provable
high-probability safety under uncertainty during model learning and 2)
integrates an optimism-based exploration strategy to efficiently guide the safe
exploration process with learned dynamics for \emph{near optimal} control
performance. We provide formal analysis on the episodic regret bound against
the optimal controller and probabilistic safety with theoretical guarantees.
Simulation results are provided to demonstrate the effectiveness and efficiency
of the proposed algorithm.
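The stochastic-CBF idea in the abstract can be illustrated with a minimal safety filter: a nominal control is projected onto a barrier constraint that is tightened by an uncertainty margin. The 1-D single-integrator setup and the names `alpha` and `margin` below are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
# Minimal sketch of a CBF safety filter for a 1-D single integrator
# x' = u with safe set h(x) = x_max - x >= 0. All names/values here
# (alpha, margin, x_max) are illustrative, not the paper's notation.

def safe_filter(x, u_nom, x_max=1.0, alpha=2.0, margin=0.1):
    """Project a nominal control onto the CBF condition
    dh/dt >= -alpha * h(x) + margin, where dh/dt = -u here.
    The margin crudely stands in for a high-probability bound on
    model uncertainty, in the spirit of stochastic CBF formulations."""
    h = x_max - x                # barrier value (distance to boundary)
    # constraint -u >= -alpha*h + margin  =>  u <= alpha*h - margin
    u_max = alpha * h - margin
    # in 1-D the QP "closest safe control" is just a clamp
    return min(u_nom, u_max)

# Far from the boundary the nominal control passes through unchanged;
# near the boundary it is overridden by the barrier constraint.
u_safe_near = safe_filter(0.9, u_nom=1.5)
u_safe_far = safe_filter(0.0, u_nom=0.5)
```

In higher dimensions the same projection becomes a quadratic program over the CBF constraint, which is how CBF safety filters are typically solved in practice.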
Related papers
- Sampling-based Safe Reinforcement Learning for Nonlinear Dynamical
Systems [15.863561935347692]
We develop provably safe and convergent reinforcement learning algorithms for control of nonlinear dynamical systems.
Recent advances at the intersection of control and RL follow a two-stage safety-filter approach to enforcing hard safety constraints.
We develop a single-stage, sampling-based approach to hard constraint satisfaction that learns RL controllers enjoying classical convergence guarantees.
arXiv Detail & Related papers (2024-03-06T19:39:20Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control
Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe
Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional nonlinear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing constraint violations in policy tasks in safe reinforcement learning.
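The log-barrier idea can be sketched as gradient descent on a barrier-augmented objective, with the step size shrunk near the constraint boundary so iterates stay strictly feasible. The function names, step-size rule, and toy problem below are illustrative assumptions, not the paper's exact scheme.

```python
# Hedged sketch in the spirit of LBSGD: minimize f(x) subject to
# g(x) <= 0 by descending on B(x) = f(x) - eta * log(-g(x)).
# Names (eta, lr) and the step-size rule are illustrative only.

def lbsgd_step(x, grad_f, g, grad_g, eta=0.1, lr=0.05):
    """One gradient step on the log-barrier surrogate.
    The barrier gradient grows without bound as g(x) -> 0, so the
    iterate is repelled from the boundary; the key idea is choosing
    the step size so iterates remain strictly feasible."""
    barrier_grad = grad_f(x) - eta * grad_g(x) / g(x)
    # shrink the step near the boundary (a crude stand-in for the
    # carefully chosen step size in the paper)
    step = min(lr, 0.5 * abs(g(x)) / (abs(barrier_grad) + 1e-12))
    return x - step * barrier_grad

# Toy problem: minimize f(x) = x^2 subject to g(x) = -x <= 0 (x >= 0).
# The barrier minimizer is x = sqrt(eta / 2), slightly inside the
# feasible region; every iterate stays strictly positive.
x = 1.0
for _ in range(50):
    x = lbsgd_step(x, grad_f=lambda x: 2 * x,
                   g=lambda x: -x, grad_g=lambda x: -1.0)
```

As `eta` is annealed toward zero, the barrier minimizer approaches the true constrained optimum at the boundary.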
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z)
- Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)
- Reinforcement Learning Control of Constrained Dynamic Systems with
Uniformly Ultimate Boundedness Stability Guarantee [12.368097742148128]
Reinforcement learning (RL) is promising for complicated nonlinear control problems.
Data-driven learning approaches are, however, notorious for not guaranteeing stability, the most fundamental property of any control system.
In this paper, the classic Lyapunov's method is explored to analyze the uniformly ultimate boundedness stability (UUB) solely based on data.
arXiv Detail & Related papers (2020-11-13T12:41:56Z)
- Neural Lyapunov Redesign [36.2939747271983]
Learned controllers must guarantee some notion of safety to ensure that they harm neither the agent nor the environment.
Lyapunov functions are effective tools to assess stability in nonlinear dynamical systems.
We propose a two-player collaborative algorithm that alternates between estimating a Lyapunov function and deriving a controller that gradually enlarges the stability region.
arXiv Detail & Related papers (2020-06-06T19:22:20Z)
- Chance-Constrained Trajectory Optimization for Safe Exploration and
Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
arXiv Detail & Related papers (2020-05-09T05:57:43Z)
- Cautious Reinforcement Learning with Logical Constraints [78.96597639789279]
An adaptive safe padding forces Reinforcement Learning (RL) to synthesise optimal control policies while ensuring safety during the learning process.
Theoretical guarantees are available on the optimality of the synthesised policies and on the convergence of the learning algorithm.
arXiv Detail & Related papers (2020-02-26T00:01:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.