Recursively Feasible Probabilistic Safe Online Learning with Control
Barrier Functions
- URL: http://arxiv.org/abs/2208.10733v2
- Date: Tue, 26 Sep 2023 21:40:19 GMT
- Title: Recursively Feasible Probabilistic Safe Online Learning with Control
Barrier Functions
- Authors: Fernando Castañeda, Jason J. Choi, Wonsuhk Jung, Bike Zhang, Claire
J. Tomlin, Koushil Sreenath
- Abstract summary: This paper introduces a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We study the feasibility of the resulting robust safety-critical controller.
We then use these conditions to devise an event-triggered online data collection strategy.
- Score: 63.18590014127461
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning-based control schemes have recently shown great efficacy in
performing complex tasks for a wide variety of applications. However, in order to deploy
them in real systems, it is of vital importance to guarantee that the system
will remain safe during online training and execution. Among the currently most
popular methods to tackle this challenge, Control Barrier Functions (CBFs)
serve as mathematical tools that provide a formal safety-preserving control
synthesis procedure for systems with known dynamics. In this paper, we first
introduce a model-uncertainty-aware reformulation of CBF-based safety-critical
controllers using Gaussian Process (GP) regression to bridge the gap between an
approximate mathematical model and the real system. Compared to previous
approaches, we study the feasibility of the resulting robust safety-critical
controller. This feasibility analysis results in a set of richness conditions
that the available information about the system should satisfy to guarantee
that a safe control action can be found at all times. We then use these
conditions to devise an event-triggered online data collection strategy that
ensures the recursive feasibility of the learned safety-critical controller.
Our proposed methodology endows the system with the ability to reason at all
times about whether the current information at its disposal is enough to ensure
safety or if new measurements are required. This, in turn, allows us to provide
formal results of forward invariance of a safe set with high probability, even
in a priori unexplored regions. Finally, we validate the proposed framework in
numerical simulations of an adaptive cruise control system and a kinematic
vehicle.
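The robust CBF quadratic program and the event-triggered data-collection rule described in the abstract can be sketched as follows. This is an illustrative simplification, not the authors' exact formulation: it assumes a scalar control input, a single affine CBF constraint, and a precomputed GP posterior standard deviation `sigma`; `alpha`, `beta`, and the trigger threshold `margin` are hypothetical tuning parameters.

```python
# Hedged sketch of a model-uncertainty-aware CBF safety filter: the GP
# posterior standard deviation `sigma` tightens the CBF constraint, and an
# event trigger requests new data when robust feasibility is nearly lost.

def robust_cbf_filter(u_ref, h, Lf_h, Lg_h, sigma, alpha=1.0, beta=2.0):
    """Solve min_u (u - u_ref)^2 s.t. Lf_h + Lg_h*u + alpha*h - beta*sigma >= 0.

    For a scalar input this QP has a closed form: keep u_ref if it already
    satisfies the tightened constraint, otherwise project onto its boundary.
    """
    slack = Lf_h + Lg_h * u_ref + alpha * h - beta * sigma
    if slack >= 0.0:
        return u_ref                      # reference input is already safe
    if Lg_h == 0.0:
        raise ValueError("infeasible: no control authority (Lg_h = 0)")
    # Boundary point of the tightened half-space constraint.
    return (beta * sigma - alpha * h - Lf_h) / Lg_h


def needs_new_data(u_max, h, Lf_h, Lg_h, sigma, alpha=1.0, beta=2.0, margin=0.1):
    """Event trigger: request a new measurement when even the best admissible
    input with |u| <= u_max leaves less than `margin` of robust feasibility."""
    best = Lf_h + abs(Lg_h) * u_max + alpha * h - beta * sigma
    return best < margin
```

For example, with h = 0.1, Lf_h = -1.0, Lg_h = 1.0, and no uncertainty (sigma = 0), the reference input u_ref = 0 violates the constraint and is projected to u = 0.9; increasing sigma enlarges the robustness margin beta*sigma until `needs_new_data` flags that a new measurement is required.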
Related papers
- Data-Driven Permissible Safe Control with Barrier Certificates [11.96747040086603]
This paper introduces a method of identifying a maximal set of safe strategies from data for systems with unknown dynamics.
Case studies show that increasing the size of the dataset for learning the system grows the permissible strategy set.
arXiv Detail & Related papers (2024-04-30T18:32:24Z)
- Data-Driven Distributionally Robust Safety Verification Using Barrier Certificates and Conditional Mean Embeddings [0.24578723416255752]
In pursuit of scalable formal verification algorithms that do not shift the problem onto unrealistic assumptions, we employ the concept of barrier certificates.
We show how to solve the resulting program efficiently using sum-of-squares optimization and a Gaussian process envelope.
arXiv Detail & Related papers (2024-03-15T17:32:02Z)
- Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z)
- Sample-efficient Safe Learning for Online Nonlinear Control with Control Barrier Functions [35.9713619595494]
Reinforcement Learning and continuous nonlinear control have been successfully deployed in multiple domains of complicated sequential decision-making tasks.
Given the exploration nature of the learning process and the presence of model uncertainty, it is challenging to apply them to safety-critical control tasks.
We propose a provably efficient episodic safe learning framework for online control tasks.
arXiv Detail & Related papers (2022-07-29T00:54:35Z)
- Joint Differentiable Optimization and Verification for Certified Reinforcement Learning [91.93635157885055]
In model-based reinforcement learning for safety-critical control systems, it is important to formally certify system properties.
We propose a framework that jointly conducts reinforcement learning and formal verification.
arXiv Detail & Related papers (2022-01-28T16:53:56Z)
- ProBF: Learning Probabilistic Safety Certificates with Barrier Functions [31.203344483485843]
The control barrier function is a useful tool to guarantee safety if we have access to the ground-truth system dynamics.
In practice, we have inaccurate knowledge of the system dynamics, which can lead to unsafe behaviors.
We show the efficacy of this method through experiments on Segway and Quadrotor simulations.
arXiv Detail & Related papers (2021-12-22T20:18:18Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
- Learning Hybrid Control Barrier Functions from Data [66.37785052099423]
Motivated by the lack of systematic tools to obtain safe control laws for hybrid systems, we propose an optimization-based framework for learning certifiably safe control laws from data.
In particular, we assume a setting in which the system dynamics are known and in which data exhibiting safe system behavior is available.
arXiv Detail & Related papers (2020-11-08T23:55:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.