Pointwise Feasibility of Gaussian Process-based Safety-Critical Control
under Model Uncertainty
- URL: http://arxiv.org/abs/2106.07108v1
- Date: Sun, 13 Jun 2021 23:08:49 GMT
- Title: Pointwise Feasibility of Gaussian Process-based Safety-Critical Control
under Model Uncertainty
- Authors: Fernando Castañeda, Jason J. Choi, Bike Zhang, Claire J. Tomlin,
Koushil Sreenath
- Abstract summary: Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
- Score: 77.18483084440182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are
popular tools for enforcing safety and stability of a controlled system,
respectively. They are commonly utilized to build constraints that can be
incorporated in a min-norm quadratic program (CBF-CLF-QP) which solves for a
safety-critical control input. However, since these constraints rely on a model
of the system, when this model is inaccurate the guarantees of safety and
stability can be easily lost. In this paper, we present a Gaussian Process
(GP)-based approach to tackle the problem of model uncertainty in
safety-critical controllers that use CBFs and CLFs. The considered model
uncertainty is affected by both state and control input. We derive
probabilistic bounds on the effects that such model uncertainty has on the
dynamics of the CBF and CLF. Then, we use these bounds to build safety and
stability chance constraints that can be incorporated in a min-norm convex
optimization program, called GP-CBF-CLF-SOCP. As the main theoretical result of
the paper, we present necessary and sufficient conditions for pointwise
feasibility of the proposed optimization problem. We believe that these
conditions could serve as a starting point towards understanding the minimal
requirements on the distribution of data collected from the real system that
are needed to guarantee safety. Finally, we validate the proposed framework with
numerical simulations of an adaptive cruise controller for an automotive
system.
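To make the construction described above concrete, the following is a minimal Python/cvxpy sketch of a min-norm controller in the spirit of the paper: a nominal CBF constraint and a relaxed CLF constraint, each tightened by a GP posterior mean and a confidence-scaled standard deviation, as in the GP-CBF-CLF-SOCP. All concrete choices here (the scalar dynamics f and g, the barrier h, the Lyapunov function V, the gains alpha and gamma, the confidence multiplier beta, and the GP mean/std being passed in as affine functions of u) are illustrative assumptions rather than the authors' implementation; with a one-dimensional input the second-order cone constraints reduce to absolute values.

import cvxpy as cp

# --- illustrative placeholders (assumptions, not from the paper) ----------
# Scalar system xdot = f(x) + g(x)*u with safe set {h(x) >= 0} and CLF V(x).
f  = lambda x: -x            # nominal drift
g  = lambda x: 1.0           # nominal input gain
h  = lambda x: 1.0 - x**2    # control barrier function
dh = lambda x: -2.0 * x      # gradient of h
V  = lambda x: x**2          # control Lyapunov function
dV = lambda x: 2.0 * x       # gradient of V

alpha, gamma = 1.0, 1.0      # class-K / convergence-rate gains
beta = 2.0                   # GP confidence multiplier (sets the chance level)

def gp_cbf_clf_socp(x, mu_h, sigma_h, mu_V, sigma_V):
    # mu_* and sigma_* are assumed GP posterior mean/std of the unknown
    # residual in hdot and Vdot, given as pairs (a, b) meaning a + b*u.
    u = cp.Variable()
    d = cp.Variable()        # CLF relaxation variable

    # Nominal Lie derivatives for the scalar example.
    Lfh, Lgh = dh(x) * f(x), dh(x) * g(x)
    LfV, LgV = dV(x) * f(x), dV(x) * g(x)

    # Chance constraints: nominal term + GP mean -/+ beta * GP std.
    cbf = (Lfh + Lgh * u + (mu_h[0] + mu_h[1] * u)
           - beta * cp.abs(sigma_h[0] + sigma_h[1] * u) + alpha * h(x) >= 0)
    clf = (LfV + LgV * u + (mu_V[0] + mu_V[1] * u)
           + beta * cp.abs(sigma_V[0] + sigma_V[1] * u) + gamma * V(x) <= d)

    prob = cp.Problem(cp.Minimize(cp.square(u) + 10.0 * cp.square(d)),
                      [cbf, clf, d >= 0])
    prob.solve()
    return u.value, prob.status   # status exposes pointwise (in)feasibility

# Example call with made-up GP outputs at x = 0.5.
print(gp_cbf_clf_socp(0.5, mu_h=(0.0, 0.1), sigma_h=(0.05, 0.02),
                      mu_V=(0.0, 0.1), sigma_V=(0.05, 0.02)))

In this toy setup, infeasibility at a given state arises when the uncertainty term grows with the input at least as fast as the control authority in the safety constraint; that trade-off is, loosely, what the paper's pointwise feasibility conditions characterize.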
Related papers
- Safe Neural Control for Non-Affine Control Systems with Differentiable
Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be reduced, sub-optimally, to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
arXiv Detail & Related papers (2023-09-06T05:35:48Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier
Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Learning Robust Output Control Barrier Functions from Safe Expert
Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine
Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP). (A rough sketch of this compound-kernel idea follows the related-papers list below.)
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Reinforcement Learning for Safety-Critical Control under Model Uncertainty,
using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
A reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
The resulting RL-CBF-CLF-QP addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z)
- Neural Lyapunov Model Predictive Control: Learning Safe Global Controllers
from Sub-optimal Examples [4.777323087050061]
In many real-world and industrial applications, it is typical to have an existing control strategy, for instance one executed by a human operator.
The objective of this work is to improve upon this unknown, safe but suboptimal policy by learning a new controller that retains safety and stability.
The proposed algorithm alternately learns the terminal cost and updates the MPC parameters according to a stability metric.
arXiv Detail & Related papers (2020-02-21T16:57:38Z)
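As referenced in the GP-CLF-SOCP entry above, the compound-kernel idea can be illustrated with a small numpy sketch: for a control-affine residual of the form d_f(x) + d_g(x) * u, one natural kernel on (state, input) pairs is a drift kernel plus input-weighted kernels, k((x,u),(x',u')) = k_f(x,x') + sum_i u_i u'_i k_{g_i}(x,x'). The RBF base kernels and lengthscales below are assumptions for illustration, not the exact construction used in that paper.

import numpy as np

def rbf(x, xp, lengthscale=1.0):
    # Standard RBF kernel on the state (illustrative base-kernel choice).
    diff = np.asarray(x, dtype=float) - np.asarray(xp, dtype=float)
    return np.exp(-0.5 * np.dot(diff, diff) / lengthscale**2)

def compound_kernel(x, u, xp, up, ls_f=1.0, ls_g=(1.0,)):
    # Kernel on (state, input) pairs for a control-affine residual:
    # k((x,u),(x',u')) = k_f(x,x') + sum_i u_i * u'_i * k_{g_i}(x,x').
    u, up = np.atleast_1d(u), np.atleast_1d(up)
    k = rbf(x, xp, ls_f)
    for i in range(len(u)):
        k += u[i] * up[i] * rbf(x, xp, ls_g[i])
    return k

# Tiny usage example with a 2-D state and a scalar input.
print(compound_kernel(x=[0.1, 0.2], u=0.5, xp=[0.0, 0.3], up=-0.4))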
This list is automatically generated from the titles and abstracts of the papers in this site.