Learning Performance-Oriented Control Barrier Functions Under Complex
Safety Constraints and Limited Actuation
- URL: http://arxiv.org/abs/2401.05629v1
- Date: Thu, 11 Jan 2024 02:51:49 GMT
- Title: Learning Performance-Oriented Control Barrier Functions Under Complex
Safety Constraints and Limited Actuation
- Authors: Shaoru Chen, Mahyar Fazlyab
- Abstract summary: Control Barrier Functions (CBFs) provide a framework for designing safety filters for nonlinear control systems.
However, finding a CBF that concurrently maximizes the volume of the resulting control invariant set continues to pose a substantial challenge.
We propose a novel self-supervised learning framework that holistically addresses these hurdles.
- Score: 8.1585306387285
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Control Barrier Functions (CBFs) provide an elegant framework for designing
safety filters for nonlinear control systems by constraining their trajectories
to an invariant subset of a prespecified safe set. However, the task of finding
a CBF that concurrently maximizes the volume of the resulting control invariant
set while accommodating complex safety constraints, particularly in high
relative degree systems with actuation constraints, continues to pose a
substantial challenge. In this work, we propose a novel self-supervised
learning framework that holistically addresses these hurdles. Given a Boolean
composition of multiple state constraints that define the safe set, our
approach starts with building a single continuously differentiable function
whose 0-superlevel set provides an inner approximation of the safe set. We then
use this function together with a smooth neural network to parameterize the CBF
candidate. Finally, we design a training loss function based on a
Hamilton-Jacobi partial differential equation to train the CBF while enlarging
the volume of the induced control invariant set. We demonstrate the
effectiveness of our approach via numerical experiments.
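The first step of the approach, building a single continuously differentiable function whose 0-superlevel set inner-approximates a composition of constraints, can be illustrated with a standard log-sum-exp soft minimum: it under-approximates the pointwise minimum, so its 0-superlevel set is contained in the intersection of the individual constraint sets. The sketch below is illustrative, not the authors' exact construction; the constraint functions `h1`, `h2` and the sharpness parameter `k` are hypothetical, and a disjunction (OR) of constraints would use a soft maximum analogously.

```python
import numpy as np

def soft_min(values, k=10.0):
    """Smooth under-approximation of min(values).

    -1/k * log(sum(exp(-k * v))) <= min(v), since sum(exp(-k * v)) >=
    exp(-k * min(v)). Hence the 0-superlevel set of the soft minimum is
    contained in that of the true minimum: an inner approximation of the
    intersection of the constraint sets. Larger k tightens the gap.
    """
    v = np.asarray(values, dtype=float)
    return -np.log(np.sum(np.exp(-k * v))) / k

# Hypothetical state constraints h_i(x) >= 0 defining the safe set:
def h1(x):
    # stay inside a disk of radius 2
    return 4.0 - x[0] ** 2 - x[1] ** 2

def h2(x):
    # stay in the half-plane x1 >= -1
    return x[0] + 1.0

def h_safe(x, k=10.0):
    """Single smooth function whose 0-superlevel set lies inside the safe set."""
    return soft_min([h1(x), h2(x)], k=k)
```

Because the soft minimum never exceeds the true minimum, any state certified safe by `h_safe` (i.e. `h_safe(x) >= 0`) also satisfies every individual constraint.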
Related papers
- Safe Neural Control for Non-Affine Control Systems with Differentiable
Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
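The CBF-based QP mentioned in this entry, in its simplest control-affine form with a single affine safety constraint, projects a nominal input onto the safe half-space and admits a closed-form solution. The following is a generic sketch of that standard safety filter, not code from the cited paper; the Lie-derivative terms `Lf_h`, `Lg_h` of the CBF and the gain `alpha` are supplied by the user.

```python
import numpy as np

def cbf_qp_filter(u_nom, Lf_h, Lg_h, h, alpha=1.0):
    """Minimally modify u_nom so that Lf_h + Lg_h @ u + alpha * h >= 0.

    Closed-form solution of  min ||u - u_nom||^2  subject to the single
    affine CBF constraint: a Euclidean projection onto a half-space.
    """
    Lg_h = np.atleast_1d(np.asarray(Lg_h, dtype=float))
    u_nom = np.atleast_1d(np.asarray(u_nom, dtype=float))
    slack = Lf_h + Lg_h @ u_nom + alpha * h
    if slack >= 0.0:
        # nominal input already satisfies the CBF condition
        return u_nom
    # project onto the constraint boundary, where the condition holds with equality
    return u_nom - slack * Lg_h / (Lg_h @ Lg_h)
```

For example, for a single integrator x' = u with h(x) = x (keep x >= 0), we have Lf_h = 0 and Lg_h = 1; an unsafe nominal input u_nom = -5 at h = 1 with alpha = 1 is clipped to u = -1, which makes h' + alpha*h exactly zero.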
arXiv Detail & Related papers (2023-09-06T05:35:48Z)
- On the Optimality, Stability, and Feasibility of Control Barrier Functions: An Adaptive Learning-Based Approach [4.399563188884702]
Control barrier function (CBF) and its variants have attracted extensive attention for safety-critical control.
There are still fundamental limitations of current CBFs: optimality, stability, and feasibility.
We propose the Adaptive Multi-step Control Barrier Function (AM-CBF), where we parameterize the class-$\mathcal{K}$ function by a neural network and train it together with the reinforcement learning policy.
arXiv Detail & Related papers (2023-05-05T15:11:28Z)
- Gaussian Control Barrier Functions: A Non-Parametric Paradigm to Safety [7.921648699199647]
We propose a non-parametric approach for online synthesis of CBFs using Gaussian Processes (GPs).
In addition to being non-parametric, GPs have favorable properties such as analytical tractability and robust uncertainty estimation.
We validate our approach experimentally on a quadrotor by demonstrating safe control for fixed but arbitrary safe sets.
arXiv Detail & Related papers (2022-03-29T12:21:28Z)
- Learning Differentiable Safety-Critical Control using Control Barrier Functions for Generalization to Novel Environments [16.68313219331689]
Control barrier functions (CBFs) have become a popular tool to enforce safety of a control system.
We propose a differentiable optimization-based safety-critical control framework.
arXiv Detail & Related papers (2022-01-04T20:43:37Z)
- BarrierNet: A Safety-Guaranteed Layer for Neural Networks [50.86816322277293]
BarrierNet allows the safety constraints of a neural controller to be adapted to changing environments.
We evaluate it on a series of control problems such as traffic merging and robot navigation in 2D and 3D space.
arXiv Detail & Related papers (2021-11-22T15:38:11Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
- Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
A reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
The resulting RL-CBF-CLF-QP addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z)
- Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.