Learning Control Barrier Functions from Expert Demonstrations
- URL: http://arxiv.org/abs/2004.03315v3
- Date: Mon, 9 Nov 2020 00:02:13 GMT
- Title: Learning Control Barrier Functions from Expert Demonstrations
- Authors: Alexander Robey, Haimin Hu, Lars Lindemann, Hanwen Zhang, Dimos V.
Dimarogonas, Stephen Tu, Nikolai Matni
- Abstract summary: We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
- Score: 69.23675822701357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the success of imitation and inverse reinforcement learning in
replicating expert behavior through optimal control, we propose a
learning-based approach to safe controller synthesis based on control barrier functions
(CBFs). We consider the setting of a known nonlinear control affine dynamical
system and assume that we have access to safe trajectories generated by an
expert - a practical example of such a setting would be a kinematic model of a
self-driving vehicle with safe trajectories (e.g., trajectories that avoid
collisions with obstacles in the environment) generated by a human driver. We
then propose and analyze an optimization-based approach to learning a CBF that
enjoys provable safety guarantees under suitable Lipschitz smoothness
assumptions on the underlying dynamical system. A strength of our approach is
that it is agnostic to the parameterization used to represent the CBF, assuming
only that the Lipschitz constant of such functions can be efficiently bounded.
Furthermore, if the CBF parameterization is convex, then under mild
assumptions, so is our learning process. We end with extensive numerical
evaluations of our results on both planar and realistic examples, using both
random feature and deep neural network parameterizations of the CBF. To the
best of our knowledge, these are the first results that learn provably safe
control barrier functions from data.
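To make the learning problem concrete, the following is a minimal, hypothetical PyTorch sketch of the kind of optimization the abstract describes, not the authors' implementation: a neural h is fit so that h >= gamma on demonstrated safe states, h <= -gamma on sampled states outside the safe region, and the CBF decrease condition hdot + alpha*h >= 0 holds along the expert's inputs (a sufficient lower bound for the supremum over all inputs). The dynamics f and g, the margins, and the data below are illustrative placeholders; in the paper, a Lipschitz argument is what lifts such margin-based sample constraints to guarantees over a full region.

```python
# Hypothetical sketch (not the authors' code): fitting a neural CBF h from
# demonstration data with hinge penalties on the three CBF conditions.
# The dynamics f, g, the margin gamma, and the sampled safe / unsafe states
# are all illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

n, m = 2, 1                      # state and input dimensions (toy example)
h = nn.Sequential(nn.Linear(n, 64), nn.Tanh(), nn.Linear(64, 1))

def f(x):                        # drift term of the control-affine system
    return torch.zeros_like(x)   # placeholder: single-integrator drift

def g(x):                        # input matrix of the control-affine system
    return torch.ones(x.shape[0], n, m)

# Stand-ins for data: safe states come from expert trajectories; "unsafe"
# states are sampled outside the demonstrated safe region.
x_safe = 0.5 * torch.randn(256, n)
x_unsafe = 2.0 + 0.5 * torch.randn(256, n)
u_expert = torch.randn(256, m)   # expert inputs paired with x_safe

gamma, alpha = 0.1, 1.0          # margin and class-K gain (assumed values)
opt = torch.optim.Adam(h.parameters(), lr=1e-3)

for step in range(2000):
    xs = x_safe.requires_grad_(True)
    hs = h(xs)
    # dh/dx via autograd, then the time derivative along the expert action
    grad_h = torch.autograd.grad(hs.sum(), xs, create_graph=True)[0]
    xdot = f(xs) + torch.bmm(g(xs), u_expert.unsqueeze(-1)).squeeze(-1)
    hdot = (grad_h * xdot).sum(dim=1, keepdim=True)

    loss = (
        torch.relu(gamma - hs).mean()              # h >= gamma on safe data
        + torch.relu(gamma + h(x_unsafe)).mean()   # h <= -gamma off the safe set
        + torch.relu(-hdot - alpha * hs).mean()    # hdot + alpha*h >= 0
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```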
Related papers
- Domain Adaptive Safety Filters via Deep Operator Learning [5.62479170374811]
We propose a self-supervised deep operator learning framework that learns the mapping from environmental parameters to the corresponding CBF.
We demonstrate the effectiveness of the method through numerical experiments on navigation tasks involving dynamic obstacles.
arXiv Detail & Related papers (2024-10-18T15:10:55Z)
- Safe and Stable Closed-Loop Learning for Neural-Network-Supported Model Predictive Control [0.0]
We consider safe learning of parametrized predictive controllers that operate with incomplete information about the underlying process.
Our method focuses on the system's overall long-term performance in closed-loop while keeping it safe and stable.
We explicitly incorporate stability information into the Bayesian-optimization-based learning procedure, thereby achieving rigorous probabilistic safety guarantees.
arXiv Detail & Related papers (2024-09-16T11:03:58Z)
- Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs); a minimal sketch of this standard CBF-QP filter appears after this list.
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
arXiv Detail & Related papers (2023-09-06T05:35:48Z)
- Model-Assisted Probabilistic Safe Adaptive Control With Meta-Bayesian Learning [33.75998206184497]
We develop a novel adaptive safe control framework that integrates meta-learning, Bayesian models, and the control barrier function (CBF) method.
Specifically, with the help of the CBF method, we learn the inherent and external uncertainties by a unified adaptive Bayesian linear regression model.
For a new control task, we refine the meta-learned models using a few samples, and introduce pessimistic confidence bounds into CBF constraints to ensure safe control.
arXiv Detail & Related papers (2023-07-03T08:16:01Z)
- Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As our core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z)
- Gaussian Control Barrier Functions: A Non-Parametric Paradigm to Safety [7.921648699199647]
We propose a non-parametric approach for online synthesis of CBFs using Gaussian Processes (GPs).
GPs have favorable properties, in addition to being non-parametric, such as analytical tractability and robust uncertainty estimation.
We validate our approach experimentally on a quadrotor by demonstrating safe control for fixed but arbitrary safe sets.
arXiv Detail & Related papers (2022-03-29T12:21:28Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
arXiv Detail & Related papers (2020-05-09T05:57:43Z)
- Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
A reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
The resulting RL-CBF-CLF-QP controller addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z)
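As referenced in the entry on differentiable CBFs above, the QP reduction applies to control-affine systems. The following is a minimal, illustrative sketch of that standard CBF-QP safety filter, not code from any of the listed papers: given a known CBF h, it minimally corrects a nominal input so the constraint Lf_h + Lg_h u >= -alpha*h holds, and with a single constraint the QP has a closed-form projection. All names and values below are assumptions for the example.

```python
# Illustrative sketch of the standard CBF-QP safety filter for a
# control-affine system x' = f(x) + g(x)u (not code from any listed paper).
# The QP  min_u ||u - u_nom||^2  s.t.  Lf_h + Lg_h @ u >= -alpha * h(x)
# has a closed-form solution when there is a single CBF constraint.
import numpy as np

def cbf_qp_filter(u_nom, Lf_h, Lg_h, h_x, alpha=1.0):
    """Minimum-norm correction of u_nom onto one CBF half-space constraint."""
    a = np.atleast_1d(np.asarray(Lg_h, dtype=float))  # constraint normal Lg_h(x)
    b = -alpha * h_x - Lf_h                           # constraint offset
    if a @ u_nom >= b:                                # nominal input already safe
        return u_nom
    # Euclidean projection onto {u : a @ u >= b}; assumes Lg_h(x) != 0
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Toy usage: single integrator (x' = u) with obstacle CBF
# h(x) = ||x - c||^2 - r^2, so Lf_h = 0 and Lg_h = 2 (x - c).
x, c, r = np.array([1.0, 0.0]), np.zeros(2), 0.5
u_nom = np.array([-1.0, 0.0])        # nominal input drives toward the obstacle
u_safe = cbf_qp_filter(u_nom, Lf_h=0.0, Lg_h=2.0 * (x - c), h_x=float(x @ x - r**2))
print(u_safe)                        # corrected input: [-0.375, 0.]
```

The differentiable-CBF paper above targets non-affine dynamics precisely because this affine QP reduction no longer applies directly.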