ProBF: Learning Probabilistic Safety Certificates with Barrier Functions
- URL: http://arxiv.org/abs/2112.12210v1
- Date: Wed, 22 Dec 2021 20:18:18 GMT
- Title: ProBF: Learning Probabilistic Safety Certificates with Barrier Functions
- Authors: Sulin Liu, Athindran Ramesh Kumar, Jaime F. Fisac, Ryan P. Adams,
Peter J. Ramadge
- Abstract summary: The control barrier function is a useful tool to guarantee safety if we have access to the ground-truth system dynamics.
In practice, we have inaccurate knowledge of the system dynamics, which can lead to unsafe behaviors.
We show the efficacy of this method through experiments on Segway and Quadrotor simulations.
- Score: 31.203344483485843
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Safety-critical applications require controllers/policies that can guarantee
safety with high confidence. The control barrier function is a useful tool to
guarantee safety if we have access to the ground-truth system dynamics. In
practice, we have inaccurate knowledge of the system dynamics, which can lead
to unsafe behaviors due to unmodeled residual dynamics. Learning the residual
dynamics with deterministic machine learning models can prevent the unsafe
behavior but can fail when the predictions are imperfect. In this situation, a
probabilistic learning method that reasons about the uncertainty of its
predictions can help provide robust safety margins. In this work, we use a
Gaussian process to model the projection of the residual dynamics onto a
control barrier function. We propose a novel optimization procedure to generate
safe controls that can guarantee safety with high probability. This equips the
safety filter with the ability to reason about the uncertainty of the GP's
predictions. We show the efficacy of this method through
experiments on Segway and Quadrotor simulations. Our proposed probabilistic
approach is able to reduce the number of safety violations significantly as
compared to the deterministic approach with a neural network.
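To make the mechanism concrete, here is a minimal Python sketch of a chance-constrained CBF safety filter in the spirit of the abstract; it is not the authors' implementation, and the names are hypothetical: `lfh` and `lgh` stand for the nominal Lie derivatives of the barrier along the drift and control directions, `mu_res` and `sigma_res` for the GP posterior mean and standard deviation of the residual dynamics projected onto the barrier, `alpha_h` for the class-K term, and `beta` for a confidence multiplier. ProBF's actual optimization also accounts for uncertainty in the control-dependent term; this toy folds everything into a single additive residual.

```python
import numpy as np

def probabilistic_cbf_filter(u_des, lfh, lgh, mu_res, sigma_res, alpha_h, beta=2.0):
    """Chance-constrained CBF safety filter (toy sketch, hypothetical names).

    Enforces  lfh + mu_res - beta * sigma_res + lgh @ u + alpha_h >= 0,
    i.e. the usual CBF condition  hdot(x, u) + alpha(h(x)) >= 0, with the
    unknown residual in hdot replaced by its GP posterior mean minus a
    confidence margin of beta posterior standard deviations.
    Assumes lgh is nonzero, so the control can influence the constraint.
    """
    a = lfh + mu_res - beta * sigma_res + alpha_h   # tightened drift term
    b = np.atleast_1d(np.asarray(lgh, dtype=float))
    u = np.atleast_1d(np.asarray(u_des, dtype=float))
    slack = a + b @ u
    if slack >= 0.0:            # desired control already satisfies the constraint
        return u
    # The minimal-deviation QP with one affine inequality has a closed form:
    # project u_des onto the hyperplane  a + b @ u = 0.
    return u + (-slack / (b @ b)) * b
```

Enforcing the mean condition with a beta-sigma margin is what turns the deterministic CBF constraint into a high-probability one: the larger the GP's predictive uncertainty, the more conservative the filtered control.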
Related papers
- Statistical Safety and Robustness Guarantees for Feedback Motion Planning of Unknown Underactuated Stochastic Systems [1.0323063834827415]
We propose a sampling-based planner that uses the mean dynamics model and simultaneously bounds the closed-loop tracking error via a learned disturbance bound.
We validate that our guarantees translate to empirical safety in simulation on a 10D quadrotor, and in the real world on a physical CrazyFlie quadrotor and Clearpath Jackal robot.
arXiv Detail & Related papers (2022-12-13T19:38:39Z) - ISAACS: Iterative Soft Adversarial Actor-Critic for Safety [0.9217021281095907]
This work introduces a novel approach enabling scalable synthesis of robust safety-preserving controllers for robotic systems.
A safety-seeking fallback policy is co-trained with an adversarial "disturbance" agent that aims to invoke the worst-case realization of model error.
While the learned control policy does not intrinsically guarantee safety, it is used to construct a real-time safety filter.
arXiv Detail & Related papers (2022-12-06T18:53:34Z) - Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Verification of safety critical control policies using kernel methods [0.0]
We propose a framework for modeling the error of the value function inherent in Hamilton-Jacobi reachability using a Gaussian process.
The derived safety controller can be used in conjunction with arbitrary controllers to provide a safe hybrid control law.
arXiv Detail & Related papers (2022-03-23T13:33:02Z) - Fail-Safe Adversarial Generative Imitation Learning [9.594432031144716]
We propose a safety layer that enables a closed-form probability density/gradient of the safe generative continuous policy, end-to-end generative adversarial training, and worst-case safety guarantees.
The safety layer maps all actions into a set of safe actions, and uses the change-of-variables formula plus additivity of measures for the density.
In an experiment on real-world driver interaction data, we empirically demonstrate tractability, safety and imitation performance of our approach.
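As a one-dimensional illustration of that density computation (a toy stand-in, assuming the safe set is an interval and the safety layer simply clips a Gaussian base action into it; the paper's layer is more general): the change-of-variables formula leaves the interior density unchanged because the map is the identity there, while additivity of measures collects each clipped tail into an atom at the corresponding boundary.

```python
from scipy.stats import norm

def clipped_gaussian_logprob(a_safe, mu, sigma, lo, hi):
    """Log-density of a = clip(x, lo, hi) with x ~ N(mu, sigma^2),
    taken w.r.t. a mixed reference measure (Lebesgue in the interior,
    counting measure on the two boundary atoms).
    """
    if a_safe <= lo:
        return norm.logcdf(lo, loc=mu, scale=sigma)   # atom: P(x <= lo)
    if a_safe >= hi:
        return norm.logsf(hi, loc=mu, scale=sigma)    # atom: P(x >= hi)
    # Interior: identity map, unit Jacobian -> base density unchanged.
    return norm.logpdf(a_safe, loc=mu, scale=sigma)
```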
arXiv Detail & Related papers (2022-03-03T13:03:06Z) - Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z) - Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
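A hedged sketch of the nominal pipeline such a synthesis starts from, with hypothetical names and the paper's probabilistic stability margin omitted: fit GPs to state-derivative data, linearize the posterior mean at an equilibrium by finite differences, and solve the resulting LQR problem.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_lqr_gain(X, Xdot, x_eq, u_eq, n_x, Q, R, eps=1e-4):
    """Fit one GP per state dimension to xdot = f(x, u), linearize the
    posterior mean at (x_eq, u_eq) by central differences, and return the
    nominal LQR gain. X stacks (state, input) rows; Xdot holds derivatives.
    (Toy sketch: the probabilistic robustness margin is omitted.)
    """
    gps = [GaussianProcessRegressor(normalize_y=True).fit(X, Xdot[:, i])
           for i in range(n_x)]
    z_eq = np.concatenate([x_eq, u_eq])

    def mean_dyn(z):
        return np.array([gp.predict(z[None, :])[0] for gp in gps])

    # Central-difference Jacobian of the GP posterior mean at equilibrium.
    J = np.zeros((n_x, z_eq.size))
    for j in range(z_eq.size):
        dz = np.zeros_like(z_eq)
        dz[j] = eps
        J[:, j] = (mean_dyn(z_eq + dz) - mean_dyn(z_eq - dz)) / (2 * eps)
    A, B = J[:, :n_x], J[:, n_x:]

    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)    # LQR gain K, with u = -K x
```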
arXiv Detail & Related papers (2021-05-17T08:36:18Z) - Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
arXiv Detail & Related papers (2020-05-09T05:57:43Z) - Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis using control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.