Verification of safety critical control policies using kernel methods
- URL: http://arxiv.org/abs/2203.12407v1
- Date: Wed, 23 Mar 2022 13:33:02 GMT
- Title: Verification of safety critical control policies using kernel methods
- Authors: Nikolaus Vertovec, Sina Ober-Blöbaum, Kostas Margellos
- Abstract summary: We propose a framework for modeling the error of the value function inherent in Hamilton-Jacobi reachability using a Gaussian process.
The derived safety controller can be used in conjunction with arbitrary controllers to provide a safe hybrid control law.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hamilton-Jacobi reachability methods for safety-critical control have been
well studied, but the safety guarantees derived rely on the accuracy of the
numerical computation. Thus, it is crucial to understand and account for any
inaccuracies that occur due to uncertainty in the underlying dynamics and
environment as well as the induced numerical errors. To this end, we propose a
framework for modeling the error of the value function inherent in
Hamilton-Jacobi reachability using a Gaussian process. The derived safety
controller can be used in conjunction with arbitrary controllers to provide a
safe hybrid control law. The marginal likelihood of the Gaussian process then
provides a confidence metric used to determine switches between a least
restrictive controller and a safety controller. We test both the prediction and
the correction capabilities of the presented method in a classical
pursuit-evasion example.
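To make the switching mechanism concrete, the sketch below (ours, not the authors') fits a Gaussian process to sampled value-function errors and gates between a task controller and a safety controller using a conservative, error-adjusted value estimate together with the GP's log marginal likelihood as a confidence metric. All names (value_fn, task_controller, safety_controller, the threshold tau), the synthetic error data, and the sign convention V(x) > 0 for safe states are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: GP error model for a Hamilton-Jacobi value function and a
# confidence-gated hybrid control law. Hypothetical names and data throughout.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Training data: sampled states x_i and observed value-function errors e_i
# (e.g., the gap between the numerically computed value and a
# higher-fidelity estimate at those states). Synthetic stand-in here.
X_train = np.random.uniform(-1.0, 1.0, size=(200, 2))
e_train = 0.05 * np.sin(X_train[:, 0]) + 0.01 * np.random.randn(200)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
)
gp.fit(X_train, e_train)

# Log marginal likelihood of the fitted GP, used here as the confidence
# metric that decides whether the error model can be trusted at all.
confidence = gp.log_marginal_likelihood()

def hybrid_control(x, value_fn, task_controller, safety_controller, tau=-50.0):
    """Least-restrictive hybrid law: apply the task controller unless the
    error-adjusted value estimate indicates possible unsafety, or the GP's
    marginal likelihood falls below the (assumed) confidence threshold tau.
    Convention assumed: value_fn(x) > 0 means x is in the safe set."""
    err_mean, err_std = gp.predict(x.reshape(1, -1), return_std=True)
    # Conservative value estimate: subtract predicted error plus a 2-sigma margin.
    v_safe = value_fn(x) - (err_mean[0] + 2.0 * err_std[0])
    if v_safe <= 0.0 or confidence < tau:
        return safety_controller(x)  # fall back to the safety controller
    return task_controller(x)        # otherwise stay least restrictive
```

The 2-sigma margin and the threshold tau are tuning choices of this sketch; the paper derives the actual error model and switching criterion from the Hamilton-Jacobi reachability computation.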
Related papers
- DADEE: Well-calibrated uncertainty quantification in neural networks for barriers-based robot safety [1.680461336282617]
Approaches based on Control Barrier Functions (CBFs) are popular because they are fast yet safe.
GPs and MC-Dropout, two common tools for learning and uncertainty estimation, each come with drawbacks:
GPs are non-parametric methods that are slow, while MC-Dropout does not capture aleatoric uncertainty.
We combine the two approaches to obtain more accurate uncertainty estimates both in- and out-of-domain.
arXiv Detail & Related papers (2024-06-30T07:55:32Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- ProBF: Learning Probabilistic Safety Certificates with Barrier Functions [31.203344483485843]
The control barrier function is a useful tool to guarantee safety if we have access to the ground-truth system dynamics.
In practice, we have inaccurate knowledge of the system dynamics, which can lead to unsafe behaviors.
We show the efficacy of this method through experiments on Segway and Quadrotor simulations.
arXiv Detail & Related papers (2021-12-22T20:18:18Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for Safety-Critical Applications [71.23286211775084]
We introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters.
Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error.
Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
arXiv Detail & Related papers (2021-09-06T17:10:01Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs; a minimal CBF safety-filter sketch is given after this list.
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
- Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z)
- Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis using control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
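Several entries above revolve around CBF safety filters under model uncertainty. As a shared reference point, here is a minimal sketch of the standard CBF quadratic-program filter for a single-integrator system; the dynamics, the barrier h, the class-K gain, and the robustness margin standing in for a learned uncertainty bound are all assumptions of this sketch, not details taken from any one of the papers.

```python
# Hedged sketch: a standard CBF quadratic-program safety filter for a
# single-integrator robot avoiding a circular obstacle. All constants
# below are illustrative assumptions.
import numpy as np

X_OBS = np.array([1.0, 1.0])   # obstacle center (assumed)
R_OBS = 0.5                    # obstacle radius (assumed)
ALPHA = 1.0                    # class-K gain, alpha(h) = ALPHA * h
MARGIN = 0.05                  # stand-in for a model-uncertainty bound

def h(x):
    """Barrier function: h(x) >= 0 on the safe set (outside the obstacle)."""
    return float(np.dot(x - X_OBS, x - X_OBS) - R_OBS**2)

def grad_h(x):
    return 2.0 * (x - X_OBS)

def cbf_filter(x, u_des):
    """Solve min ||u - u_des||^2 s.t. grad_h(x)·u >= -ALPHA*h(x) + MARGIN.
    For single-integrator dynamics (x_dot = u) and one affine constraint,
    the QP has the closed-form halfspace-projection solution below."""
    a = grad_h(x)
    b = -ALPHA * h(x) + MARGIN
    if a @ u_des >= b:
        return u_des                       # desired input already safe
    # Project u_des onto the constraint boundary a·u = b.
    return u_des + (b - a @ u_des) / (a @ a) * a

# Usage: filter a desired velocity that points straight at the obstacle.
x = np.array([0.2, 0.2])
u_des = X_OBS - x                          # naive controller toward obstacle
print(cbf_filter(x, u_des))                # safe, minimally modified input
```

For control-affine systems with multiple constraints, the same program is typically handed to a QP solver rather than solved with the single-constraint closed form used here; the papers above replace the fixed MARGIN with uncertainty bounds learned from data.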