Exact Verification of ReLU Neural Control Barrier Functions
- URL: http://arxiv.org/abs/2310.09360v1
- Date: Fri, 13 Oct 2023 18:59:04 GMT
- Title: Exact Verification of ReLU Neural Control Barrier Functions
- Authors: Hongchao Zhang, Junlin Wu, Yevgeniy Vorobeychik, Andrew Clark
- Abstract summary: Control Barrier Functions (CBFs) are a popular approach for safe control of nonlinear systems.
Recent machine learning methods that represent CBFs as neural networks have shown great promise.
This paper presents novel exact conditions and algorithms for verifying safety of feedforward NCBFs with ReLU activation functions.
- Score: 25.44521208451216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Control Barrier Functions (CBFs) are a popular approach for safe control of
nonlinear systems. In CBF-based control, the desired safety properties of the
system are mapped to nonnegativity of a CBF, and the control input is chosen to
ensure that the CBF remains nonnegative for all time. Recently, machine
learning methods that represent CBFs as neural networks (neural control barrier
functions, or NCBFs) have shown great promise due to the universal
representability of neural networks. However, verifying that a learned CBF
guarantees safety remains a challenging research problem. This paper presents
novel exact conditions and algorithms for verifying safety of feedforward NCBFs
with ReLU activation functions. The key challenge in doing so is that, due to
the piecewise linearity of the ReLU function, the NCBF will be
nondifferentiable at certain points, thus invalidating traditional safety
verification methods that assume a smooth barrier function. We resolve this
issue by leveraging a generalization of Nagumo's theorem for proving invariance
of sets with nonsmooth boundaries to derive necessary and sufficient conditions
for safety. Based on this condition, we propose an algorithm for safety
verification of NCBFs that first decomposes the NCBF into piecewise linear
segments and then solves a nonlinear program to verify safety of each segment
as well as the intersections of the linear segments. We mitigate the complexity
by only considering the boundary of the safe region and by pruning the segments
with Interval Bound Propagation (IBP) and linear relaxation. We evaluate our
approach through numerical studies with comparison to state-of-the-art
SMT-based methods. Our code is available at
https://github.com/HongchaoZhang-HZ/exactverif-reluncbf-nips23.
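To make the decomposition step concrete, the following is a minimal sketch, not the authors' released implementation, of the enumeration stage for a toy one-hidden-layer ReLU NCBF: each activation pattern yields an affine piece b(x) = w·x + r on a polyhedral region, and a feasibility LP keeps only the pieces whose zero-level set (the safe-set boundary) actually intersects their region. All weights and the state box (W1, b1, w2, b2, X_LO, X_HI) are illustrative assumptions, and the plain LP here stands in for the paper's IBP and linear-relaxation pruning.

```python
# Enumerate the linear segments of a small ReLU NCBF and prune the ones whose
# zero-level set is empty; illustrative weights, not the paper's code.
import itertools
import numpy as np
from scipy.optimize import linprog

W1 = np.array([[1.0, 0.5], [-0.8, 1.2], [0.3, -1.0]])  # hidden layer (3 neurons, 2 states)
b1 = np.array([0.1, -0.2, 0.05])
w2 = np.array([1.0, -0.5, 0.7])                        # output layer
b2 = 0.2
X_LO, X_HI = -2.0, 2.0                                 # state box searched

def segment(pattern):
    """Affine piece b(x) = w.x + r and its region A x <= c for one ReLU pattern."""
    d = np.array(pattern, dtype=float)
    w = (w2 * d) @ W1                # gradient of this linear piece
    r = (w2 * d) @ b1 + b2
    sign = 1.0 - 2.0 * d             # active neuron (1) -> keep W1[i].x + b1[i] >= 0
    A = sign[:, None] * W1
    c = -sign * b1
    return w, r, A, c

boundary_segments = []
for pattern in itertools.product([0, 1], repeat=W1.shape[0]):
    w, r, A, c = segment(pattern)
    # Feasibility LP: is there an x in the region with b(x) = 0?
    res = linprog(np.zeros(2), A_ub=A, b_ub=c,
                  A_eq=w[None, :], b_eq=np.array([-r]),
                  bounds=[(X_LO, X_HI)] * 2)
    if res.status == 0:              # the boundary crosses this piece
        boundary_segments.append((pattern, w, r))
print(f"{len(boundary_segments)} of {2 ** W1.shape[0]} segments meet the boundary")
```

In the paper's algorithm, each surviving segment, and each intersection of segments where the NCBF is nondifferentiable, would then be certified by a nonlinear program encoding the generalized Nagumo condition, i.e. that some admissible input u makes w·(f(x) + g(x)u) nonnegative on the boundary piece (for control-affine dynamics).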
Related papers
- Pareto Control Barrier Function for Inner Safe Set Maximization Under Input Constraints [50.920465513162334]
We introduce the PCBF algorithm to maximize the inner safe set of dynamical systems under input constraints.
We validate its effectiveness through comparison with Hamilton-Jacobi reachability for an inverted pendulum and through simulations on a 12-dimensional quadrotor system.
Results show that the PCBF consistently outperforms existing methods, yielding larger safe sets and ensuring safety under input constraints.
arXiv Detail & Related papers (2024-10-05T18:45:19Z)
- Verification of Neural Control Barrier Functions with Symbolic Derivative Bounds Propagation [6.987300771372427]
We propose a new efficient verification framework for ReLU-based neural CBFs.
We show that the symbolic bounds can be propagated through the inner product of the neural CBF Jacobian and the nonlinear system dynamics.
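As a rough illustration of that inner-product structure, the toy sketch below bounds the Lie derivative ⟨∇B(x), f(x)⟩ over a state box with plain interval arithmetic; the paper propagates symbolic linear bounds instead, which are tighter, and every numeric interval here is invented for the example.

```python
# Interval caricature of bounding <grad B(x), f(x)> over a box; the paper's
# symbolic linear bounds refine this. All intervals below are illustrative.
import numpy as np

def interval_dot(lo_a, hi_a, lo_b, hi_b):
    """Bounds on sum_i a_i * b_i from componentwise intervals on a and b."""
    prods = np.stack([lo_a * lo_b, lo_a * hi_b, hi_a * lo_b, hi_a * hi_b])
    return prods.min(axis=0).sum(), prods.max(axis=0).sum()

# Say bound propagation gave componentwise intervals for the NCBF gradient and
# for the dynamics f(x) over the current box:
grad_lo, grad_hi = np.array([0.5, -1.0]), np.array([1.5, -0.2])
f_lo, f_hi = np.array([-0.3, 0.1]), np.array([0.4, 0.9])

lb, ub = interval_dot(grad_lo, grad_hi, f_lo, f_hi)
print(f"Lie derivative bounded in [{lb:.3f}, {ub:.3f}]")
# lb >= 0: condition holds on the box; ub < 0: it fails; otherwise split the box.
```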
arXiv Detail & Related papers (2024-10-04T21:42:25Z)
- Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing a quadratic cost subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
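For orientation, the control-affine CBF-QP that this paper generalizes beyond admits a closed form when there is a single barrier constraint; the sketch below is that textbook min-norm filter with an invented single-integrator example, not the paper's neural-ODE construction.

```python
# Closed-form single-constraint CBF-QP safety filter for x' = f(x) + g(x) u.
import numpy as np

def cbf_qp_filter(u_nom, grad_h, f_x, g_x, h_x, alpha=1.0):
    """min ||u - u_nom||^2  s.t.  grad_h.(f + g u) + alpha * h >= 0."""
    Lf_h = grad_h @ f_x
    Lg_h = grad_h @ g_x                 # input gains of the constraint
    slack = Lf_h + Lg_h @ u_nom + alpha * h_x
    if slack >= 0.0:                    # nominal input already satisfies the CBF
        return u_nom
    # Otherwise project u_nom onto the constraint boundary (needs Lg_h != 0).
    return u_nom - (slack / (Lg_h @ Lg_h)) * Lg_h

# Single integrator kept inside the disk h(x) = 1 - ||x||^2 >= 0:
x = np.array([0.9, 0.0])
u_safe = cbf_qp_filter(u_nom=np.array([1.0, 0.0]), grad_h=-2.0 * x,
                       f_x=np.zeros(2), g_x=np.eye(2), h_x=1.0 - x @ x)
print(u_safe)   # the outward push toward the boundary is scaled back
```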
arXiv Detail & Related papers (2023-09-06T05:35:48Z)
- Safe Control Under Input Limits with Neural Control Barrier Functions [3.5270468102327004]
We propose new methods to synthesize control barrier function (CBF)-based safe controllers that avoid input saturation.
We leverage machine learning techniques, such as deep neural networks, to simplify this challenging problem in nonlinear control design.
We provide empirical results on a 10D state, 4D input quadcopter-pendulum system.
arXiv Detail & Related papers (2022-11-20T19:01:37Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Gaussian Control Barrier Functions: A Non-Parametric Paradigm to Safety [7.921648699199647]
We propose a non-parametric approach for online synthesis of CBFs using Gaussian Processes (GPs).
GPs have favorable properties, in addition to being non-parametric, such as analytical tractability and robust uncertainty estimation.
We validate our approach experimentally on a quadrotor by demonstrating safe control for fixed but arbitrary safe sets.
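A toy rendering of the non-parametric idea (with an RBF kernel and hand-picked safe/unsafe samples, none of which is the paper's exact construction): fit a GP to signed safety labels and read the posterior mean off as a candidate barrier value.

```python
# GP posterior mean as a candidate barrier; kernel, noise, and data invented.
import numpy as np

def rbf(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

X = np.array([[0.0, 0.0], [0.5, 0.0], [1.2, 0.0], [0.0, 1.3]])  # sampled states
y = np.array([1.0, 0.5, -0.5, -0.6])       # positive = safe, negative = unsafe
K = rbf(X, X) + 1e-6 * np.eye(len(X))      # jitter for numerical stability
alpha = np.linalg.solve(K, y)

def h(x):
    """Posterior mean at state x, used as the candidate CBF value."""
    return (rbf(np.atleast_2d(np.asarray(x, dtype=float)), X) @ alpha)[0]

print(h([0.2, 0.1]), h([1.5, 0.0]))        # expect positive near the safe data
```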
arXiv Detail & Related papers (2022-03-29T12:21:28Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
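Schematically, the learning step in this family of methods penalizes violations of the barrier conditions on the demonstration data; the bare-bones version below uses a linear candidate barrier and hinge penalties, and the model, margins, and synthetic data are all illustrative assumptions rather than the ROCBF formulation itself.

```python
# Fit h(x) = theta0 + theta1*x1 + theta2*x2 so that expert-visited states get
# h >= gamma and sampled off-trajectory states get h <= -gamma; all invented.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=3)
X_safe = rng.normal(0.0, 0.3, size=(64, 2))                     # demo states
X_unsafe = rng.normal(0.0, 0.3, size=(64, 2)) + np.array([2.0, 0.0])
gamma, lr = 0.1, 0.05

def h(X, th):
    return th[0] + X @ th[1:]

for _ in range(500):                     # subgradient step on the hinge losses
    g = np.zeros(3)
    viol_s = h(X_safe, theta) < gamma    # safe states below the margin
    viol_u = h(X_unsafe, theta) > -gamma # unsafe states above the margin
    g[0] -= viol_s.sum();  g[1:] -= X_safe[viol_s].sum(0)
    g[0] += viol_u.sum();  g[1:] += X_unsafe[viol_u].sum(0)
    theta -= lr * g / 64.0
print(theta)   # h should now separate the two clusters with margin gamma
```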
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
- Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
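The Lipschitz argument behind guarantees of this kind fits in a few lines: if the safety condition q(x) >= 0 is verified with margin L*tau at every point of a tau-cover of the domain, then |q(x) - q(x_i)| <= L * ||x - x_i|| extends it to the whole domain. The sketch below checks this on a grid; q, L, and tau are illustrative stand-ins, not the paper's construction.

```python
# Certify q(x) >= 0 on a box from finitely many samples plus a Lipschitz bound.
import numpy as np

L = 1.0       # assumed upper bound on the Lipschitz constant of q
tau = 0.036   # cover radius: half-diagonal of the 0.05-spaced grid below
xs = np.linspace(-1.0, 1.0, 41)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

def q(x):     # stand-in for the learned CBF condition at its best input
    return 0.5 - 0.2 * (x ** 2).sum(-1)

certified = bool((q(grid) >= L * tau).all())
print("condition certified on the whole box:", certified)   # True for this q
```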
arXiv Detail & Related papers (2020-04-07T12:29:06Z)