Scalable Verification of Neural Control Barrier Functions Using Linear Bound Propagation
- URL: http://arxiv.org/abs/2511.06341v1
- Date: Sun, 09 Nov 2025 11:51:15 GMT
- Title: Scalable Verification of Neural Control Barrier Functions Using Linear Bound Propagation
- Authors: Nikolaus Vertovec, Frederik Baymler Mathiesen, Thom Badings, Luca Laurenti, Alessandro Abate
- Abstract summary: Control barrier functions (CBFs) are a popular tool for safety certification of nonlinear dynamical control systems. We present a novel framework for verifying neural CBFs based on piecewise linear upper and lower bounds on the conditions required for a neural network to be a CBF. Our approach scales to larger neural networks than state-of-the-art verification procedures for CBFs.
- Score: 50.53301323864253
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Control barrier functions (CBFs) are a popular tool for safety certification of nonlinear dynamical control systems. Recently, CBFs represented as neural networks have shown great promise due to their expressiveness and applicability to a broad class of dynamics and safety constraints. However, verifying that a trained neural network is indeed a valid CBF is a computational bottleneck that limits the size of the networks that can be used. To overcome this limitation, we present a novel framework for verifying neural CBFs based on piecewise linear upper and lower bounds on the conditions required for a neural network to be a CBF. Our approach is rooted in linear bound propagation (LBP) for neural networks, which we extend to compute bounds on the gradients of the network. Combined with McCormick relaxation, we derive linear upper and lower bounds on the CBF conditions, thereby eliminating the need for computationally expensive verification procedures. Our approach applies to arbitrary control-affine systems and a broad range of nonlinear activation functions. To reduce conservatism, we develop a parallelizable refinement strategy that adaptively refines the regions over which these bounds are computed. Our approach scales to larger neural networks than state-of-the-art verification procedures for CBFs, as demonstrated by our numerical experiments.
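The McCormick relaxation mentioned in the abstract can be illustrated with a small sketch (the function below is a generic textbook construction, not the paper's implementation): given interval bounds x ∈ [xl, xu] and y ∈ [yl, yu], the McCormick envelope gives linear lower and upper bounds on the bilinear product x·y.

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Linear lower/upper bounds on the bilinear term x*y,
    valid whenever x in [xl, xu] and y in [yl, yu]."""
    lo = max(xl * y + x * yl - xl * yl,
             xu * y + x * yu - xu * yu)
    hi = min(xu * y + x * yl - xu * yl,
             xl * y + x * yu - xl * yu)
    return lo, hi

# Example: x in [0, 1], y in [-1, 1]; the envelope contains the true product.
lo, hi = mccormick_bounds(0.5, -0.3, 0.0, 1.0, -1.0, 1.0)
assert lo <= 0.5 * -0.3 <= hi
```

In the paper's setting, envelopes of this kind relax products of gradient bounds and dynamics terms, so the CBF conditions can be bounded by linear functions instead of being checked with an expensive exact procedure.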
Related papers
- Certified Neural Approximations of Nonlinear Dynamics [51.01318247729693]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
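The adaptive verification idea can be sketched as branch-and-refine over input regions: if a certified bound on a region is inconclusive, split the region and recurse. The sketch below uses plain interval arithmetic rather than the certified first-order models of the paper, and all names are illustrative.

```python
def verify_by_refinement(bound_fn, box, depth=0, max_depth=12):
    """Certify that a function is positive on a 1-D interval box = (lo, hi),
    where bound_fn returns a certified (lower, upper) bound over the box.
    Splits the box whenever the bound is inconclusive (hypothetical API)."""
    lb, ub = bound_fn(box)
    if lb > 0:
        return True          # certified positive on the whole box
    if ub <= 0:
        return False         # certified violation
    if depth >= max_depth:
        return False         # give up: bound stayed too loose
    lo, hi = box
    mid = 0.5 * (lo + hi)
    return (verify_by_refinement(bound_fn, (lo, mid), depth + 1, max_depth)
            and verify_by_refinement(bound_fn, (mid, hi), depth + 1, max_depth))

# Example: certify f(x) = x^2 - x + 0.3 > 0 on [-1, 1] using a
# deliberately loose interval bound that forces several refinements.
def loose_bound(box):
    lo, hi = box
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    # interval arithmetic: [sq_lo, sq_hi] - [lo, hi] + 0.3
    return sq_lo - hi + 0.3, sq_hi - lo + 0.3

assert verify_by_refinement(loose_bound, (-1.0, 1.0))
```

Because the two subproblems created by each split are independent, this recursion parallelizes naturally, which is the property both this related paper and the main paper's refinement strategy exploit.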
arXiv Detail & Related papers (2025-05-21T13:22:20Z)
- Neural Control Barrier Functions from Physics Informed Neural Networks [2.092779643426281]
This paper introduces a novel class of neural CBFs that leverages a physics-inspired neural network framework. By utilizing reciprocal CBFs instead of zeroing CBFs, the proposed framework allows for the specification of flexible, user-defined safe regions.
arXiv Detail & Related papers (2025-04-15T10:13:30Z)
- CP-NCBF: A Conformal Prediction-based Approach to Synthesize Verified Neural Control Barrier Functions [2.092779643426281]
Control Barrier Functions (CBFs) are a practical approach for designing safety-critical controllers. Recent efforts have explored learning-based methods, such as neural CBFs, to address this issue. We propose a novel framework that leverages split-conformal prediction to generate formally verified neural CBFs.
arXiv Detail & Related papers (2025-03-18T10:01:06Z)
- Verification of Neural Control Barrier Functions with Symbolic Derivative Bounds Propagation [6.987300771372427]
We propose a new efficient verification framework for ReLU-based neural CBFs.
We show that the symbolic bounds can be propagated through the inner product of neural CBF Jacobian and nonlinear system dynamics.
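The inner-product propagation can be sketched with plain interval arithmetic (a simplification of the symbolic bounds in this paper): given elementwise interval bounds on the CBF Jacobian and on the dynamics, an interval bound on their inner product follows by summing interval products.

```python
def interval_dot(jac_lo, jac_hi, f_lo, f_hi):
    """Interval bound on sum_i jac_i * f_i, given elementwise bounds
    jac_i in [jac_lo[i], jac_hi[i]] and f_i in [f_lo[i], f_hi[i]]."""
    lo = hi = 0.0
    for jl, jh, fl, fh in zip(jac_lo, jac_hi, f_lo, f_hi):
        # All four corner products bound the product over the box.
        products = (jl * fl, jl * fh, jh * fl, jh * fh)
        lo += min(products)
        hi += max(products)
    return lo, hi
```

For example, with a Jacobian in [1, 2] × [-1, 1] and dynamics in [0, 1] × [2, 3], the inner product is certified to lie in [-3, 5]; symbolic (rather than interval) bounds tighten this by tracking linear dependence on the input.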
arXiv Detail & Related papers (2024-10-04T21:42:25Z)
- Neural Network Verification with Branch-and-Bound for General Nonlinearities [63.39918329535165]
Branch-and-bound (BaB) is among the most effective techniques for neural network (NN) verification. We develop a general framework, named GenBaB, to conduct BaB on general nonlinearities to verify NNs with general architectures. Our framework also allows the verification of general nonlinear graphs and enables verification applications beyond simple NNs.
arXiv Detail & Related papers (2024-05-31T17:51:07Z)
- Exact Verification of ReLU Neural Control Barrier Functions [25.44521208451216]
Control Barrier Functions (CBFs) are a popular approach for safe control of nonlinear systems.
Recent machine learning methods that represent CBFs as neural networks have shown great promise.
This paper presents novel exact conditions and algorithms for verifying safety of feedforward NCBFs with ReLU activation functions.
arXiv Detail & Related papers (2023-10-13T18:59:04Z)
- Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
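For a control-affine system with scalar state and input, the CBF-QP safety filter reduces to a closed-form projection onto a half-space (a standard textbook sketch, not this paper's neural-ODE method): minimize (u − u_nom)² subject to ∇B(x)·(f(x) + g(x)u) ≥ −α·B(x).

```python
def cbf_safety_filter(u_nom, grad_B, f, g, B, alpha):
    """Project the nominal control onto the half-space
    grad_B * (f + g*u) >= -alpha * B (scalar sketch)."""
    a = grad_B * g                # coefficient of u in the constraint
    b = -alpha * B - grad_B * f   # constraint reads: a * u >= b
    if a == 0:
        return u_nom              # constraint does not involve u
    u_bound = b / a
    if a > 0:                     # constraint: u >= u_bound
        return max(u_nom, u_bound)
    return min(u_nom, u_bound)    # constraint: u <= u_bound
```

For B(x) = 1 − x² with dynamics ẋ = u at x = 0.9 (so B = 0.19, ∇B = −1.8), a nominal u_nom = 1.0 that would push the state toward the boundary gets clipped down to 0.19/1.8 ≈ 0.106, while an already-safe u_nom passes through unchanged.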
arXiv Detail & Related papers (2023-09-06T05:35:48Z)
- Neural network training under semidefinite constraints [0.0]
This paper is concerned with the training of neural networks (NNs) under semidefinite constraints.
Semidefinite constraints can be used to verify interesting properties for NNs.
In experiments, we demonstrate the superior efficiency of our training method over previous approaches.
arXiv Detail & Related papers (2022-01-03T13:10:49Z)
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
- Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
arXiv Detail & Related papers (2021-04-14T09:22:42Z)
- Towards Evaluating and Training Verifiably Robust Neural Networks [81.39994285743555]
We study the relationship between IBP and CROWN, and prove that CROWN is always tighter than IBP when choosing appropriate bounding lines.
We propose a relaxed version of CROWN, linear bound propagation (LBP), that can be used to verify large networks to obtain lower verified errors.
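Interval bound propagation (IBP), the baseline that CROWN and LBP tighten, can be sketched in a few lines (a generic illustration, not this paper's code): an input box is pushed through an affine layer by splitting the weights into positive and negative parts, and through ReLU by clamping at zero.

```python
import numpy as np

def ibp_affine(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    # Positive weights pair the lower input bound with the lower output
    # bound; negative weights pair it with the upper output bound.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def ibp_relu(lo, hi):
    """Propagate the box through the (monotone) ReLU activation."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Example: W = [[1, -1], [2, 0]], inputs in [0, 1]^2.
W = np.array([[1.0, -1.0], [2.0, 0.0]])
lo, hi = ibp_affine(W, np.zeros(2), np.zeros(2), np.ones(2))
# lo = [-1, 0], hi = [1, 2]
```

CROWN and LBP replace these per-neuron boxes with linear functions of the input, which is why they can be strictly tighter than IBP when the bounding lines are chosen well.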
arXiv Detail & Related papers (2021-04-01T13:03:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.