Neural Control Barrier Functions from Physics Informed Neural Networks
- URL: http://arxiv.org/abs/2504.11045v1
- Date: Tue, 15 Apr 2025 10:13:30 GMT
- Title: Neural Control Barrier Functions from Physics Informed Neural Networks
- Authors: Shreenabh Agrawal, Manan Tayal, Aditya Singh, Shishir Kolathaya,
- Abstract summary: This paper introduces a novel class of neural CBFs that leverages a physics-inspired neural network framework. By utilizing reciprocal CBFs instead of zeroing CBFs, the proposed framework allows for the specification of flexible, user-defined safe regions.
- Score: 2.092779643426281
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As autonomous systems become increasingly prevalent in daily life, ensuring their safety is paramount. Control Barrier Functions (CBFs) have emerged as an effective tool for guaranteeing safety; however, manually designing them for specific applications remains a significant challenge. With the advent of deep learning techniques, recent research has explored synthesizing CBFs using neural networks, commonly referred to as neural CBFs. This paper introduces a novel class of neural CBFs that leverages a physics-inspired neural network framework by incorporating Zubov's Partial Differential Equation (PDE) within the context of safety. This approach provides a scalable methodology for synthesizing neural CBFs applicable to high-dimensional systems. Furthermore, by utilizing reciprocal CBFs instead of zeroing CBFs, the proposed framework allows for the specification of flexible, user-defined safe regions. To validate the effectiveness of the approach, we present case studies on three different systems: an inverted pendulum, autonomous ground navigation, and aerial navigation in obstacle-laden environments.
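The physics-informed idea in the abstract can be sketched as follows: a candidate barrier h is evaluated against the residual of a Zubov-style PDE at sampled collocation points, and that residual becomes a training loss. This is a minimal illustration, not the paper's exact formulation; the pendulum dynamics, the linear alpha term, the finite-difference gradient, and the untrained network weights are all assumptions for the sketch, and a real implementation would use automatic differentiation and a full training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy autonomous dynamics: damped pendulum, x = (theta, omega).
def f(x):
    theta, omega = x
    return np.array([omega, -np.sin(theta) - 0.5 * omega])

# Tiny MLP candidate barrier h(x); weights are illustrative and untrained.
W1 = rng.normal(size=(16, 2)); b1 = np.zeros(16)
W2 = rng.normal(size=16);      b2 = 0.0

def h(x):
    return np.tanh(W1 @ x + b1) @ W2 + b2

# Zubov-style PDE residual at a point: grad_h(x) . f(x) + alpha * h(x),
# with the gradient estimated by central finite differences.
def pde_residual(x, alpha=1.0, eps=1e-5):
    grad = np.array([
        (h(x + eps * e) - h(x - eps * e)) / (2 * eps)
        for e in np.eye(2)
    ])
    return grad @ f(x) + alpha * h(x)

# Physics-informed loss: mean squared residual over collocation points.
pts = rng.uniform(-np.pi, np.pi, size=(256, 2))
loss = np.mean([pde_residual(x) ** 2 for x in pts])
print(f"PDE residual loss: {loss:.4f}")
```

Minimizing this loss over the network weights (by gradient descent in a deep-learning framework) is what drives the learned h toward satisfying the PDE, and hence toward being a valid barrier.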
Related papers
- CP-NCBF: A Conformal Prediction-based Approach to Synthesize Verified Neural Control Barrier Functions [2.092779643426281]
Control Barrier Functions (CBFs) are a practical approach for designing safety-critical controllers. Recent efforts have explored learning-based methods, such as neural CBFs, to address this issue. We propose a novel framework that leverages split-conformal prediction to generate formally verified neural CBFs.
arXiv Detail & Related papers (2025-03-18T10:01:06Z) - Verification of Neural Control Barrier Functions with Symbolic Derivative Bounds Propagation [6.987300771372427]
We propose a new efficient verification framework for ReLU-based neural CBFs.
We show that the symbolic bounds can be propagated through the inner product of neural CBF Jacobian and nonlinear system dynamics.
arXiv Detail & Related papers (2024-10-04T21:42:25Z) - Building Hybrid B-Spline And Neural Network Operators [0.0]
Control systems are indispensable for ensuring the safety of cyber-physical systems (CPS).
We propose a novel strategy that combines the inductive bias of B-splines with data-driven neural networks to facilitate real-time predictions of CPS behavior.
arXiv Detail & Related papers (2024-06-06T21:54:59Z) - Fault Tolerant Neural Control Barrier Functions for Robotic Systems under Sensor Faults and Attacks [6.314000948709254]
We study safety-critical control synthesis for robotic systems under sensor faults and attacks.
Our main contribution is the development and synthesis of a new class of CBFs that we term fault tolerant neural control barrier functions (FT-NCBFs).
arXiv Detail & Related papers (2024-02-28T19:44:19Z) - Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
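The CBF-QP reduction described above admits a closed form when there is a single affine safety constraint, which makes for a compact sketch. The single-integrator system, the barrier h(x) = 1 - ||x||^2, and alpha = 1 below are illustrative choices, not taken from the paper; problems with multiple constraints would call for an actual QP solver.

```python
import numpy as np

# CBF-QP safety filter for a control-affine system xdot = f(x) + g(x) u:
#   min_u ||u - u_nom||^2   s.t.   Lf_h + Lg_h @ u + alpha * h >= 0
# With a single affine constraint, the QP solution is a closed-form projection.
def cbf_qp_filter(u_nom, Lf_h, Lg_h, h_val, alpha=1.0):
    margin = Lf_h + Lg_h @ u_nom + alpha * h_val
    if margin >= 0:                      # nominal input already safe
        return u_nom
    # Project the nominal input onto the constraint boundary.
    return u_nom - margin * Lg_h / (Lg_h @ Lg_h)

# Example: single integrator xdot = u, barrier h(x) = 1 - ||x||^2.
x = np.array([0.9, 0.0])
h_val = 1.0 - x @ x                       # h(x) = 0.19 (inside the safe set)
Lf_h = 0.0                                # f = 0 for the single integrator
Lg_h = -2.0 * x                           # grad h = -2x, g = identity
u_nom = np.array([1.0, 0.0])              # nominal input pushes toward boundary

u_safe = cbf_qp_filter(u_nom, Lf_h, Lg_h, h_val, alpha=1.0)
print(u_safe)                             # first component shrinks below 1.0
```

The filtered input satisfies the barrier constraint with equality when the nominal input would violate it, which is exactly the active-constraint case of the QP.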
arXiv Detail & Related papers (2023-09-06T05:35:48Z) - Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As our core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z) - Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating BP over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - BarrierNet: A Safety-Guaranteed Layer for Neural Networks [50.86816322277293]
BarrierNet allows the safety constraints of a neural controller to be adaptable to changing environments.
We evaluate them on a series of control problems such as traffic merging and robot navigation in 2D and 3D space.
arXiv Detail & Related papers (2021-11-22T15:38:11Z) - Learning To Estimate Regions Of Attraction Of Autonomous Dynamical Systems Using Physics-Informed Neural Networks [0.0]
We train a neural network to estimate the region of attraction (ROA) of a controlled autonomous dynamical system.
This safety network can be used to quantify the relative safety of proposed control actions and prevent the selection of damaging actions.
In future work we intend to apply this technique to reinforcement learning agents during motor learning tasks.
arXiv Detail & Related papers (2021-11-18T19:58:47Z) - Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
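A Lipschitz-based safety guarantee of the kind claimed above can be illustrated with a standard grid-certification argument: if the safety margin is L-Lipschitz and exceeds L * tau / 2 at every point of a grid with covering radius tau / 2, it is nonnegative everywhere on the covered set. The 1-D margin function and constants below are hypothetical stand-ins for a learned CBF's margin, not the paper's actual construction.

```python
import numpy as np

# Grid certification sketch: for q(x) = grad_h(x) . f(x) + alpha * h(x)
# that is L-Lipschitz, checking q(x_i) >= L * tau / 2 on a grid whose
# points cover the domain within radius tau / 2 certifies q >= 0 everywhere.
def certify_on_grid(q, grid, lipschitz_const, tau):
    margins = np.array([q(x) for x in grid])
    certified = bool(np.all(margins >= lipschitz_const * tau / 2))
    return certified, margins.min()

# Toy 1-D margin: q(x) = 1 - 0.5 * |x| on [-1, 1], Lipschitz constant 0.5.
q = lambda x: 1.0 - 0.5 * abs(x)
tau = 0.1
grid = np.linspace(-1.0, 1.0, 21)         # spacing tau, covering radius tau/2
ok, worst = certify_on_grid(q, grid, lipschitz_const=0.5, tau=tau)
print(ok, worst)                           # True 0.5 (worst margin at x = ±1)
```

The same reasoning underlies many data-driven safety proofs: a finite number of checks plus a Lipschitz bound extends the guarantee from sample points to the whole set.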
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.