Learning Neural Control Barrier Functions from Expert Demonstrations using Inverse Constraint Learning
- URL: http://arxiv.org/abs/2510.21560v1
- Date: Fri, 24 Oct 2025 15:20:34 GMT
- Title: Learning Neural Control Barrier Functions from Expert Demonstrations using Inverse Constraint Learning
- Authors: Yuxuan Yang, Hussein Sibai
- Abstract summary: We train a constraint function that classifies the states of the system under consideration as safe, i.e., belonging to a controlled forward invariant set, or unsafe. We then use that function to label a new set of simulated trajectories to train our neural CBF. We empirically evaluate our approach in four different environments, demonstrating that it outperforms existing baselines.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Safety is a fundamental requirement for autonomous systems operating in critical domains. Control barrier functions (CBFs) have been used to design safety filters that minimally alter nominal controls for such systems to maintain their safety. Learning neural CBFs has been proposed as a data-driven alternative to their computationally expensive optimization-based synthesis. However, it is often the case that the failure set of states that should be avoided is non-obvious or hard to specify formally, e.g., tailgating in autonomous driving, while a set of expert demonstrations that achieve the task and avoid the failure set is easier to generate. We use inverse constraint learning (ICL) to train a constraint function that classifies the states of the system under consideration as safe, i.e., belonging to a controlled forward invariant set that is disjoint from the unspecified failure set, or unsafe, i.e., belonging to the complement of that set. We then use that function to label a new set of simulated trajectories to train our neural CBF. We empirically evaluate our approach in four different environments, demonstrating that it outperforms existing baselines and achieves comparable performance to a neural CBF trained with the same data but annotated with ground-truth safety labels.
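The safety-filter idea in the abstract, minimally altering a nominal control so that a CBF condition holds, can be sketched for a toy system. The snippet below is an illustrative example, not the paper's implementation: it assumes single-integrator dynamics (dx/dt = u) and a hand-written barrier h(x) = ||x - x_obs||^2 - r^2, whereas the paper learns the barrier from demonstration-derived labels. With a single control-affine constraint, the usual CBF quadratic program admits a closed-form projection, so no QP solver is needed.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, x_obs, r, alpha=1.0):
    """Minimally modify u_nom so the single-integrator state x (dx/dt = u)
    stays outside a disk of radius r centered at x_obs.

    CBF: h(x) = ||x - x_obs||^2 - r^2.
    Safety condition: dh/dt + alpha * h(x) >= 0, i.e.
        2 * (x - x_obs)^T u + alpha * h(x) >= 0.
    This is one linear constraint on u, so minimizing ||u - u_nom||^2
    subject to it reduces to a closed-form projection.
    """
    d = x - x_obs
    h = d @ d - r**2
    a = 2.0 * d              # gradient of h; a @ u is dh/dt for dx/dt = u
    b = alpha * h
    slack = a @ u_nom + b
    if slack >= 0.0:         # nominal control already satisfies the CBF condition
        return u_nom
    # Project u_nom onto the constraint boundary a @ u + b = 0
    return u_nom - (slack / (a @ a)) * a
```

For example, at x = (2, 0) with an obstacle of radius 1 at the origin, a nominal control (-1, 0) pointing at the obstacle is softened to (-0.75, 0), while a nominal control (1, 0) pointing away passes through unchanged.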
Related papers
- BarrierSteer: LLM Safety via Learning Barrier Steering [83.12893815611052]
BarrierSteer is a novel framework that formalizes safety by embedding learned non-linear safety constraints directly into the model's latent representation space. We show that BarrierSteer substantially reduces adversarial success rates, decreases unsafe generations, and outperforms existing methods.
arXiv Detail & Related papers (2026-02-23T18:19:46Z) - CP-NCBF: A Conformal Prediction-based Approach to Synthesize Verified Neural Control Barrier Functions [2.092779643426281]
Control Barrier Functions (CBFs) are a practical approach for designing safety-critical controllers. Recent efforts have explored learning-based methods, such as neural CBFs, to address this issue. We propose a novel framework that leverages split-conformal prediction to generate formally verified neural CBFs.
arXiv Detail & Related papers (2025-03-18T10:01:06Z) - Learning Performance-Oriented Control Barrier Functions Under Complex Safety Constraints and Limited Actuation [5.62479170374811]
Control Barrier Functions (CBFs) provide an elegant framework for constraining nonlinear control system dynamics.
We introduce a novel self-supervised learning framework to comprehensively address these challenges.
We validate our approach on a 2D double integrator (DI) system and a 7D fixed-wing aircraft system.
arXiv Detail & Related papers (2024-01-11T02:51:49Z) - Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
arXiv Detail & Related papers (2023-09-06T05:35:48Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Gaussian Control Barrier Functions : A Non-Parametric Paradigm to Safety [7.921648699199647]
We propose a non-parametric approach for online synthesis of CBFs using Gaussian Processes (GPs).
GPs have favorable properties, in addition to being non-parametric, such as analytical tractability and robust uncertainty estimation.
We validate our approach experimentally on a quadrotor by demonstrating safe control for fixed but arbitrary safe sets.
arXiv Detail & Related papers (2022-03-29T12:21:28Z) - Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z) - Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z) - Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.