BarrierNet: A Safety-Guaranteed Layer for Neural Networks
- URL: http://arxiv.org/abs/2111.11277v1
- Date: Mon, 22 Nov 2021 15:38:11 GMT
- Title: BarrierNet: A Safety-Guaranteed Layer for Neural Networks
- Authors: Wei Xiao and Ramin Hasani and Xiao Li and Daniela Rus
- Abstract summary: BarrierNet allows the safety constraints of a neural controller to adapt to changing environments.
We evaluate it on a series of control problems such as traffic merging and robot navigation in 2D and 3D space.
- Score: 50.86816322277293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces differentiable higher-order control barrier functions
(CBFs) that are end-to-end trainable together with learning systems. CBFs
guarantee safety but are usually overly conservative. Here, we address their
conservativeness by softening their definitions using environmental
dependencies, without losing safety guarantees, and embed them into
differentiable quadratic programs. These novel safety layers, termed
BarrierNet, can be used in conjunction with any neural network-based
controller and can be trained by gradient descent. BarrierNet allows the
safety constraints of a neural controller to adapt to changing
environments. We evaluate it on a series of control problems such as traffic
merging and robot navigation in 2D and 3D space, and demonstrate its
effectiveness compared to state-of-the-art approaches.
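As a minimal illustration of the kind of CBF-based quadratic-program safety layer the abstract describes (not the paper's differentiable higher-order formulation), the sketch below filters a nominal control input for a single-integrator system avoiding a circular obstacle. The dynamics, obstacle shape, linear class-K term, and all function and variable names are illustrative assumptions; with a single affine constraint the QP reduces to a closed-form halfspace projection, so no solver is needed.

```python
import numpy as np

def cbf_filter(x, u_nom, x_obs, r, gamma=1.0):
    """Minimally modify u_nom so the single integrator x_dot = u stays
    outside a circular obstacle of radius r centered at x_obs.

    Barrier: h(x) = ||x - x_obs||^2 - r^2  (h >= 0 means safe).
    CBF condition: grad_h(x) . u >= -gamma * h(x).
    The QP  min ||u - u_nom||^2  s.t.  a . u >= b  with one affine
    constraint is solved in closed form by projecting u_nom onto the
    constraint halfspace.
    """
    h = np.dot(x - x_obs, x - x_obs) - r**2   # barrier value
    a = 2.0 * (x - x_obs)                     # gradient of h
    b = -gamma * h                            # linear class-K relaxation
    slack = a @ u_nom - b
    if slack >= 0.0:                          # nominal input already safe
        return u_nom
    # Otherwise, add the smallest correction that meets the constraint.
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Usage: an input driving straight at the obstacle is slowed, not blocked.
x = np.array([2.0, 0.0])
obstacle = np.array([0.0, 0.0])
u_safe = cbf_filter(x, np.array([-5.0, 0.0]), obstacle, r=1.0)
```

In a BarrierNet-style layer, parameters such as `gamma` would themselves be outputs of a neural network and the QP would be differentiated through, so the conservativeness of the constraint adapts to the environment during training.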
Related papers
- Pareto Control Barrier Function for Inner Safe Set Maximization Under Input Constraints [50.920465513162334]
We introduce the PCBF algorithm to maximize the inner safe set of dynamical systems under input constraints.
We validate its effectiveness through comparison with Hamilton-Jacobi reachability for an inverted pendulum and through simulations on a 12-dimensional quadrotor system.
Results show that the PCBF consistently outperforms existing methods, yielding larger safe sets and ensuring safety under input constraints.
arXiv Detail & Related papers (2024-10-05T18:45:19Z)
- Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
arXiv Detail & Related papers (2023-09-06T05:35:48Z)
- Online Control Barrier Functions for Decentralized Multi-Agent Navigation [15.876920170393168]
Control barrier functions (CBFs) enable safe multi-agent navigation in the continuous domain.
Traditional approaches consider fixed CBFs, whose parameters are tuned a priori.
We propose online CBFs, whose hyperparameters are tuned in real time.
arXiv Detail & Related papers (2023-03-08T01:28:18Z)
- Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments [84.3830478851369]
We propose a safe reinforcement learning approach that can jointly learn the environment and optimize the control policy.
Our approach can effectively enforce hard safety constraints and significantly outperforms CMDP-based baseline methods in system safety rate, as measured via simulations.
arXiv Detail & Related papers (2022-09-29T20:49:25Z)
- Differentiable Safe Controller Design through Control Barrier Functions [8.283758049749782]
Learning-based controllers can show high empirical performance but lack formal safety guarantees.
Control barrier functions (CBFs) have been applied as a safety filter to monitor and modify the outputs of learning-based controllers.
We propose a safe-by-construction NN controller which employs differentiable CBF-based safety layers.
arXiv Detail & Related papers (2022-09-20T23:03:22Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
- Learning Differentiable Safety-Critical Control using Control Barrier Functions for Generalization to Novel Environments [16.68313219331689]
Control barrier functions (CBFs) have become a popular tool to enforce safety of a control system.
We propose a differentiable optimization-based safety-critical control framework.
arXiv Detail & Related papers (2022-01-04T20:43:37Z)
- Failing with Grace: Learning Neural Network Controllers that are Boundedly Unsafe [18.34490939288318]
We consider the problem of learning a feed-forward neural network (NN) controller to safely steer an arbitrarily shaped robot in a compact workspace.
We propose an approach that lifts assumptions on the data that are hard to satisfy in practice.
We provide a simulation study that verifies the efficacy of the proposed scheme.
arXiv Detail & Related papers (2021-06-22T15:51:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.