Differentiable Safe Controller Design through Control Barrier Functions
- URL: http://arxiv.org/abs/2209.10034v1
- Date: Tue, 20 Sep 2022 23:03:22 GMT
- Title: Differentiable Safe Controller Design through Control Barrier Functions
- Authors: Shuo Yang, Shaoru Chen, Victor M. Preciado, Rahul Mangharam
- Abstract summary: Learning-based controllers can show high empirical performance but lack formal safety guarantees.
Control barrier functions (CBFs) have been applied as a safety filter to monitor and modify the outputs of learning-based controllers.
We propose a safe-by-construction NN controller which employs differentiable CBF-based safety layers.
- Score: 8.283758049749782
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning-based controllers, such as neural network (NN) controllers, can show
high empirical performance but lack formal safety guarantees. To address this
issue, control barrier functions (CBFs) have been applied as a safety filter to
monitor and modify the outputs of learning-based controllers in order to
guarantee the safety of the closed-loop system. However, such modification can
be myopic with unpredictable long-term effects. In this work, we propose a
safe-by-construction NN controller which employs differentiable CBF-based
safety layers, and investigate the performance of safe-by-construction NN
controllers in learning-based control. Specifically, two formulations of
controllers are compared: one is projection-based and the other relies on our
proposed set-theoretic parameterization. Both methods demonstrate improved
closed-loop performance over using CBF as a separate safety filter in numerical
experiments.
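The projection-based formulation described in the abstract can be illustrated with a minimal differentiable CBF safety layer. The sketch below is a generic single-constraint CBF-QP with a closed-form solution, not the paper's implementation; all function names and gains are hypothetical. Because the projection reduces to a ReLU-like correction, it is piecewise linear and differentiable almost everywhere, which is what makes it usable as a layer inside an NN controller:

```python
import numpy as np

def cbf_qp_filter(u_nom, Lfh, Lgh, h, alpha=1.0):
    """Project a nominal control onto the CBF-safe half-space.

    Solves  min_u ||u - u_nom||^2  s.t.  Lfh + Lgh @ u >= -alpha * h
    in closed form (valid for a single affine constraint). The map
    u_nom -> u_safe is piecewise linear, hence differentiable a.e.
    """
    residual = Lfh + Lgh @ u_nom + alpha * h  # constraint slack at u_nom
    if residual >= 0:                         # nominal input already safe
        return u_nom
    # Minimum-norm correction along the constraint normal Lgh
    return u_nom - (residual / (Lgh @ Lgh)) * Lgh
```

Used as a terminal layer, gradients flow through the correction term during training, in contrast to treating the filter as a post-hoc, non-differentiable monitor.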
Related papers
- Pareto Control Barrier Function for Inner Safe Set Maximization Under Input Constraints [50.920465513162334]
We introduce the PCBF algorithm to maximize the inner safe set of dynamical systems under input constraints.
We validate its effectiveness through comparison with Hamilton-Jacobi reachability for an inverted pendulum and through simulations on a 12-dimensional quadrotor system.
Results show that the PCBF consistently outperforms existing methods, yielding larger safe sets and ensuring safety under input constraints.
arXiv Detail & Related papers (2024-10-05T18:45:19Z) - Reinforcement Learning-based Receding Horizon Control using Adaptive Control Barrier Functions for Safety-Critical Systems [14.166970599802324]
Optimal control methods provide solutions to safety-critical problems but easily become intractable.
We propose a Reinforcement Learning-based Receding Horizon Control approach leveraging Model Predictive Control.
We validate our method by applying it to the challenging automated merging control problem for Connected and Automated Vehicles.
arXiv Detail & Related papers (2024-03-26T02:49:08Z) - Multi-Step Model Predictive Safety Filters: Reducing Chattering by Increasing the Prediction Horizon [7.55113002732746]
Safety, the satisfaction of state and input constraints, can be guaranteed by augmenting the learned control policy with a safety filter.
Model predictive safety filters (MPSFs) are a common safety filtering approach based on model predictive control (MPC).
arXiv Detail & Related papers (2023-09-20T16:35:29Z) - Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
arXiv Detail & Related papers (2023-09-06T05:35:48Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - BarrierNet: A Safety-Guaranteed Layer for Neural Networks [50.86816322277293]
BarrierNet allows the safety constraints of a neural controller to adapt to changing environments.
We evaluate it on a series of control problems such as traffic merging and robot navigation in 2D and 3D space.
arXiv Detail & Related papers (2021-11-22T15:38:11Z) - Pointwise Feasibility of Gaussian Process-based Safety-Critical Control
under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z) - Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)
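As a worked illustration of the higher-order CBF condition referenced in the related papers above, the following sketch derives the admissible-input bound for a double integrator p'' = u with position constraint h(x) = x_max - p. The specific system, names, and gains are illustrative assumptions, not taken from any of the listed papers:

```python
import numpy as np

def hocbf_max_input(p, v, x_max=1.0, a1=2.0, a2=2.0):
    """Upper bound on acceleration u for the double integrator p'' = u
    implied by a second-order (higher-order) CBF for h(x) = x_max - p.

    Since h has relative degree 2, define psi1 = h' + a1*h = -v + a1*(x_max - p).
    The HOCBF condition psi1' + a2*psi1 >= 0 reads
        -u - a1*v + a2*psi1 >= 0,
    i.e. any  u <= -a1*v + a2*psi1  keeps the system in the safe set.
    """
    psi1 = -v + a1 * (x_max - p)      # first-order barrier term
    return -a1 * v + a2 * psi1        # input bound from the second-order condition
```

Because this bound is affine in the state, enforcing it inside a QP (or a projection layer as above) remains differentiable, which is what lets such higher-order CBFs be embedded in neural ODE-based learning models.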
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.