On the Optimality, Stability, and Feasibility of Control Barrier
Functions: An Adaptive Learning-Based Approach
- URL: http://arxiv.org/abs/2305.03608v1
- Date: Fri, 5 May 2023 15:11:28 GMT
- Title: On the Optimality, Stability, and Feasibility of Control Barrier
Functions: An Adaptive Learning-Based Approach
- Authors: Alaa Eddine Chriat and Chuangchuang Sun
- Abstract summary: Control barrier function (CBF) and its variants have attracted extensive attention for safety-critical control.
There are still fundamental limitations of current CBFs: optimality, stability, and feasibility.
We propose Adaptive Multi-step Control Barrier Function (AM-CBF), where we parameterize the class-$\mathcal{K}$ function by a neural network and train it together with the reinforcement learning policy.
- Score: 4.399563188884702
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Safety has been a critical issue for the deployment of learning-based
approaches in real-world applications. To address this issue, control barrier
function (CBF) and its variants have attracted extensive attention for
safety-critical control. However, due to the myopic one-step nature of CBF and
the lack of principled methods to design the class-$\mathcal{K}$ functions,
there are still fundamental limitations of current CBFs: optimality, stability,
and feasibility. In this paper, we propose a novel and unified approach to
address these limitations with Adaptive Multi-step Control Barrier Function
(AM-CBF), where we parameterize the class-$\mathcal{K}$ function by a neural
network and train it together with the reinforcement learning policy. Moreover,
to mitigate the myopic nature, we propose a novel \textit{multi-step training
and single-step execution} paradigm that makes the CBF farsighted while
execution still only requires solving a single-step convex quadratic program.
Our method is evaluated on first- and second-order systems in various
scenarios, where our approach outperforms the conventional CBF both
qualitatively and quantitatively.
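To make the mechanism concrete, here is a minimal Python sketch of a single-step CBF-QP safety filter with a learnable class-$\mathcal{K}$ function. Everything below is an illustrative assumption, not the authors' implementation: single-integrator dynamics, the barrier $h(x) = \|x - x_o\|^2 - r^2$, and a two-parameter cubic class-$\mathcal{K}$ stand-in for the paper's neural network. With a single affine constraint the QP admits a closed-form projection.

```python
# A minimal sketch (not the authors' code): a single-step CBF-QP safety
# filter with a learnable (extended) class-K function.
import numpy as np

def softplus(z):
    return np.log1p(np.exp(z))

class LearnableClassK:
    """Stand-in for the paper's neural class-K function alpha(.).
    Both coefficients are positive by construction, so alpha(0) = 0 and
    alpha is strictly increasing: a valid extended class-K function."""
    def __init__(self, theta=(0.0, -2.0)):       # theta values are arbitrary
        self.theta = np.asarray(theta, dtype=float)

    def __call__(self, h):
        c1, c3 = softplus(self.theta[0]), softplus(self.theta[1])
        return c1 * h + c3 * h ** 3

def cbf_qp_filter(u_ref, x, x_obs, r, alpha):
    """min_u ||u - u_ref||^2  s.t.  grad_h(x) . u + alpha(h(x)) >= 0,
    for h(x) = ||x - x_obs||^2 - r^2 and single-integrator dynamics x' = u.
    With one affine constraint the QP reduces to a closed-form projection."""
    h = float(np.dot(x - x_obs, x - x_obs) - r ** 2)
    a = 2.0 * (x - x_obs)                        # gradient of h at x
    slack = float(np.dot(a, u_ref)) + alpha(h)
    if slack >= 0.0:
        return u_ref                             # reference input already safe
    return u_ref - (slack / float(np.dot(a, a))) * a

# Usage: nominal input pushes toward an obstacle of radius 1 at the origin.
alpha = LearnableClassK()
print(cbf_qp_filter(np.array([-1.0, 0.0]), np.array([1.2, 0.0]),
                    np.zeros(2), 1.0, alpha))
```

In the AM-CBF scheme, the parameters of `alpha` would be updated jointly with the RL policy during multi-step training, while execution would still apply only this single-step filter.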
Related papers
- Domain Adaptive Safety Filters via Deep Operator Learning [5.62479170374811]
We propose a self-supervised deep operator learning framework that learns the mapping from environmental parameters to the corresponding CBF.
We demonstrate the effectiveness of the method through numerical experiments on navigation tasks involving dynamic obstacles.
arXiv Detail & Related papers (2024-10-18T15:10:55Z)
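As a rough illustration of the operator-learning idea in the entry above, a minimal sketch assuming a DeepONet-style architecture with a branch network over environment parameters and a trunk network over query states; the layer sizes, random weights, and input dimensions are purely illustrative.

```python
# Hedged sketch: a DeepONet-style map from environment parameters to a CBF
# value (architecture details are assumptions, not the paper's design).
import numpy as np

rng = np.random.default_rng(0)
W_branch = rng.normal(size=(16, 3))  # encodes env params e (e.g. obstacle pose)
W_trunk = rng.normal(size=(16, 2))   # encodes the query state x

def h_hat(x, e):
    """Predicted CBF value for state x under environment e: an inner
    product of branch and trunk features, as in operator learning."""
    b = np.tanh(W_branch @ e)
    t = np.tanh(W_trunk @ x)
    return float(b @ t)

print(h_hat(np.array([0.5, -0.2]), np.array([1.0, 0.0, 0.3])))
```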
- One-Shot Safety Alignment for Large Language Models via Optimal Dualization [64.52223677468861]
This paper presents a perspective of dualization that reduces constrained alignment to an equivalent unconstrained alignment problem.
We do so by pre-optimizing a smooth and convex dual function that has a closed form.
Our strategy leads to two practical algorithms in model-based and preference-based settings.
arXiv Detail & Related papers (2024-05-29T22:12:52Z)
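The dualization strategy above has a simple one-dimensional analogue (a toy sketch, not the paper's alignment setting): for a quadratic objective with one linear constraint, the dual function is smooth and concave and its maximizer is available in closed form, so the constrained problem collapses to a single unconstrained minimization.

```python
# Toy analogue: minimize (x - a)^2 subject to x <= b.
# Lagrangian: L(x, lam) = (x - a)^2 + lam * (x - b), minimized at x = a - lam/2,
# giving the concave dual g(lam) = -lam^2/4 + lam*(a - b), whose maximizer
# over lam >= 0 is lam* = max(0, 2*(a - b)) in closed form.
a, b = 2.0, 1.0
lam = max(0.0, 2.0 * (a - b))   # pre-optimized dual variable (closed form)
x_star = a - lam / 2.0          # unconstrained Lagrangian minimizer
print(lam, x_star)              # 2.0, 1.0: the constraint is active
```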
- Reinforcement Learning-based Receding Horizon Control using Adaptive Control Barrier Functions for Safety-Critical Systems [14.166970599802324]
Optimal control methods provide solutions to safety-critical problems but easily become intractable.
We propose a Reinforcement Learning-based Receding Horizon Control approach leveraging Model Predictive Control.
We validate our method by applying it to the challenging automated merging control problem for Connected and Automated Vehicles.
arXiv Detail & Related papers (2024-03-26T02:49:08Z)
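The receding-horizon pattern in the entry above is easy to show in skeleton form. A hedged sketch, assuming scalar single-integrator dynamics, a quadratic cost, and random shooting in place of the paper's RL/MPC solver: a short horizon is optimized at each step, only the first control is applied, and the problem is re-solved at the next state.

```python
# Hedged skeleton of receding-horizon (MPC-style) execution; dynamics,
# horizon, and cost are illustrative assumptions, not the paper's models.
import numpy as np

def plan_horizon(x, H=5, n_samples=256, rng=np.random.default_rng(1)):
    """Random-shooting stand-in for the horizon optimization: pick the
    sampled control sequence with the lowest rolled-out cost."""
    U = rng.uniform(-1.0, 1.0, size=(n_samples, H))
    best_cost, best_u0 = np.inf, 0.0
    for u_seq in U:
        xi, cost = x, 0.0
        for u in u_seq:
            xi = xi + 0.1 * u            # single-integrator rollout, dt = 0.1
            cost += xi ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_cost, best_u0 = cost, u_seq[0]
    return best_u0

x = 1.0
for _ in range(20):                      # receding-horizon execution loop
    x = x + 0.1 * plan_horizon(x)        # apply only the first planned control
print(x)
```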
- Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
arXiv Detail & Related papers (2023-09-06T05:35:48Z)
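For reference, the standard higher-order CBF construction the entry above builds on (notation follows the HOCBF literature; the paper's neural-ODE integration is not shown): for a constraint $h$ with relative degree $m$, define recursively

$$\psi_0(x) = h(x), \qquad \psi_i(x) = \dot{\psi}_{i-1}(x) + \alpha_i\big(\psi_{i-1}(x)\big), \quad i = 1, \dots, m,$$

with each $\alpha_i$ an extended class-$\mathcal{K}$ function. Safety is then enforced by requiring $\psi_m(x, u) \ge 0$, which is affine in $u$ for control-affine systems; the non-affine case is what the paper addresses.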
- Learning Feasibility Constraints for Control Barrier Functions [8.264868845642843]
We employ machine learning techniques to ensure the feasibility of the Quadratic Programs (QPs) used in CBF-based control.
We propose a sampling-based learning approach to learn a new feasibility constraint for CBFs.
We demonstrate the advantages of the proposed learning approach on constrained optimal control problems.
arXiv Detail & Related papers (2023-03-10T16:29:20Z)
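A minimal sketch of the sampling-based idea in the feasibility entry above, under assumptions not taken from the paper (single-integrator dynamics, a circular barrier, identity class-$\mathcal{K}$, and box input bounds): sample states, decide for each whether the CBF-QP admits any feasible input, and keep the labels as training data for a learned feasibility constraint.

```python
# Hedged sketch of sampling-based feasibility labeling for a CBF-QP.
import numpy as np

rng = np.random.default_rng(2)
u_max, r = 0.5, 1.0

def constraint_terms(x):
    """CBF h(x) = ||x||^2 - r^2 for single-integrator xdot = u:
    the QP constraint is a(x).u + b(x) >= 0."""
    a = 2.0 * x
    b = np.dot(x, x) - r ** 2          # alpha(h) = h for simplicity
    return a, b

labels = []
states = rng.uniform(-2.0, 2.0, size=(1000, 2))
for x in states:
    a, b = constraint_terms(x)
    # With box bounds |u_i| <= u_max, max_u a.u = u_max * ||a||_1 (closed
    # form), so the QP is feasible iff that best case satisfies the constraint.
    labels.append(u_max * np.abs(a).sum() + b >= 0.0)

# The (x, feasible) pairs would supervise a learned feasibility constraint.
print(np.mean(labels))
```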
- Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model by optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
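To make the problem shape of the AUC entry above concrete, a toy sketch (synthetic scores; the paper's stochastic algorithms and exact constraint classes are not reproduced): empirical AUC over positive/negative pairs, a group-wise AUC gap as the fairness violation, and a penalized objective.

```python
# Toy sketch of AUC optimization with an AUC-based fairness term.
import numpy as np

def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of positive/negative pairs ranked correctly."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return (diff > 0).mean()

rng = np.random.default_rng(3)
sp_a, sn_a = rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 50)  # group A
sp_b, sn_b = rng.normal(0.6, 1.0, 50), rng.normal(0.0, 1.0, 50)  # group B
overall = auc(np.r_[sp_a, sp_b], np.r_[sn_a, sn_b])
gap = abs(auc(sp_a, sn_a) - auc(sp_b, sn_b))  # AUC-based fairness violation
objective = -overall + 10.0 * gap             # penalized surrogate objective
print(overall, gap, objective)
```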
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing constraint violations in safe reinforcement learning policy tasks.
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
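The log-barrier mechanism in the entry above can be sketched in one dimension (illustrative objective and constraint; the paper's adaptive step-size rule is more careful than the crude cap used here): descend $f(x) - \eta \log(-g(x))$ while capping the step so iterates stay strictly feasible.

```python
# One-dimensional sketch in the spirit of LBSGD.
# Objective f(x) = (x - 2)^2, safety constraint g(x) = x - 1 <= 0.
import numpy as np

eta = 0.1                                          # log-barrier weight
g = lambda x: x - 1.0                              # must stay strictly negative
x = 0.0
for _ in range(100):
    grad = 2.0 * (x - 2.0) + eta / (1.0 - x)       # grad of f - eta*log(-g)
    step = min(0.05, 0.5 * (-g(x)))                # cap keeps iterate feasible
    x = x - step * np.sign(grad) * min(abs(grad), 1.0)
print(x, g(x))                                     # hovers near the boundary x = 1
```

The barrier gradient blows up as $x$ approaches the boundary, so the iterate is repelled before it can become infeasible; the step-size cap prevents a single large step from jumping across the boundary.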
- Safe Exploration in Model-based Reinforcement Learning using Control Barrier Functions [1.005130974691351]
We develop a novel class of CBFs (LCBFs) that retain the beneficial properties of CBFs for developing minimally-invasive safe control policies.
We show how these LCBFs can be used to augment a learning-based control policy so as to guarantee safety and then leverage this approach to develop a safe exploration framework.
arXiv Detail & Related papers (2021-04-16T15:29:58Z)
- Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
A reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
The resulting RL-CBF-CLF-QP addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z)
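For context, a schematic of the CLF-CBF-QP that the entry above builds on, with uncertainty terms $\Delta_V$ and $\Delta_h$ standing in for the quantities the RL agent learns (the paper's exact formulation may differ):

$$\min_{u,\,\delta}\; \|u\|^2 + p\,\delta^2 \quad \text{s.t.} \quad L_f V(x) + L_g V(x)\,u + \Delta_V(x,u) + \gamma V(x) \le \delta, \quad L_f h(x) + L_g h(x)\,u + \Delta_h(x,u) + \alpha\big(h(x)\big) \ge 0,$$

where the CLF constraint is relaxed by the slack $\delta$ so that the stabilization objective never renders the safety constraint infeasible.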
- Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
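Finally, for the expert-demonstration entry above, a minimal sketch of what fitting a barrier function to data can look like (toy linear $h$ and synthetic states; the paper's Lipschitz-based analysis, derivative conditions, and safety guarantees are not reproduced here): safe states are pushed to $h \ge \text{margin}$ and unsafe samples to $h \le -\text{margin}$.

```python
# Hedged sketch of the loss shape when fitting a CBF to demonstration data.
import numpy as np

rng = np.random.default_rng(4)
w = rng.normal(size=3)                          # toy h(x) = w0 + w1*x1 + w2*x2

def h(x, w):
    return w[0] + x @ w[1:]

def loss(w, safe, unsafe, margin=0.1):
    l_safe = np.maximum(0.0, margin - h(safe, w)).mean()      # want h >= margin
    l_unsafe = np.maximum(0.0, margin + h(unsafe, w)).mean()  # want h <= -margin
    return l_safe + l_unsafe

safe = rng.normal(0.0, 0.3, size=(100, 2))      # states visited by expert demos
unsafe = rng.normal(2.0, 0.3, size=(100, 2))    # sampled unsafe states

for _ in range(200):                            # crude finite-difference descent
    grad = np.array([(loss(w + 1e-4 * e, safe, unsafe)
                      - loss(w - 1e-4 * e, safe, unsafe)) / 2e-4
                     for e in np.eye(3)])
    w -= 0.5 * grad
print(loss(w, safe, unsafe))                    # should approach zero
```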