Wasserstein Distributionally Robust Control Barrier Function using
Conditional Value-at-Risk with Differentiable Convex Programming
- URL: http://arxiv.org/abs/2309.08700v1
- Date: Fri, 15 Sep 2023 18:45:09 GMT
- Title: Wasserstein Distributionally Robust Control Barrier Function using
Conditional Value-at-Risk with Differentiable Convex Programming
- Authors: Alaa Eddine Chriat and Chuangchuang Sun
- Abstract summary: Control barrier functions (CBFs) have attracted extensive attention for designing safe controllers for real-world safety-critical systems.
We present a distributionally robust CBF (DR-CBF) to achieve resilience under distributional shift.
We also provide an approximate variant of DR-CBF for higher-order systems.
- Score: 4.825619788907192
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Control barrier functions (CBFs) have attracted extensive attention for
designing safe controllers for deployment in real-world safety-critical
systems. However, perception of the surrounding environment is often subject to
stochasticity and, further, to distributional shift from the nominal
distribution. In this paper, we present a distributionally robust CBF (DR-CBF)
to achieve resilience under distributional shift while retaining the advantages
of CBFs, such as computational efficiency and forward invariance.
To achieve this goal, we first propose a single-level convex reformulation for
estimating the conditional value-at-risk (CVaR) of the safety constraints under
distributional shift measured by a Wasserstein metric, a problem that is by
nature a tri-level program. Moreover, to construct a control barrier condition
that enforces the forward invariance of the CVaR, the technique of
differentiable convex programming is applied to enable differentiation through
the optimization layer of the CVaR estimation. We also provide an approximate
variant of DR-CBF for higher-order systems. Simulation results validate the
chance-constrained safety guarantee under distributional shift in both first-
and second-order systems.
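To make the CVaR machinery concrete, here is a minimal sketch, not the paper's exact tri-level-to-single-level reformulation: assuming the safety loss is L-Lipschitz in the uncertain variable, the worst-case CVaR over a Wasserstein ball of radius eps admits the standard upper bound CVaR_emp + eps*L/alpha. The function names and interface below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Empirical CVaR_alpha(Z) = min_tau tau + (1/alpha) * E[(Z - tau)_+].

    The minimizing tau is attained at a sample point (the empirical VaR),
    so scanning the samples as candidates for tau is exact.
    """
    losses = np.asarray(losses, dtype=float)
    vals = [t + np.mean(np.maximum(losses - t, 0.0)) / alpha for t in losses]
    return min(vals)

def dr_cvar_upper_bound(losses, alpha, eps, lip):
    """Lipschitz-based upper bound on worst-case CVaR over a
    Wasserstein ball of radius eps around the empirical distribution:
        sup_{W(P, P_hat) <= eps} CVaR_alpha <= CVaR_alpha(P_hat) + eps * lip / alpha
    (lip is an assumed Lipschitz constant of the safety loss)."""
    return empirical_cvar(losses, alpha) + eps * lip / alpha
```

In the paper, differentiation through the CVaR estimate is handled with differentiable convex programming (e.g., a library such as cvxpylayers); in this simplified closed form the bound is already directly differentiable in the loss samples.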
Related papers
- Domain Adaptive Safety Filters via Deep Operator Learning [5.62479170374811]
We propose a self-supervised deep operator learning framework that learns the mapping from environmental parameters to the corresponding CBF.
We demonstrate the effectiveness of the method through numerical experiments on navigation tasks involving dynamic obstacles.
arXiv Detail & Related papers (2024-10-18T15:10:55Z)
- Pareto Control Barrier Function for Inner Safe Set Maximization Under Input Constraints [50.920465513162334]
We introduce the PCBF algorithm to maximize the inner safe set of dynamical systems under input constraints.
We validate its effectiveness through comparison with Hamilton-Jacobi reachability for an inverted pendulum and through simulations on a 12-dimensional quadrotor system.
Results show that the PCBF consistently outperforms existing methods, yielding larger safe sets and ensuring safety under input constraints.
arXiv Detail & Related papers (2024-10-05T18:45:19Z)
- Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
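The CBF-QP safety filter underlying this line of work can be illustrated with a minimal sketch for a control-affine system with a single relative-degree-one CBF constraint, in which case the QP reduces to a closed-form halfspace projection. The function name and interface are illustrative assumptions, not code from any of the listed papers.

```python
import numpy as np

def cbf_qp_filter(u_ref, Lf_h, Lg_h, h, gamma=1.0):
    """Solve  min_u ||u - u_ref||^2  s.t.  Lf_h + Lg_h @ u + gamma * h >= 0.

    With a single affine constraint, the QP is the Euclidean projection of
    u_ref onto a halfspace and has a closed-form solution.
    """
    u_ref = np.asarray(u_ref, dtype=float)
    Lg_h = np.asarray(Lg_h, dtype=float)
    slack = Lf_h + Lg_h @ u_ref + gamma * h
    if slack >= 0:
        return u_ref  # nominal input already satisfies the barrier condition
    # Minimal correction along Lg_h to make the constraint active.
    return u_ref - slack * Lg_h / (Lg_h @ Lg_h)
```

For higher relative degree or multiple constraints the closed form no longer applies, which is where general QP solvers (and, for learning, differentiable optimization layers) come in.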
arXiv Detail & Related papers (2023-09-06T05:35:48Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Learning Differentiable Safety-Critical Control using Control Barrier Functions for Generalization to Novel Environments [16.68313219331689]
Control barrier functions (CBFs) have become a popular tool to enforce safety of a control system.
We propose a differentiable optimization-based safety-critical control framework.
arXiv Detail & Related papers (2022-01-04T20:43:37Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z)
- Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
- Safe Wasserstein Constrained Deep Q-Learning [2.088376060651494]
This paper presents a distributionally robust Q-Learning algorithm (DrQ) which leverages Wasserstein ambiguity sets to provide idealistic probabilistic out-of-sample safety guarantees.
Using a case study of lithium-ion battery fast charging, we explore how idealistic safety guarantees translate to generally improved safety.
arXiv Detail & Related papers (2020-02-07T21:23:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.