A Comparative Study of Artificial Potential Fields and Reciprocal Control Barrier Function-based Safety Filters
- URL: http://arxiv.org/abs/2403.15743v2
- Date: Wed, 16 Apr 2025 08:37:28 GMT
- Title: A Comparative Study of Artificial Potential Fields and Reciprocal Control Barrier Function-based Safety Filters
- Authors: Ming Li, Zhiyong Sun
- Abstract summary: We show that controllers designed by artificial potential fields (APFs) can be derived from reciprocal control barrier function quadratic program (RCBF-QP) safety filters. We further generalize the APF-based controllers to more general scenarios without restricting the choice of auxiliary functions.
- Score: 10.525846641815788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we demonstrate that controllers designed by artificial potential fields (APFs) can be derived from reciprocal control barrier function quadratic program (RCBF-QP) safety filters. By integrating APFs within the RCBF-QP framework, we explicitly establish the relationship between these two approaches. Specifically, we first introduce the concepts of tightened control Lyapunov functions (T-CLFs) and tightened reciprocal control barrier functions (T-RCBFs), each of which incorporates a flexible auxiliary function. We then utilize an attractive potential field as a T-CLF to guide the nominal controller design, and a repulsive potential field as a T-RCBF to formulate an RCBF-QP safety filter. With appropriately chosen auxiliary functions, we show that controllers designed by APFs and those derived by RCBF-QP safety filters are equivalent. Based on this insight, we further generalize the APF-based controllers (equivalently, RCBF-QP safety filter-based controllers) to more general scenarios without restricting the choice of auxiliary functions. Finally, we present a collision avoidance example to clearly illustrate the connection and equivalence between the two methods.
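The collision-avoidance example mentioned in the abstract can be made concrete with a short numerical sketch. Below, a planar single integrator (an assumed model, as are the gains k_att and gamma and the obstacle data) uses the attractive potential field as the nominal controller and a reciprocal-barrier QP, solved in closed form for its single constraint, as the safety filter. The standard reciprocal barrier B = 1/h stands in for the paper's repulsive-potential T-RCBF, so this is an illustration of the general pipeline rather than the paper's exact construction.
```python
import numpy as np

# Illustrative setup (assumed, not from the paper): planar single integrator
# x_dot = u, a goal position, and one circular obstacle.
x_goal = np.array([5.0, 0.0])
x_obs, r_obs = np.array([2.5, 0.1]), 0.5

def nominal_controller(x, k_att=1.0):
    """Attractive potential U_att(x) = 0.5*k_att*||x - x_goal||^2 plays the
    role of a (tightened) CLF; its negative gradient is the APF nominal input."""
    return -k_att * (x - x_goal)

def rcbf_qp_filter(x, u_nom, gamma=1.0):
    """Reciprocal-CBF QP safety filter, closed form for a single constraint:
        min_u ||u - u_nom||^2   s.t.   grad_B(x) . u <= gamma / B(x),
    with h(x) = ||x - x_obs||^2 - r_obs^2 (safe set h > 0) and the standard
    reciprocal barrier B = 1/h standing in for the repulsive-potential T-RCBF."""
    h = (x - x_obs) @ (x - x_obs) - r_obs**2
    grad_h = 2.0 * (x - x_obs)
    # With B = 1/h: grad_B = -grad_h/h^2 and gamma/B = gamma*h, so the
    # constraint rearranges to the half-space  (-grad_h) . u <= gamma*h^3.
    a, b = -grad_h, gamma * h**3
    violation = a @ u_nom - b
    if violation <= 0.0:   # nominal APF input already satisfies the RCBF condition
        return u_nom
    # Closed-form projection of u_nom onto the half-space {u : a.u <= b}
    return u_nom - (violation / (a @ a)) * a

# One simulation step of the filtered closed loop
x = np.array([0.0, 0.0])
u = rcbf_qp_filter(x, nominal_controller(x))
x = x + 0.01 * u
```
Far from the obstacle the constraint is inactive and the filter returns the APF command unchanged; as h shrinks near the obstacle, the closed-form projection deflects the command, mirroring the repulsive term of an APF controller.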
Related papers
- Domain Adaptive Safety Filters via Deep Operator Learning [5.62479170374811]
We propose a self-supervised deep operator learning framework that learns the mapping from environmental parameters to the corresponding CBF.
We demonstrate the effectiveness of the method through numerical experiments on navigation tasks involving dynamic obstacles.
arXiv Detail & Related papers (2024-10-18T15:10:55Z) - Pareto Control Barrier Function for Inner Safe Set Maximization Under Input Constraints [50.920465513162334]
We introduce the PCBF algorithm to maximize the inner safe set of dynamical systems under input constraints.
We validate its effectiveness through comparison with Hamilton-Jacobi reachability for an inverted pendulum and through simulations on a 12-dimensional quadrotor system.
Results show that the PCBF consistently outperforms existing methods, yielding larger safe sets and ensuring safety under input constraints.
arXiv Detail & Related papers (2024-10-05T18:45:19Z) - Reinforcement Learning-based Receding Horizon Control using Adaptive Control Barrier Functions for Safety-Critical Systems [14.166970599802324]
Optimal control methods provide solutions to safety-critical problems but easily become intractable.
We propose a Reinforcement Learning-based Receding Horizon Control approach leveraging Model Predictive Control.
We validate our method by applying it to the challenging automated merging control problem for Connected and Automated Vehicles.
arXiv Detail & Related papers (2024-03-26T02:49:08Z) - Safe Neural Control for Non-Affine Control Systems with Differentiable Control Barrier Functions [58.19198103790931]
This paper addresses the problem of safety-critical control for non-affine control systems.
It has been shown that optimizing quadratic costs subject to state and control constraints can be sub-optimally reduced to a sequence of quadratic programs (QPs) by using Control Barrier Functions (CBFs).
We incorporate higher-order CBFs into neural ordinary differential equation-based learning models as differentiable CBFs to guarantee safety for non-affine control systems.
arXiv Detail & Related papers (2023-09-06T05:35:48Z) - Differentiable Safe Controller Design through Control Barrier Functions [8.283758049749782]
Learning-based controllers can show high empirical performance but lack formal safety guarantees.
Control barrier functions (CBFs) have been applied as a safety filter to monitor and modify the outputs of learning-based controllers.
We propose a safe-by-construction NN controller which employs differentiable CBF-based safety layers.
arXiv Detail & Related papers (2022-09-20T23:03:22Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Learning Differentiable Safety-Critical Control using Control Barrier Functions for Generalization to Novel Environments [16.68313219331689]
Control barrier functions (CBFs) have become a popular tool to enforce safety of a control system.
We propose a differentiable optimization-based safety-critical control framework.
arXiv Detail & Related papers (2022-01-04T20:43:37Z) - Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively (a generic CLF-CBF-QP combining the two is sketched after this list).
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z) - Safe Exploration in Model-based Reinforcement Learning using Control Barrier Functions [1.005130974691351]
We develop a novel class of CBFs, termed Lyapunov-like CBFs (LCBFs), that retain the beneficial properties of CBFs for developing minimally invasive safe control policies.
We show how these LCBFs can be used to augment a learning-based control policy so as to guarantee safety and then leverage this approach to develop a safe exploration framework.
arXiv Detail & Related papers (2021-04-16T15:29:58Z) - Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
A reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
The resulting RL-CBF-CLF-QP addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z) - Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
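Several of the papers above build on the same standard construct: a quadratic program that tracks a nominal objective through a relaxed CLF decrease condition while enforcing a CBF safety constraint exactly. The sketch below is a generic CLF-CBF-QP under assumed control-affine dynamics, gains, and function handles; it is not the formulation of any specific paper listed.
```python
import numpy as np
import cvxpy as cp

def clf_cbf_qp(x, f, g, V, dV, h, dh, c=1.0, alpha=1.0, p=100.0):
    """Generic CLF-CBF-QP (illustrative, not any listed paper's exact form):
        min_{u, delta}  0.5*||u||^2 + p*delta^2
        s.t.  dV(x).(f(x) + g(x) u) + c*V(x)     <= delta   # relaxed CLF decrease
              dh(x).(f(x) + g(x) u) + alpha*h(x) >= 0       # CBF safety constraint
    """
    m = g(x).shape[1]
    u = cp.Variable(m)
    delta = cp.Variable()
    xdot = f(x) + g(x) @ u
    constraints = [dV(x) @ xdot + c * V(x) <= delta,
                   dh(x) @ xdot + alpha * h(x) >= 0]
    objective = cp.Minimize(0.5 * cp.sum_squares(u) + p * cp.square(delta))
    cp.Problem(objective, constraints).solve()
    return u.value

# Example: single integrator x_dot = u, goal at the origin, obstacle at (2, 0).
x_obs, r = np.array([2.0, 0.0]), 0.5
u = clf_cbf_qp(
    x=np.array([3.0, 0.5]),
    f=lambda x: np.zeros(2),
    g=lambda x: np.eye(2),
    V=lambda x: 0.5 * x @ x,            dV=lambda x: x,
    h=lambda x: (x - x_obs) @ (x - x_obs) - r**2,
    dh=lambda x: 2.0 * (x - x_obs),
)
```
The relaxation variable delta keeps the QP feasible when the stability and safety conditions conflict, while the CBF constraint is always enforced exactly.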
This list is automatically generated from the titles and abstracts of the papers on this site.