Learning Control Barrier Functions and their application in Reinforcement Learning: A Survey
- URL: http://arxiv.org/abs/2404.16879v1
- Date: Mon, 22 Apr 2024 22:52:14 GMT
- Title: Learning Control Barrier Functions and their application in Reinforcement Learning: A Survey
- Authors: Maeva Guerrier, Hassan Fouad, Giovanni Beltrame
- Abstract summary: Reinforcement learning is a powerful technique for developing new robot behaviors.
Safe reinforcement learning aims to incorporate safety considerations, enabling faster transfer to real robots and facilitating lifelong learning.
One promising approach within safe reinforcement learning is the use of control barrier functions.
- Score: 11.180978323594822
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning is a powerful technique for developing new robot behaviors. However, the typical lack of safety guarantees constitutes a hurdle for its practical application on real robots. To address this issue, safe reinforcement learning aims to incorporate safety considerations, enabling faster transfer to real robots and facilitating lifelong learning. One promising approach within safe reinforcement learning is the use of control barrier functions. These functions provide a framework to ensure that the system remains in a safe state during the learning process. However, synthesizing control barrier functions is not straightforward and often requires ample domain knowledge. This challenge motivates the exploration of data-driven methods for automatically defining control barrier functions, which is highly appealing. We conduct a comprehensive review of the existing literature on safe reinforcement learning using control barrier functions. Additionally, we investigate various techniques for automatically learning control barrier functions, aiming to enhance the safety and efficacy of reinforcement learning in practical robot applications.
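The CBF safety-filter idea at the heart of the surveyed work can be illustrated with a minimal sketch. For a scalar single integrator ẋ = u with barrier h(x) = d² − x² (safe set |x| ≤ d), the standard quadratic-program filter min ‖u − u_nom‖² subject to ḣ(x, u) ≥ −α·h(x) admits a closed-form solution. The dynamics, barrier choice, and function name below are illustrative assumptions, not taken from the paper:

```python
def cbf_safety_filter(x, u_nom, d=1.0, alpha=1.0):
    """Project a nominal control onto the CBF-safe set (scalar closed form).

    Single-integrator dynamics x_dot = u with barrier h(x) = d**2 - x**2,
    so the safe set is |x| <= d.  The CBF condition h_dot >= -alpha * h(x)
    becomes the linear constraint  -2*x*u >= -alpha*(d**2 - x**2).
    """
    a = -2.0 * x                   # coefficient of u in h_dot
    b = -alpha * (d ** 2 - x ** 2)  # required lower bound on a * u
    if a > 0:
        return max(u_nom, b / a)   # constraint reads u >= b / a
    if a < 0:
        return min(u_nom, b / a)   # constraint reads u <= b / a
    return u_nom                   # x == 0: h > 0, any u is admissible
```

A nominal action pushing toward the boundary (e.g. `x = 0.9`, `u_nom = 1.0`) is clipped to the largest safe value, while an action moving away from the boundary passes through unchanged; in vector-valued settings the same projection is typically solved online as a QP.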
Related papers
- Handling Long-Term Safety and Uncertainty in Safe Reinforcement Learning [17.856459823003277]
Safety is one of the key issues preventing the deployment of reinforcement learning techniques in real-world robots.
In this paper, we bridge the gap by extending the safe exploration method, ATACOM, with learnable constraints.
arXiv Detail & Related papers (2024-09-18T15:08:41Z)
- ABNet: Attention BarrierNet for Safe and Scalable Robot Learning [58.4951884593569]
Barrier-based methods are among the dominant approaches to safe robot learning.
We propose Attention BarrierNet (ABNet) that is scalable to build larger foundational safe models in an incremental manner.
We demonstrate the strength of ABNet in 2D robot obstacle avoidance, safe robot manipulation, and vision-based end-to-end autonomous driving.
arXiv Detail & Related papers (2024-06-18T19:37:44Z)
- Reinforcement Learning for Safe Robot Control using Control Lyapunov Barrier Functions [9.690491406456307]
Reinforcement learning (RL) exhibits impressive performance when managing complicated control tasks for robots.
This paper explores the control Lyapunov barrier function (CLBF) to analyze the safety and reachability solely based on data.
We also propose the Lyapunov barrier actor-critic (LBAC) to search for a controller that satisfies the data-based approximation of the safety and reachability conditions.
arXiv Detail & Related papers (2023-05-16T20:27:02Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We show experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Constrained Reinforcement Learning for Robotics via Scenario-Based Programming [64.07167316957533]
It is crucial to optimize the performance of DRL-based agents while providing guarantees about their behavior.
This paper presents a novel technique for incorporating domain-expert knowledge into a constrained DRL training loop.
Our experiments demonstrate that using our approach to leverage expert knowledge dramatically improves the safety and the performance of the agent.
arXiv Detail & Related papers (2022-06-20T07:19:38Z)
- Barrier Certified Safety Learning Control: When Sum-of-Square Programming Meets Reinforcement Learning [0.0]
This work combines control barrier functions with reinforcement learning and proposes a compensating algorithm to maintain safety throughout learning.
Compared to quadratic-programming-based reinforcement learning methods, our sum-of-squares-programming-based approach shows superior performance.
arXiv Detail & Related papers (2022-06-16T04:38:50Z)
- Safe Model-Based Reinforcement Learning Using Robust Control Barrier Functions [43.713259595810854]
An increasingly common approach to address safety involves the addition of a safety layer that projects the RL actions onto a safe set of actions.
In this paper, we frame safety as a differentiable robust-control-barrier-function layer in a model-based RL framework.
arXiv Detail & Related papers (2021-10-11T17:00:45Z)
- Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning [3.9258421820410225]
We review the recent advances made in using machine learning to achieve safe decision making under uncertainties.
Our review covers learning-based control approaches that safely improve performance by learning the uncertain dynamics.
We highlight some of the open challenges that will drive the field of robot learning in the coming years.
arXiv Detail & Related papers (2021-08-13T14:22:02Z)
- Learning Robust Hybrid Control Barrier Functions for Uncertain Systems [68.30783663518821]
We propose robust hybrid control barrier functions as a means to synthesize control laws that ensure robust safety.
Based on this notion, we formulate an optimization problem for learning robust hybrid control barrier functions from data.
Our techniques allow us to safely expand the region of attraction of a compass gait walker that is subject to model uncertainty.
arXiv Detail & Related papers (2021-01-16T17:53:35Z)
- Learning to be Safe: Deep RL with a Safety Critic [72.00568333130391]
A natural first approach toward safe RL is to manually specify constraints on the policy's behavior.
We propose to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors.
arXiv Detail & Related papers (2020-10-27T20:53:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.