Guided by Guardrails: Control Barrier Functions as Safety Instructors for Robotic Learning
- URL: http://arxiv.org/abs/2505.18858v1
- Date: Sat, 24 May 2025 20:29:08 GMT
- Title: Guided by Guardrails: Control Barrier Functions as Safety Instructors for Robotic Learning
- Authors: Maeva Guerrier, Karthik Soma, Hassan Fouad, Giovanni Beltrame
- Abstract summary: Safety stands as the primary obstacle preventing the widespread adoption of learning-based robotic systems in our daily lives. In this work, we introduce a novel approach that simulates the temporal effects of unsafe actions by applying continuous negative rewards without episode termination. We present three CBF-based approaches, each integrating traditional RL methods with Control Barrier Functions, guiding the agent to learn safe behavior.
- Score: 10.797457293404468
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safety stands as the primary obstacle preventing the widespread adoption of learning-based robotic systems in our daily lives. While reinforcement learning (RL) shows promise as an effective robot learning paradigm, conventional RL frameworks often model safety by using single scalar negative rewards with immediate episode termination, failing to capture the temporal consequences of unsafe actions (e.g., sustained collision damage). In this work, we introduce a novel approach that simulates these temporal effects by applying continuous negative rewards without episode termination. Our experiments reveal that standard RL methods struggle with this model, as the accumulated negative values in unsafe zones create learning barriers. To address this challenge, we demonstrate how Control Barrier Functions (CBFs), with their proven safety guarantees, effectively help robots avoid catastrophic regions while enhancing learning outcomes. We present three CBF-based approaches, each integrating traditional RL methods with Control Barrier Functions, guiding the agent to learn safe behavior. Our empirical analysis, conducted in both simulated environments and real-world settings using a four-wheel differential drive robot, explores the possibilities of employing these approaches for safe robotic learning.
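To make the setup concrete, the sketch below illustrates the two ingredients the abstract describes: a persistent, non-terminating penalty for occupying an unsafe zone, and a CBF-based filter that minimally alters the policy's action to keep the state safe. All names, gains, dynamics, and the placeholder policy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy 2D point robot with single-integrator dynamics x' = u and a circular
# keep-out zone. h(x) = ||x - c||^2 - R^2 is a valid CBF for this system
# (h >= 0 on the safe set). Constants are hypothetical.
CENTER = np.array([2.0, 2.0])   # unsafe disc center (assumed)
RADIUS = 1.0
ALPHA = 1.0                     # class-K gain in h_dot + ALPHA * h >= 0
DT = 0.05

def h(x):
    d = x - CENTER
    return d @ d - RADIUS**2

def reward(x, goal):
    """The paper's temporal-damage model: a penalty accrues for every step
    spent in the unsafe zone, and the episode is NOT terminated."""
    r = -np.linalg.norm(x - goal)   # dense progress term (assumed shape)
    if h(x) < 0.0:
        r -= 1.0                    # sustained negative reward, no reset
    return r

def cbf_filter(x, u_rl):
    """Minimally modify the RL action so that grad_h . u >= -ALPHA * h(x).
    With a single linear constraint, the QP min ||u - u_rl||^2 has the
    closed-form projection below."""
    grad_h = 2.0 * (x - CENTER)
    slack = grad_h @ u_rl + ALPHA * h(x)
    if slack >= 0.0:
        return u_rl                 # action already satisfies the CBF condition
    return u_rl - slack * grad_h / (grad_h @ grad_h)

# Tiny rollout with a naive go-to-goal policy standing in for the RL agent.
x, goal = np.zeros(2), np.array([4.0, 3.0])
for _ in range(300):
    u_rl = np.clip(goal - x, -1.0, 1.0)   # placeholder policy
    x = x + DT * cbf_filter(x, u_rl)
    assert h(x) > -1e-9                   # the filter keeps the state safe
```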
Related papers
- Safely Learning Controlled Stochastic Dynamics [61.82896036131116]
We introduce a method that ensures safe exploration and efficient estimation of system dynamics. After training, the learned model enables predictions of the system's dynamics and permits safety verification of any given control. We provide theoretical guarantees for safety and derive adaptive learning rates that improve with increasing Sobolev regularity of the true dynamics.
arXiv Detail & Related papers (2025-06-03T11:17:07Z)
- World Models for Anomaly Detection during Model-Based Reinforcement Learning Inference [3.591122855617648]
Learning-based controllers are often purposefully kept out of real-world applications due to concerns about their safety and reliability. We explore how state-of-the-art world models in Model-Based Reinforcement Learning can be utilized beyond the training phase to ensure a deployed policy only operates within regions of the state-space it is sufficiently familiar with.
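One concrete way to realize such a familiarity check (a sketch under assumptions; the paper's world-model architecture and anomaly score are not specified here) is to gate the policy on the disagreement of an ensemble dynamics model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned world model: an ensemble of slightly
# perturbed linear dynamics models f_i(s, a) = A_i s + B_i a. Ensemble
# disagreement is one common out-of-distribution signal; the paper's exact
# mechanism may differ.
ENSEMBLE = [(np.eye(2) + 0.01 * rng.normal(size=(2, 2)),
             0.1 * np.eye(2) + 0.01 * rng.normal(size=(2, 2)))
            for _ in range(5)]

def disagreement(s, a):
    preds = np.stack([A @ s + B @ a for A, B in ENSEMBLE])
    return preds.std(axis=0).max()   # max per-dimension std across members

# Calibrate a threshold on states seen during training (assumed available),
# e.g. the 99th percentile of in-distribution disagreement.
train_scores = [disagreement(rng.normal(size=2), rng.normal(size=2))
                for _ in range(1000)]
THRESHOLD = np.quantile(train_scores, 0.99)

def safe_act(s, policy_action, fallback_action):
    """Run the learned policy only where the world model is 'familiar';
    otherwise hand control to a conservative fallback controller."""
    if disagreement(s, policy_action) > THRESHOLD:
        return fallback_action
    return policy_action
```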
arXiv Detail & Related papers (2025-03-04T12:25:01Z)
- ActSafe: Active Exploration with Safety Constraints for Reinforcement Learning [48.536695794883826]
We present ActSafe, a novel model-based RL algorithm for safe and efficient exploration. We show that ActSafe guarantees safety during learning while also obtaining a near-optimal policy in finite time. In addition, we propose a practical variant of ActSafe that builds on the latest model-based RL advancements.
arXiv Detail & Related papers (2024-10-12T10:46:02Z)
- Safety-Driven Deep Reinforcement Learning Framework for Cobots: A Sim2Real Approach [1.0488553716155147]
This study presents a novel methodology for incorporating safety constraints into a robotic simulation during deep reinforcement learning (DRL) training.
The framework integrates specific parts of the safety requirements, such as velocity constraints, directly within the DRL model.
The proposed approach outperforms the conventional method by 16.5% in average success rate on the tested scenarios.
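A minimal sketch of embedding a velocity constraint directly in the action pathway, so the policy can never command an unsafe speed (the limits and joint count are hypothetical, not the paper's values):

```python
import numpy as np

# Hypothetical per-joint velocity limits (rad/s) for a 6-DoF cobot.
V_MAX = np.array([1.0, 1.0, 1.5, 2.0, 2.0, 2.5])

def constrain_action(raw_action):
    """Map a raw policy output to a velocity command that can never exceed
    the safety limits, so constraint satisfaction is built into the action
    space rather than penalized after the fact."""
    return np.tanh(raw_action) * V_MAX

# The robot only ever sees constrained commands, in training and deployment:
cmd = constrain_action(np.array([0.3, -2.0, 0.8, 5.0, -0.1, 0.0]))
assert np.all(np.abs(cmd) <= V_MAX)
```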
arXiv Detail & Related papers (2024-07-02T12:56:17Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- Reinforcement Learning for Safe Robot Control using Control Lyapunov Barrier Functions [9.690491406456307]
Reinforcement learning (RL) exhibits impressive performance when managing complicated control tasks for robots.
This paper explores the control Lyapunov barrier function (CLBF) to analyze safety and reachability based solely on data.
We also propose the Lyapunov barrier actor-critic (LBAC) to search for a controller that satisfies the data-based approximation of the safety and reachability conditions.
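The data-based safety and reachability conditions can be illustrated by the CLBF decrease residual below; the quadratic candidate function and the toy transitions are assumptions for illustration, not the LBAC objective itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate control Lyapunov barrier function, a fixed quadratic
# L(s) = s^T P s used purely for illustration; LBAC learns such a
# function jointly with the policy from data.
P = np.eye(2)

def clbf(s):
    return s @ P @ s

def decrease_violation(s, s_next, alpha=0.1):
    """Data-based CLBF decrease condition: along transitions the value
    should satisfy L(s') - L(s) <= -alpha * L(s). The positive residual
    measures how badly a transition violates it; an LBAC-style critic
    loss drives this toward zero over the replay buffer (our reading of
    the abstract, not the authors' exact loss)."""
    return max(0.0, clbf(s_next) - clbf(s) + alpha * clbf(s))

# Residual over a batch of transitions from a contracting toy system.
states = rng.normal(size=(256, 2))
next_states = 0.9 * states
violations = [decrease_violation(s, sn) for s, sn in zip(states, next_states)]
print(f"mean CLBF violation: {np.mean(violations):.4f}")   # 0.0 here
```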
arXiv Detail & Related papers (2023-05-16T20:27:02Z)
- A Multiplicative Value Function for Safe and Efficient Reinforcement Learning [131.96501469927733]
We propose a safe model-free RL algorithm with a novel multiplicative value function consisting of a safety critic and a reward critic.
The safety critic predicts the probability of constraint violation and discounts the reward critic that only estimates constraint-free returns.
We evaluate our method in four safety-focused environments, including classical RL benchmarks augmented with safety constraints and robot navigation tasks with images and raw Lidar scans as observations.
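A toy rendering of the multiplicative composition, assuming nonnegative constraint-free returns; both critics here are hand-written stand-ins for the learned networks:

```python
import numpy as np

def reward_critic(s, a):
    """Constraint-free return estimate (assumed shape, peaks at a = 1)."""
    return -((a - 1.0) ** 2) + 5.0

def safety_critic(s, a):
    """Predicted probability of a constraint violation; grows as the
    action pushes toward a hypothetical unsafe region a > 2."""
    return 1.0 / (1.0 + np.exp(-4.0 * (a - 2.0)))

def multiplicative_value(s, a):
    # The reward critic is discounted by the probability of staying safe,
    # so risky actions lose value even when their constraint-free return
    # is high.
    return reward_critic(s, a) * (1.0 - safety_critic(s, a))

actions = np.linspace(-1.0, 3.0, 101)
best = actions[np.argmax([multiplicative_value(0.0, a) for a in actions])]
print(f"greedy action under the multiplicative value: {best:.2f}")
```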
arXiv Detail & Related papers (2023-03-07T18:29:15Z)
- Constrained Reinforcement Learning for Robotics via Scenario-Based Programming [64.07167316957533]
It is crucial to optimize the performance of DRL-based agents while providing guarantees about their behavior.
This paper presents a novel technique for incorporating domain-expert knowledge into a constrained DRL training loop.
Our experiments demonstrate that using our approach to leverage expert knowledge dramatically improves the safety and the performance of the agent.
arXiv Detail & Related papers (2022-06-20T07:19:38Z)
- Safe Reinforcement Learning Using Black-Box Reachability Analysis [20.875010584486812]
Reinforcement learning (RL) is capable of sophisticated motion planning and control for robots in uncertain environments.
To justify widespread deployment, robots must respect safety constraints without sacrificing performance.
We propose a Black-box Reachability-based Safety Layer (BRSL) with three main components.
arXiv Detail & Related papers (2022-04-15T10:51:09Z)
- Safe Model-Based Reinforcement Learning Using Robust Control Barrier Functions [43.713259595810854]
An increasingly common approach to address safety involves the addition of a safety layer that projects the RL actions onto a safe set of actions.
In this paper, we frame safety as a differentiable robust-control-barrier-function layer in a model-based RL framework.
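A sketch of such a layer for control-affine dynamics, solving the one-constraint projection in closed form; the robustness margin, dynamics, and keep-out set are illustrative assumptions (the paper's layer is additionally differentiable end-to-end inside model-based training):

```python
import numpy as np

# Safety layer that projects a policy action onto the set satisfying a
# robust CBF condition for control-affine dynamics x' = f(x) + g(x) u.
# SIGMA budgets for bounded model error (hypothetical value).
SIGMA = 0.2
ALPHA = 1.0

def robust_cbf_layer(x, u_nom, h, grad_h, f, g):
    """Closed-form solution of min ||u - u_nom||^2 subject to the single
    robust CBF constraint grad_h.(f + g u) + ALPHA*h - SIGMA >= 0."""
    a = g(x).T @ grad_h(x)                        # constraint normal in u-space
    b = grad_h(x) @ f(x) + ALPHA * h(x) - SIGMA   # constant part
    slack = a @ u_nom + b
    if slack >= 0.0:
        return u_nom
    return u_nom - slack * a / (a @ a)

# Toy instance: single integrator (f = 0, g = I) with a keep-out disc.
center, radius = np.array([1.0, 0.0]), 0.5
h = lambda x: (x - center) @ (x - center) - radius**2
grad_h = lambda x: 2.0 * (x - center)
f = lambda x: np.zeros(2)
g = lambda x: np.eye(2)

u_safe = robust_cbf_layer(np.array([0.2, 0.0]), np.array([1.0, 0.0]),
                          h, grad_h, f, g)
print(u_safe)   # nominal action bent away from the keep-out region
```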
arXiv Detail & Related papers (2021-10-11T17:00:45Z)
- Learning to be Safe: Deep RL with a Safety Critic [72.00568333130391]
A natural first approach toward safe RL is to manually specify constraints on the policy's behavior.
We propose to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors.
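One plausible reading of using that learned intuition to constrain future behaviors is rejection sampling through a transferred safety critic; the critic, threshold, and fallback below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def q_safe(s, a):
    """Hypothetical pretrained safety critic: estimated probability that
    action a in state s eventually leads to failure. A toy function
    stands in for a network trained on earlier tasks."""
    return float(np.clip(0.5 * abs(a), 0.0, 1.0))   # riskier as |a| grows

EPSILON = 0.3   # maximum tolerated failure probability (assumed)

def constrained_sample(s, policy_sample, n_tries=20):
    """Rejection-sample the new task's policy through the transferred
    safety critic: keep drawing until an action clears the risk bound,
    falling back to a null action if none does."""
    for _ in range(n_tries):
        a = policy_sample(s)
        if q_safe(s, a) <= EPSILON:
            return a
    return 0.0   # conservative fallback

action = constrained_sample(0.0, lambda s: rng.normal(scale=1.0))
assert q_safe(0.0, action) <= EPSILON
```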
arXiv Detail & Related papers (2020-10-27T20:53:20Z)