Safe Reinforcement Learning Using Robust Action Governor
- URL: http://arxiv.org/abs/2102.10643v1
- Date: Sun, 21 Feb 2021 16:50:17 GMT
- Title: Safe Reinforcement Learning Using Robust Action Governor
- Authors: Yutong Li, Nan Li, H. Eric Tseng, Anouck Girard, Dimitar Filev, Ilya
Kolmanovsky
- Abstract summary: Reinforcement Learning (RL) is essentially a trial-and-error learning procedure which may cause unsafe behavior during the exploration-and-exploitation process.
In this paper, we introduce a framework for safe RL that is based on the integration of an RL algorithm with an add-on safety supervision module.
We illustrate this proposed safe RL framework through an application to automotive adaptive cruise control.
- Score: 6.833157102376731
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement Learning (RL) is essentially a trial-and-error learning
procedure which may cause unsafe behavior during the
exploration-and-exploitation process. This hinders the applications of RL to
real-world control problems, especially to those for safety-critical systems.
In this paper, we introduce a framework for safe RL that is based on the
integration of an RL algorithm with an add-on safety supervision module, called
the Robust Action Governor (RAG), which exploits set-theoretic techniques and
online optimization to manage safety-related requirements during learning. We
illustrate this proposed safe RL framework through an application to automotive
adaptive cruise control.
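The abstract describes the RAG as a supervisor that uses set-theoretic safety conditions and online optimization to override unsafe actions during learning. A minimal one-dimensional sketch of that idea for an adaptive-cruise-control-style setting is below; the dynamics, constants, and function names are illustrative assumptions, not the paper's actual formulation. It selects, from a discretized action range, the admissible acceleration closest to the RL action whose worst-case one-step successor still satisfies a minimum-gap constraint.

```python
def rag_filter(a_rl, gap, rel_speed, dt=0.1, min_gap=5.0,
               a_bounds=(-3.0, 2.0), w_max=1.0):
    """Hypothetical action-governor sketch (not the paper's algorithm).

    a_rl      : acceleration proposed by the RL policy
    gap       : current distance to the lead vehicle
    rel_speed : lead-vehicle speed minus ego speed
    w_max     : worst-case lead-vehicle deceleration (disturbance bound)
    """
    # Discretize the admissible acceleration range.
    lo, hi = a_bounds
    candidates = [lo + i * (hi - lo) / 50 for i in range(51)]
    # Try candidates in order of distance to the RL action, so the
    # first safe one found is the minimal modification.
    for a in sorted(candidates, key=lambda c: abs(c - a_rl)):
        # One-step worst-case prediction: lead vehicle brakes at w_max.
        next_rel_speed = rel_speed - (a + w_max) * dt
        next_gap = gap + next_rel_speed * dt
        if next_gap >= min_gap:
            return a
    return lo  # no candidate is safe: fall back to maximum braking
```

With a large gap the filter passes the RL action through unchanged; when no candidate keeps the worst-case successor safe, it falls back to maximum braking. The real RAG uses a robustly invariant safe set rather than a one-step check.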
Related papers
- Safety through Permissibility: Shield Construction for Fast and Safe Reinforcement Learning [57.84059344739159]
"Shielding" is a popular technique to enforce safety inReinforcement Learning (RL)
We propose a new permissibility-based framework to deal with safety and shield construction.
arXiv Detail & Related papers (2024-05-29T18:00:21Z)
- Reinforcement Learning with Adaptive Control Regularization for Safe Control of Critical Systems [2.126171264016785]
Reinforcement Learning (RL) is a powerful method for controlling dynamic systems, but its learning mechanism can lead to unpredictable actions that undermine the safety of critical systems.
We propose Adaptive Control Regularization (RL-ACR), an algorithm that enables safe RL exploration by combining the RL policy with a policy regularizer that hard-codes safety constraints.
In a series of critical control applications, we demonstrate that RL-ACR ensures safety during training while achieving the performance standards of model-free RL approaches that disregard safety.
arXiv Detail & Related papers (2024-04-23T16:35:14Z)
- Approximate Model-Based Shielding for Safe Reinforcement Learning [83.55437924143615]
We propose a principled look-ahead shielding algorithm for verifying the performance of learned RL policies.
Our algorithm differs from other shielding approaches in that it does not require prior knowledge of the safety-relevant dynamics of the system.
We demonstrate superior performance to other safety-aware approaches on a set of Atari games with state-dependent safety-labels.
arXiv Detail & Related papers (2023-07-27T15:19:45Z)
- OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research [3.0536277689386453]
We introduce a foundational framework designed to expedite SafeRL research endeavors.
Our framework encompasses an array of algorithms spanning different RL domains and places heavy emphasis on safety elements.
arXiv Detail & Related papers (2023-05-16T09:22:14Z)
- Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
arXiv Detail & Related papers (2022-12-14T03:11:25Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Safe Reinforcement Learning using Data-Driven Predictive Control [0.5459797813771499]
We propose a data-driven safety layer that acts as a filter for unsafe actions.
The safety layer penalizes the RL agent if the proposed action is unsafe and replaces it with the closest safe one.
In a simulation, we show that our method outperforms state-of-the-art safe RL methods on the robotics navigation problem.
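The filter-and-penalize mechanism this summary describes can be sketched in one dimension; the interval-shaped safe set, the penalty value, and the function name are illustrative assumptions, since the paper learns its safety layer from data. Safe actions pass through unchanged; an unsafe action is replaced by the closest safe one and a penalty is reported for the agent's reward signal.

```python
def safety_layer(action, safe_low, safe_high, penalty=1.0):
    """Hypothetical sketch of a safety-layer filter: pass safe actions
    through, otherwise substitute the closest safe action and return a
    penalty to be subtracted from the RL agent's reward."""
    if safe_low <= action <= safe_high:
        return action, 0.0
    # Closest point of the safe interval: clip the proposed action.
    closest_safe = min(max(action, safe_low), safe_high)
    return closest_safe, penalty
```

For an interval safe set the closest safe action is simply a clip; with a general data-driven safe set this step becomes an online projection.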
arXiv Detail & Related papers (2022-11-20T17:10:40Z)
- Constrained Reinforcement Learning for Robotics via Scenario-Based Programming [64.07167316957533]
It is crucial to optimize the performance of DRL-based agents while providing guarantees about their behavior.
This paper presents a novel technique for incorporating domain-expert knowledge into a constrained DRL training loop.
Our experiments demonstrate that using our approach to leverage expert knowledge dramatically improves the safety and the performance of the agent.
arXiv Detail & Related papers (2022-06-20T07:19:38Z)
- Model-Based Safe Reinforcement Learning with Time-Varying State and Control Constraints: An Application to Intelligent Vehicles [13.40143623056186]
This paper proposes a safe RL algorithm for optimal control of nonlinear systems with time-varying state and control constraints.
A multi-step policy evaluation mechanism is proposed to predict the policy's safety risk under time-varying safety constraints and guide the policy to update safely.
The proposed algorithm outperforms several state-of-the-art RL algorithms in the simulated Safety Gym environment.
arXiv Detail & Related papers (2021-12-18T10:45:31Z)
- Safe Model-Based Reinforcement Learning Using Robust Control Barrier Functions [43.713259595810854]
An increasingly common approach to address safety involves the addition of a safety layer that projects the RL actions onto a safe set of actions.
In this paper, we frame safety as a differentiable robust-control-barrier-function layer in a model-based RL framework.
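The projection-onto-a-safe-set idea this summary refers to can be illustrated with a one-dimensional discrete-time control-barrier-function condition; the dynamics, barrier function, and constants below are illustrative assumptions, not the paper's robust, differentiable formulation. The CBF condition h(x') >= (1 - alpha) * h(x), with h(x) = x_max - x and x' = x + u * dt, bounds the admissible control from above, so the projection reduces to a clip.

```python
def cbf_layer(u_rl, x, x_max=10.0, dt=0.1, alpha=0.2):
    """Hypothetical 1-D sketch of a CBF safety layer: project the RL
    action onto the set of controls satisfying the discrete-time
    barrier condition h(x') >= (1 - alpha) * h(x), where
    h(x) = x_max - x and x' = x + u * dt."""
    h = x_max - x                # barrier value: distance to the limit
    u_upper = alpha * h / dt     # largest control preserving the condition
    return min(u_rl, u_upper)    # in 1-D the projection is a clip
```

Far from the constraint the RL action is untouched; as h shrinks, the admissible control shrinks proportionally. In higher dimensions the projection is posed as a quadratic program, which the paper makes differentiable for end-to-end training.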
arXiv Detail & Related papers (2021-10-11T17:00:45Z)
- Learning to be Safe: Deep RL with a Safety Critic [72.00568333130391]
A natural first approach toward safe RL is to manually specify constraints on the policy's behavior.
We propose to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors.
arXiv Detail & Related papers (2020-10-27T20:53:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.