Realizable Continuous-Space Shields for Safe Reinforcement Learning
- URL: http://arxiv.org/abs/2410.02038v1
- Date: Wed, 2 Oct 2024 21:08:11 GMT
- Title: Realizable Continuous-Space Shields for Safe Reinforcement Learning
- Authors: Kyungmin Kim, Davide Corsi, Andoni Rodriguez, JB Lanier, Benjami Parellada, Pierre Baldi, Cesar Sanchez, Roy Fox
- Abstract summary: Deep Reinforcement Learning (DRL) remains vulnerable to occasional catastrophic failures without additional safeguards.
One effective solution is to use a shield that validates and adjusts the agent's actions to ensure compliance with a provided set of safety specifications.
We propose the first shielding approach to automatically guarantee the realizability of safety requirements for continuous state and action spaces.
- Score: 13.728961635717134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Deep Reinforcement Learning (DRL) has achieved remarkable success across various domains, it remains vulnerable to occasional catastrophic failures without additional safeguards. One effective solution to prevent these failures is to use a shield that validates and adjusts the agent's actions to ensure compliance with a provided set of safety specifications. For real-life robot domains, it is desirable to be able to define such safety specifications over continuous state and action spaces to accurately account for system dynamics and calculate new safe actions that minimally alter the agent's output. In this paper, we propose the first shielding approach to automatically guarantee the realizability of safety requirements for continuous state and action spaces. Realizability is an essential property that confirms the shield will always be able to generate a safe action for any state in the environment. We formally prove that realizability can also be verified with a stateful shield, enabling the incorporation of non-Markovian safety requirements. Finally, we demonstrate the effectiveness of our approach in ensuring safety without compromising policy accuracy by applying it to a navigation problem and a multi-agent particle environment.
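The abstract describes the generic shielding pattern: validate the agent's proposed action against the safety specifications and, when it fails, compute a safe action that minimally alters the agent's output. As a rough illustration only (not the paper's construction, which additionally guarantees realizability and supports stateful, non-Markovian specifications), the Python sketch below wraps a 2-D navigation action in such a check; the `is_safe` predicate, the toy dynamics, and the grid search over candidate actions are all hypothetical placeholders.

```python
import numpy as np

# Minimal sketch of the generic shielding pattern described in the abstract:
# validate the agent's proposed action and, if unsafe, replace it with a
# nearby safe action. The safety predicate and the candidate search below are
# illustrative placeholders, not the paper's construction.

def is_safe(state: np.ndarray, action: np.ndarray) -> bool:
    """Hypothetical safety predicate: the successor state must stay in a box."""
    next_state = state + 0.1 * action          # toy single-integrator dynamics
    return bool(np.all(np.abs(next_state) <= 1.0))

class Shield:
    """Wraps a policy and minimally adjusts unsafe actions."""

    def __init__(self, candidates_per_dim: int = 21):
        self.candidates_per_dim = candidates_per_dim

    def filter(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        if is_safe(state, action):
            return action                       # safe: pass through unchanged
        # Otherwise search a coarse grid of candidate actions and return the
        # safe one closest (in L2 norm) to the agent's proposal.
        grid = np.linspace(-1.0, 1.0, self.candidates_per_dim)
        best, best_dist = None, np.inf
        for ax in grid:
            for ay in grid:
                cand = np.array([ax, ay])
                if is_safe(state, cand):
                    dist = np.linalg.norm(cand - action)
                    if dist < best_dist:
                        best, best_dist = cand, dist
        if best is None:
            # A realizable shield, as defined in the paper, never reaches this branch.
            raise RuntimeError("no safe action found for this state")
        return best

# Usage: shielded step for a 2-D navigation agent near the boundary.
shield = Shield()
state = np.array([0.95, 0.0])
proposed = np.array([1.0, 0.0])                 # would push the agent out of bounds
print(shield.filter(state, proposed))
```

The realizability property highlighted in the abstract corresponds to the guarantee that the fallback branch above is never needed: for every reachable state, some safe action exists.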
Related papers
- Progressive Safeguards for Safe and Model-Agnostic Reinforcement Learning [5.593642806259113]
We model a meta-learning process where each task is synchronized with a safeguard that monitors safety and provides a reward signal to the agent.
The design of the safeguard is manual but it is high-level and model-agnostic, which gives rise to an end-to-end safe learning approach.
We evaluate our framework in a Minecraft-inspired Gridworld, a VizDoom game environment, and an LLM fine-tuning application.
arXiv Detail & Related papers (2024-10-31T16:28:33Z)
- Nothing in Excess: Mitigating the Exaggerated Safety for LLMs via Safety-Conscious Activation Steering [56.92068213969036]
Safety alignment is indispensable for large language models (LLMs) to defend against threats from malicious instructions.
Recent research reveals that safety-aligned LLMs are prone to rejecting benign queries due to the exaggerated safety issue.
We propose a Safety-Conscious Activation Steering (SCANS) method to mitigate the exaggerated safety concerns.
arXiv Detail & Related papers (2024-08-21T10:01:34Z)
- Verification-Guided Shielding for Deep Reinforcement Learning [4.418183967223081]
Deep Reinforcement Learning (DRL) has emerged as an effective approach to solving real-world tasks.
Various methods have been put forth to address the lack of formal safety guarantees in DRL, most notably shielding and verification.
We present verification-guided shielding -- a novel approach that bridges the DRL reliability gap by integrating these two methods.
arXiv Detail & Related papers (2024-06-10T17:44:59Z)
- Safety through Permissibility: Shield Construction for Fast and Safe Reinforcement Learning [57.84059344739159]
"Shielding" is a popular technique to enforce safety inReinforcement Learning (RL)
We propose a new permissibility-based framework to deal with safety and shield construction.
arXiv Detail & Related papers (2024-05-29T18:00:21Z)
- Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- Approximate Shielding of Atari Agents for Safe Exploration [83.55437924143615]
We propose a principled algorithm for safe exploration based on the concept of shielding.
We present preliminary results that show our approximate shielding algorithm effectively reduces the rate of safety violations.
arXiv Detail & Related papers (2023-04-21T16:19:54Z)
- ISAACS: Iterative Soft Adversarial Actor-Critic for Safety [0.9217021281095907]
This work introduces a novel approach enabling scalable synthesis of robust safety-preserving controllers for robotic systems.
A safety-seeking fallback policy is co-trained with an adversarial "disturbance" agent that aims to invoke the worst-case realization of model error.
While the learned control policy does not intrinsically guarantee safety, it is used to construct a real-time safety filter.
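The switching pattern summarized here, keep the task policy's action unless a learned safety monitor flags it and otherwise defer to the safety-seeking fallback, can be sketched in a few lines. The critic, fallback policy, and threshold below are hypothetical stand-ins, not the ISAACS components.

```python
import numpy as np

# Illustrative fallback-style safety filter (switching logic only; the safety
# critic, fallback policy, and threshold are hypothetical stand-ins).

def safety_critic(state, action):
    """Hypothetical learned safety value: negative means predicted unsafe."""
    return 1.0 - np.linalg.norm(state + 0.1 * action)   # stay inside the unit ball

def fallback_policy(state):
    """Safety-seeking fallback: steer back toward the origin."""
    return -state / (np.linalg.norm(state) + 1e-8)

def filtered_action(state, task_action, threshold=0.0):
    # Keep the task action when the critic predicts it is safe enough;
    # otherwise defer to the fallback policy.
    if safety_critic(state, task_action) > threshold:
        return task_action
    return fallback_policy(state)

state = np.array([0.9, 0.3])
print(filtered_action(state, task_action=np.array([1.0, 0.5])))
```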
arXiv Detail & Related papers (2022-12-06T18:53:34Z)
- Provably Safe Reinforcement Learning via Action Projection using Reachability Analysis and Polynomial Zonotopes [9.861651769846578]
We develop a safety shield for nonlinear continuous systems that solve reach-avoid tasks.
Our approach is called action projection and is implemented via mixed-integer optimization.
In contrast to other state-of-the-art approaches for action projection, our safety shield can efficiently handle input constraints and obstacles.
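The paper realizes action projection with mixed-integer optimization over reachable sets computed with polynomial zonotopes; that machinery is not reproduced here. The sketch below only illustrates the basic projection idea, finding the action closest to the agent's proposal that still satisfies the safety constraints, using a toy box constraint and a single hypothetical half-space condition solved with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative action projection: the closest action to the agent's proposal
# that satisfies the safety constraints. The safe set here is just actuator
# bounds plus one hypothetical half-space condition on the successor state.

def project_action(a_proposed, state):
    # Hypothetical safety constraint on toy dynamics next = state + 0.1 * a:
    # require next[1] <= 0.8.
    cons = {"type": "ineq",
            "fun": lambda a: 0.8 - (state[1] + 0.1 * a[1])}
    bounds = [(-1.0, 1.0)] * len(a_proposed)       # actuator limits
    res = minimize(lambda a: np.sum((a - a_proposed) ** 2),
                   x0=np.clip(a_proposed, -1.0, 1.0),
                   bounds=bounds, constraints=[cons], method="SLSQP")
    return res.x

state = np.array([0.0, 0.79])
print(project_action(np.array([0.0, 1.0]), state))  # vertical component is reduced
```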
arXiv Detail & Related papers (2022-10-19T16:06:12Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
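For context, the sketch below shows the plain control-barrier-function (CBF) safety filter that such reformulations start from: a quadratic program that minimally corrects the nominal control so the barrier condition holds. It is the textbook single-constraint case (which admits a closed-form half-space projection), not the model-uncertainty-aware version proposed in the paper; the dynamics and barrier are hypothetical.

```python
import numpy as np

# Plain CBF safety filter for a single integrator x' = u with barrier
# h(x) = 1 - ||x||^2 (stay inside the unit disk). For one linear constraint,
# the CBF quadratic program reduces to the half-space projection below.

def cbf_filter(x, u_des, alpha=1.0):
    grad_h = -2.0 * x                       # gradient of h(x) = 1 - ||x||^2
    h = 1.0 - float(x @ x)
    # CBF condition: grad_h . u + alpha * h >= 0, written as a . u >= b.
    a, b = grad_h, -alpha * h
    if a @ u_des >= b:
        return u_des                        # nominal control already satisfies the condition
    # Minimal correction: project u_des onto the half-space {u : a . u >= b}.
    return u_des + (b - a @ u_des) / (a @ a) * a

x = np.array([0.9, 0.0])                    # near the boundary of the safe set
u_des = np.array([1.0, 0.0])                # nominal control pushes outward
print(cbf_filter(x, u_des))
```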
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Safe Reinforcement Learning via Confidence-Based Filters [78.39359694273575]
We develop a control-theoretic approach for certifying state safety constraints for nominal policies learned via standard reinforcement learning techniques.
We provide formal safety guarantees, and empirically demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2022-07-04T11:43:23Z)