Safety Aware Reinforcement Learning (SARL)
- URL: http://arxiv.org/abs/2010.02846v1
- Date: Tue, 6 Oct 2020 16:08:28 GMT
- Title: Safety Aware Reinforcement Learning (SARL)
- Authors: Santiago Miret, Somdeb Majumdar, Carroll Wainwright
- Abstract summary: We focus on scenarios where agents can cause undesired side effects while executing a policy on a primary task.
Since one can define multiple tasks for a given environment dynamics, there are two important challenges.
We propose Safety Aware Reinforcement Learning (SARL) - a framework where a virtual safe agent modulates the actions of a main reward-based agent to minimize side effects.
- Score: 4.4617911035181095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As reinforcement learning agents become increasingly integrated into complex,
real-world environments, designing for safety becomes a critical consideration.
We specifically focus on scenarios where agents can cause undesired side
effects while executing a policy on a primary task. Since multiple tasks can be
defined for a given environment's dynamics, there are two important
challenges. First, we need to abstract the concept of safety that applies
broadly to that environment independent of the specific task being executed.
Second, we need a mechanism for the abstracted notion of safety to modulate
the actions of agents executing different policies so as to minimize their
side effects.
In this work, we propose Safety Aware Reinforcement Learning (SARL) - a
framework where a virtual safe agent modulates the actions of a main
reward-based agent to minimize side effects. The safe agent learns a
task-independent notion of safety for a given environment. The main agent is
then trained with a regularization loss given by the distance between the
native action probabilities of the two agents. Since the safe agent effectively
abstracts a task-independent notion of safety via its action probabilities, it
can be ported to modulate multiple policies solving different tasks within the
given environment without further training. We contrast this with solutions
that rely on task-specific regularization metrics and test our framework on the
SafeLife Suite, based on Conway's Game of Life, comprising a number of complex
tasks in dynamic environments. We show that our solution is able to match the
performance of solutions that rely on task-specific side-effect penalties on
both the primary and safety objectives while additionally providing the benefit
of generalizability and portability.
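A minimal sketch may help make the training objective concrete. The snippet below illustrates, under stated assumptions, how the main agent's task loss could be regularized by the distance between its action distribution and that of a frozen, pre-trained safe agent. The names (`sarl_loss`, `main_logits`, `safe_logits`, `beta`) are hypothetical, and KL divergence stands in for the distance metric, which the abstract does not pin down.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of SARL-style regularization. The choice of KL
# divergence and all names here are assumptions for illustration, not the
# paper's exact implementation.

def sarl_loss(task_loss: torch.Tensor,
              main_logits: torch.Tensor,
              safe_logits: torch.Tensor,
              beta: float = 0.1) -> torch.Tensor:
    """Combine the main agent's task loss with a safety regularizer.

    The regularizer is a distance between the action distributions of the
    main (reward-based) agent and a pre-trained, frozen safe agent.
    """
    main_log_probs = F.log_softmax(main_logits, dim=-1)
    # The safe agent is frozen: gradients only flow through the main agent.
    safe_probs = F.softmax(safe_logits.detach(), dim=-1)
    # KL(safe || main): penalize the main agent for assigning low probability
    # to actions the safe agent considers likely.
    distance = F.kl_div(main_log_probs, safe_probs, reduction="batchmean")
    return task_loss + beta * distance
```

Because the safe agent's action probabilities encode a task-independent notion of safety, the same frozen safe agent could in principle regularize different main agents solving different tasks in the same environment without retraining, which is the portability property the abstract emphasizes.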
Related papers
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To investigate this problem empirically, we developed SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z) - Multi-Agent Reinforcement Learning with Control-Theoretic Safety Guarantees for Dynamic Network Bridging [0.11249583407496219]
This work introduces a hybrid approach that integrates Multi-Agent Reinforcement Learning with control-theoretic methods to ensure safe and efficient distributed strategies.
Our contributions include a novel setpoint update algorithm that dynamically adjusts agents' positions to preserve safety conditions without compromising the mission's objectives.
arXiv Detail & Related papers (2024-04-02T01:30:41Z) - Uniformly Safe RL with Objective Suppression for Multi-Constraint Safety-Critical Applications [73.58451824894568]
The widely adopted CMDP model constrains risks only in expectation, which leaves room for dangerous behaviors in long-tail states.
In safety-critical domains, such behaviors could lead to disastrous outcomes.
We propose Objective Suppression, a novel method that adaptively suppresses the task's reward-maximizing objectives according to a safety critic.
arXiv Detail & Related papers (2024-02-23T23:22:06Z) - HAZARD Challenge: Embodied Decision Making in Dynamically Changing
Environments [93.94020724735199]
HAZARD consists of three unexpected disaster scenarios, including fire, flood, and wind.
This benchmark enables us to evaluate autonomous agents' decision-making capabilities across various pipelines.
arXiv Detail & Related papers (2024-01-23T18:59:43Z) - Safe Reinforcement Learning with Dead-Ends Avoidance and Recovery [13.333197887318168]
Safety is one of the main challenges in applying reinforcement learning to tasks in realistic environments.
We propose a method to construct a boundary that discriminates safe and unsafe states.
Our approach achieves better task performance with fewer safety violations than state-of-the-art algorithms.
arXiv Detail & Related papers (2023-06-24T12:02:50Z) - Safety-Constrained Policy Transfer with Successor Features [19.754549649781644]
We propose a Constrained Markov Decision Process (CMDP) formulation that enables the transfer of policies and adherence to safety constraints.
Our approach relies on a novel extension of generalized policy improvement to constrained settings via a Lagrangian formulation.
Our experiments in simulated domains show that our approach is effective; it visits unsafe states less frequently and outperforms alternative state-of-the-art methods when taking safety constraints into account.
arXiv Detail & Related papers (2022-11-10T06:06:36Z) - Sim-to-Lab-to-Real: Safe Reinforcement Learning with Shielding and
Generalization Guarantees [7.6347172725540995]
Safety is a critical component of autonomous systems and remains a challenge for learning-based policies to be utilized in the real world.
We propose Sim-to-Lab-to-Real to bridge the reality gap with a probabilistically guaranteed safety-aware policy distribution.
arXiv Detail & Related papers (2022-01-20T18:41:01Z) - MESA: Offline Meta-RL for Safe Adaptation and Fault Tolerance [73.3242641337305]
Recent work learns risk measures that estimate the probability of violating constraints, which can then be used to enable safety.
We cast safe exploration as an offline meta-RL problem, where the objective is to leverage examples of safe and unsafe behavior across a range of environments.
We then propose MEta-learning for Safe Adaptation (MESA), an approach for meta-learning a risk measure for safe RL.
arXiv Detail & Related papers (2021-12-07T08:57:35Z) - Learning to Be Cautious [71.9871661858886]
A key challenge in the field of reinforcement learning is to develop agents that behave cautiously in novel situations.
We present a sequence of tasks where cautious behavior becomes increasingly non-obvious, as well as an algorithm to demonstrate that it is possible for a system to *learn* to be cautious.
arXiv Detail & Related papers (2021-10-29T16:52:45Z) - DESTA: A Framework for Safe Reinforcement Learning with Markov Games of
Intervention [17.017957942831938]
Current approaches for tackling safe learning in reinforcement learning (RL) lead to a trade-off between safe exploration and fulfilling the task.
We introduce a new two-player framework for safe RL called Distributive Exploration Safety Training Algorithm (DESTA).
arXiv Detail & Related papers (2021-10-27T14:35:00Z) - Safe Reinforcement Learning via Curriculum Induction [94.67835258431202]
In safety-critical applications, autonomous agents may need to learn in an environment where mistakes can be very costly.
Existing safe reinforcement learning methods make an agent rely on priors that let it avoid dangerous situations.
This paper presents an alternative approach inspired by human teaching, where an agent learns under the supervision of an automatic instructor.
arXiv Detail & Related papers (2020-06-22T10:48:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.