Safe and Sample-efficient Reinforcement Learning for Clustered Dynamic Environments
- URL: http://arxiv.org/abs/2303.14265v1
- Date: Fri, 24 Mar 2023 20:29:17 GMT
- Title: Safe and Sample-efficient Reinforcement Learning for Clustered Dynamic Environments
- Authors: Hongyi Chen and Changliu Liu
- Abstract summary: This study proposes a safe and sample-efficient reinforcement learning (RL) framework to address two major challenges.
We use the safe set algorithm (SSA) to monitor and modify the nominal controls, and evaluate SSA+RL in a clustered dynamic environment.
Our framework achieves better safety performance compared to other safe RL methods during training and solves the task with substantially fewer episodes.
- Score: 4.111899441919165
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study proposes a safe and sample-efficient reinforcement learning (RL) framework to address two major challenges in developing applicable RL algorithms: satisfying safety constraints and learning efficiently with limited samples. To guarantee safety in real-world complex environments, we use the safe set algorithm (SSA) to monitor and modify the nominal controls, and evaluate SSA+RL in a clustered dynamic environment that is challenging for existing RL algorithms to solve. However, the SSA+RL framework is usually not sample-efficient, especially in reward-sparse environments, which has not been addressed in previous safe RL works. To improve learning efficiency, we propose three techniques: (1) avoiding overly conservative behavior by adapting the SSA; (2) encouraging safe exploration using random network distillation with safety constraints; (3) improving policy convergence by treating SSA as expert demonstrations and learning directly from them. The experimental results show that our framework achieves better safety performance than other safe RL methods during training and solves the task with substantially fewer episodes. Project website: https://hychen-naza.github.io/projects/Safe_RL/.
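In the SSA+RL setup described above, the safe set algorithm sits between the RL policy and the environment: it monitors a safety index and minimally modifies the policy's nominal control whenever that control would push the system toward the unsafe set. The sketch below illustrates such a monitor-and-modify filter for control-affine dynamics; the quadratic safety index, the dynamics interface (`f`, `g`), and the margin `ETA` are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

# Minimal sketch of an SSA-style safety filter around an RL policy.
# The single-obstacle, quadratic safety index is a simplifying assumption.

ETA = 0.1  # required decrease rate of the safety index when phi >= 0


def safety_index(x, obstacle, d_min=1.0):
    """phi >= 0 flags states that are too close to the obstacle."""
    d = np.linalg.norm(x[:2] - obstacle[:2])
    return d_min ** 2 - d ** 2


def safety_index_grad(x, obstacle):
    """Gradient of phi with respect to the state (only positions enter phi)."""
    grad = np.zeros_like(x)
    grad[:2] = -2.0 * (x[:2] - obstacle[:2])
    return grad


def ssa_filter(x, u_nominal, obstacle, f, g):
    """Monitor the nominal control and minimally modify it so that
    phi_dot <= -ETA whenever phi >= 0, for dynamics x_dot = f(x) + g(x) @ u."""
    phi = safety_index(x, obstacle)
    if phi < 0:
        return u_nominal                      # safe state: do not intervene
    grad = safety_index_grad(x, obstacle)
    lf = grad @ f(x)                          # drift contribution to phi_dot
    lg = grad @ g(x)                          # control contribution to phi_dot
    violation = lf + lg @ u_nominal + ETA
    if violation <= 0:
        return u_nominal                      # nominal control already decreases phi
    # Minimum-norm correction along lg that exactly removes the violation.
    return u_nominal - violation * lg / (lg @ lg + 1e-8)


# Example with single-integrator dynamics (also an assumption):
f = lambda x: np.zeros_like(x)
g = lambda x: np.eye(len(x))
u_safe = ssa_filter(x=np.array([0.5, 0.0]), u_nominal=np.array([1.0, 0.0]),
                    obstacle=np.array([1.2, 0.0]), f=f, g=g)
```

During training, the executed control is `u_safe` rather than the policy's raw output; the adaptation, safe-exploration, and demonstration-learning techniques listed in the abstract build on top of this basic filtering loop.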
Related papers
- ActSafe: Active Exploration with Safety Constraints for Reinforcement Learning [48.536695794883826]
We present ActSafe, a novel model-based RL algorithm for safe and efficient exploration.
We show that ActSafe guarantees safety during learning while also obtaining a near-optimal policy in finite time.
In addition, we propose a practical variant of ActSafe that builds on latest model-based RL advancements.
arXiv Detail & Related papers (2024-10-12T10:46:02Z)
- Safety through Permissibility: Shield Construction for Fast and Safe Reinforcement Learning [57.84059344739159]
"Shielding" is a popular technique to enforce safety inReinforcement Learning (RL)
We propose a new permissibility-based framework to deal with safety and shield construction.
arXiv Detail & Related papers (2024-05-29T18:00:21Z)
- Sampling-based Safe Reinforcement Learning for Nonlinear Dynamical Systems [15.863561935347692]
We develop provably safe and convergent reinforcement learning algorithms for control of nonlinear dynamical systems.
Recent advances at the intersection of control and RL follow a two-stage, safety filter approach to enforcing hard safety constraints.
We develop a single-stage, sampling-based approach to hard constraint satisfaction that learns RL controllers enjoying classical convergence guarantees.
arXiv Detail & Related papers (2024-03-06T19:39:20Z)
- Approximate Model-Based Shielding for Safe Reinforcement Learning [83.55437924143615]
We propose a principled look-ahead shielding algorithm for verifying the performance of learned RL policies.
Our algorithm differs from other shielding approaches in that it does not require prior knowledge of the safety-relevant dynamics of the system.
We demonstrate superior performance to other safety-aware approaches on a set of Atari games with state-dependent safety-labels.
arXiv Detail & Related papers (2023-07-27T15:19:45Z)
- A Multiplicative Value Function for Safe and Efficient Reinforcement Learning [131.96501469927733]
We propose a safe model-free RL algorithm with a novel multiplicative value function consisting of a safety critic and a reward critic.
The safety critic predicts the probability of constraint violation and discounts the reward critic, which only estimates constraint-free returns (see the sketch after this list).
We evaluate our method in four safety-focused environments, including classical RL benchmarks augmented with safety constraints and robot navigation tasks with images and raw Lidar scans as observations.
arXiv Detail & Related papers (2023-03-07T18:29:15Z)
- Adaptive Aggregation for Safety-Critical Control [3.1692938090731584]
We propose an adaptive aggregation framework for safety-critical control.
Our algorithm can achieve fewer safety violations while showing better data efficiency compared with several baselines.
arXiv Detail & Related papers (2023-02-07T16:53:33Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Provably Safe Reinforcement Learning: Conceptual Analysis, Survey, and Benchmarking [12.719948223824483]
Ensuring the safety of reinforcement learning (RL) algorithms is crucial to unlock their potential for many real-world tasks.
However, vanilla RL and most safe RL approaches do not guarantee safety.
We introduce a categorization of existing provably safe RL methods, present the conceptual foundations for both continuous and discrete action spaces, and empirically benchmark existing methods.
We provide practical guidance on selecting provably safe RL approaches depending on the safety specification, RL algorithm, and type of action space.
arXiv Detail & Related papers (2022-05-13T16:34:36Z)
- Safe Policy Optimization with Local Generalized Linear Function Approximations [17.84511819022308]
Existing safe exploration methods guarantee safety under the assumption of regularity.
We propose a novel algorithm, SPO-LF, that optimizes an agent's policy while learning the relationship between locally available features obtained by sensors and environmental reward/safety.
We experimentally show that our algorithm is 1) more efficient in terms of sample complexity and computational cost and 2) more applicable to large-scale problems than previous safe RL methods with theoretical guarantees.
arXiv Detail & Related papers (2021-11-09T00:47:50Z)
- Learning to be Safe: Deep RL with a Safety Critic [72.00568333130391]
A natural first approach toward safe RL is to manually specify constraints on the policy's behavior.
We propose to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors.
arXiv Detail & Related papers (2020-10-27T20:53:20Z)
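As a companion to the multiplicative value function entry above, the following is a minimal sketch of how a safety critic's violation probability can multiplicatively discount a reward critic. The sigmoid/linear stand-ins below are illustrative assumptions; in the cited paper both critics are learned neural networks.

```python
import numpy as np

# Illustrative stand-ins for learned critics; the parameterizations below are
# assumptions made only to show the multiplicative composition.

def safety_critic(state, w_s):
    """Predicts the probability of a future constraint violation from `state`."""
    return float(1.0 / (1.0 + np.exp(-(w_s @ state))))  # sigmoid -> (0, 1)


def reward_critic(state, w_r):
    """Estimates the constraint-free return from `state` (toy linear critic)."""
    return float(w_r @ state)


def multiplicative_value(state, w_s, w_r):
    """Reward estimate discounted by the probability of staying constraint-free."""
    p_violation = safety_critic(state, w_s)
    return (1.0 - p_violation) * reward_critic(state, w_r)


# A state judged risky contributes little value even if its constraint-free
# return estimate is high.
state = np.array([1.0, 0.5])
print(multiplicative_value(state, w_s=np.array([3.0, 3.0]), w_r=np.array([2.0, 2.0])))
```

Each critic would be trained against its own target (violation indicators for the safety critic, environment rewards for the reward critic); only the composition of the two estimates is shown here.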