Leveraging Approximate Model-based Shielding for Probabilistic Safety Guarantees in Continuous Environments
- URL: http://arxiv.org/abs/2402.00816v1
- Date: Thu, 1 Feb 2024 17:55:08 GMT
- Title: Leveraging Approximate Model-based Shielding for Probabilistic Safety Guarantees in Continuous Environments
- Authors: Alexander W. Goodall, Francesco Belardinelli
- Abstract summary: We extend the approximate model-based shielding framework to the continuous setting.
In particular we use Safety Gym as our test-bed, allowing for a more direct comparison of AMBS with popular constrained RL algorithms.
- Score: 63.053364805943026
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Shielding is a popular technique for achieving safe reinforcement learning (RL). However, classical shielding approaches come with quite restrictive assumptions, making them difficult to deploy in complex environments, particularly those with continuous state or action spaces. In this paper we extend the more versatile approximate model-based shielding (AMBS) framework to the continuous setting. In particular, we use Safety Gym as our test-bed, allowing for a more direct comparison of AMBS with popular constrained RL algorithms. We also provide strong probabilistic safety guarantees for the continuous setting. In addition, we propose two novel penalty techniques that directly modify the policy gradient, which empirically provide more stable convergence in our experiments.
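The abstract does not detail the two penalty techniques, but the underlying idea, folding an estimated violation probability directly into the policy-gradient update, can be sketched. The following is a minimal, hypothetical PyTorch rendition; the function name, the estimators, and the fixed coefficient `alpha` are assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a safety penalty folded directly into the
# policy gradient, in the spirit of (but not identical to) the paper's
# penalty techniques. All names and the constant alpha are assumptions.
import torch

def penalised_policy_gradient_loss(log_probs, reward_advantages,
                                   violation_probs, alpha=10.0):
    """REINFORCE-style surrogate loss with a safety penalty.

    log_probs         : log pi(a_t | s_t) for sampled actions, shape (T,)
    reward_advantages : task-reward advantage estimates, shape (T,)
    violation_probs   : estimated probability that the shield would
                        reject a_t (e.g. from imagined model rollouts)
    alpha             : penalty coefficient (assumed constant here)
    """
    task_term = log_probs * reward_advantages.detach()
    penalty_term = log_probs * violation_probs.detach()
    # Gradient ascent on reward, descent on violation probability:
    return -(task_term - alpha * penalty_term).mean()
```

Minimising this loss pushes probability mass away from actions the shield is likely to reject, which is one plausible route to the more stable convergence the abstract reports.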
Related papers
- Practical and Robust Safety Guarantees for Advanced Counterfactual Learning to Rank [64.44255178199846]
We generalize the existing safe counterfactual learning to rank (CLTR) approach to make it applicable to state-of-the-art doubly robust CLTR.
We also propose a novel approach, proximal ranking policy optimization (PRPO), that provides safety in deployment without assumptions about user behavior.
PRPO is the first method with unconditional safety in deployment that translates to robust safety for real-world applications.
arXiv Detail & Related papers (2024-07-29T12:23:59Z)
- Iterative Reachability Estimation for Safe Reinforcement Learning [23.942701020636882]
We propose a new framework, Reachability Estimation for Safe Policy Optimization (RESPO), for safety-constrained reinforcement learning (RL) environments.
In the feasible set where there exist violation-free policies, we optimize for rewards while maintaining persistent safety.
We evaluate the proposed methods on a diverse suite of safe RL environments from Safety Gym, PyBullet, and MuJoCo.
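A toy rendition of this feasible-set switching objective is given below; the `reachability(s)` estimator (the probability that a violation is reachable from s under the current policy) and the threshold `eps` are illustrative assumptions, not RESPO's actual interface.

```python
# Illustrative only: a switching objective in the spirit of
# reachability-based safe RL. `reachability` is an assumed estimator.
import numpy as np

def reachability_switching_objective(states, rewards, reachability,
                                     eps=1e-3):
    """Per-state objective: maximise reward inside the estimated
    feasible set, minimise estimated reachability outside it."""
    r_hat = np.asarray([reachability(s) for s in states])
    feasible = r_hat < eps                      # judged violation-free
    objective = np.where(feasible, np.asarray(rewards), -r_hat)
    return objective, feasible
```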
arXiv Detail & Related papers (2023-09-24T02:36:42Z)
- Approximate Model-Based Shielding for Safe Reinforcement Learning [83.55437924143615]
We propose a principled look-ahead shielding algorithm for verifying the performance of learned RL policies.
Our algorithm differs from other shielding approaches in that it does not require prior knowledge of the safety-relevant dynamics of the system.
We demonstrate superior performance to other safety-aware approaches on a set of Atari games with state-dependent safety-labels.
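As a rough illustration of the look-ahead idea, here is a minimal sketch of sampling-based shielding with a learned model. The `world_model.rollout` and `is_violation` interfaces are assumed for this toy; AMBS itself operates on a learned latent world model rather than this API.

```python
# Minimal sketch of approximate model-based (look-ahead) shielding.
# `world_model.rollout` and `is_violation` are assumed interfaces.
def shielded_action(state, policy, backup_policy, world_model,
                    is_violation, n_samples=32, horizon=15, delta=0.1):
    """Veto the proposed action if the estimated probability of a
    safety violation within `horizon` steps exceeds `delta`."""
    action = policy(state)
    violations = 0
    for _ in range(n_samples):
        trajectory = world_model.rollout(state, action, policy, horizon)
        if any(is_violation(s) for s in trajectory):
            violations += 1
    if violations / n_samples > delta:
        return backup_policy(state)   # fall back to a safe policy
    return action
```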
arXiv Detail & Related papers (2023-07-27T15:19:45Z)
- Approximate Shielding of Atari Agents for Safe Exploration [83.55437924143615]
We propose a principled algorithm for safe exploration based on the concept of shielding.
We present preliminary results that show our approximate shielding algorithm effectively reduces the rate of safety violations.
arXiv Detail & Related papers (2023-04-21T16:19:54Z)
- Risk-Averse Model Uncertainty for Distributionally Robust Safe Reinforcement Learning [3.9821399546174825]
We introduce a deep reinforcement learning framework for safe decision making in uncertain environments.
We provide robustness guarantees for this framework by showing it is equivalent to a specific class of distributionally robust safe reinforcement learning problems.
In experiments on continuous control tasks with safety constraints, we demonstrate that our framework produces robust performance and safety at deployment time across a range of perturbed test environments.
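One common way to realise this kind of distributional robustness is to act on a risk measure, such as CVaR, of costs predicted by an ensemble of plausible dynamics models. The sketch below illustrates that pattern; it is not necessarily the cited paper's exact construction.

```python
# Hedged sketch: risk-averse cost estimation over an ensemble of
# plausible dynamics models, one way to realise distributional
# robustness (not necessarily the cited paper's construction).
import numpy as np

def cvar_cost(costs, risk_level=0.1):
    """Conditional value-at-risk: the mean of the worst `risk_level`
    fraction of per-model cost estimates across the ensemble."""
    costs = np.sort(np.asarray(costs))[::-1]          # worst first
    k = max(1, int(np.ceil(risk_level * len(costs))))
    return costs[:k].mean()

# Example: costs predicted by 10 ensemble members for the same action.
ensemble_costs = [0.2, 0.1, 0.5, 0.3, 0.9, 0.2, 0.1, 0.4, 0.3, 0.2]
print(cvar_cost(ensemble_costs, risk_level=0.2))      # mean of worst 2
```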
arXiv Detail & Related papers (2023-01-30T00:37:06Z)
- Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments [84.3830478851369]
We propose a safe reinforcement learning approach that can jointly learn the environment and optimize the control policy.
Our approach can effectively enforce hard safety constraints and significantly outperforms CMDP-based baseline methods on the system safety rate measured in simulation.
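The soft-barrier idea can be illustrated with a differentiable penalty that stays near zero inside the safe set and grows steeply at the constraint boundary. This toy PyTorch term is a sketch under stated assumptions, not the cited paper's barrier.

```python
# Toy "soft barrier" penalty for a hard constraint g(x) <= 0: a smooth
# term that pushes a gradient-based learner away from the boundary.
import torch
import torch.nn.functional as F

def soft_barrier_penalty(g_values, sharpness=10.0):
    """g_values: g(x) at visited states; constraint is g(x) <= 0.
    softplus(k*g)/k is ~0 well inside the safe set and ~g(x) outside,
    with a smooth transition that keeps gradients informative."""
    return (F.softplus(sharpness * g_values) / sharpness).mean()
```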
arXiv Detail & Related papers (2022-09-29T20:49:25Z)
- Guiding Safe Exploration with Weakest Preconditions [15.469452301122177]
In reinforcement learning for safety-critical settings, it is desirable for the agent to obey safety constraints at all points in time.
We present a novel neurosymbolic approach called SPICE to solve this safe exploration problem.
arXiv Detail & Related papers (2022-09-28T14:58:41Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
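For context, a standard uncertainty-free CBF safety filter can be written in closed form: minimise ||u - u_nom||^2 subject to Lf_h + Lg_h·u + alpha·h(x) >= 0. The sketch below is this baseline only, not the paper's model-uncertainty-aware reformulation; all dynamics quantities are assumed to be supplied by the caller.

```python
# Compact sketch of a CBF "safety filter": project a nominal control
# onto the half-space where the barrier h stays nonnegative.
import numpy as np

def cbf_filter(u_nom, h, Lf_h, Lg_h, alpha=1.0):
    """u_nom: nominal control (n,); h: barrier value h(x);
    Lf_h: scalar Lie derivative of h along f; Lg_h: (n,) along g."""
    constraint = Lf_h + Lg_h @ u_nom + alpha * h
    if constraint >= 0:
        return u_nom                  # nominal control already safe
    denom = Lg_h @ Lg_h
    if denom < 1e-9:
        return u_nom                  # constraint not controllable here
    # Minimal-norm correction that makes the constraint active:
    return u_nom - (constraint / denom) * Lg_h
```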
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Safe Exploration in Model-based Reinforcement Learning using Control Barrier Functions [1.005130974691351]
We develop a novel class of CBFs (LCBFs) that retain the beneficial properties of CBFs for developing minimally-invasive safe control policies.
We show how these LCBFs can be used to augment a learning-based control policy to guarantee safety, and then leverage this approach to develop a safe exploration framework.
arXiv Detail & Related papers (2021-04-16T15:29:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.