Reachability Verification Based Reliability Assessment for Deep
Reinforcement Learning Controlled Robotics and Autonomous Systems
- URL: http://arxiv.org/abs/2210.14991v2
- Date: Mon, 29 Jan 2024 21:25:02 GMT
- Title: Reachability Verification Based Reliability Assessment for Deep
Reinforcement Learning Controlled Robotics and Autonomous Systems
- Authors: Yi Dong, Xingyu Zhao, Sen Wang, Xiaowei Huang
- Abstract summary: Deep Reinforcement Learning (DRL) has achieved impressive performance in robotics and autonomous systems (RAS)
A key challenge to its deployment in real-life operations is the presence of spuriously unsafe DRL policies.
This paper proposes a novel quantitative reliability assessment framework for DRL-controlled RAS.
- Score: 17.679681019347065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Reinforcement Learning (DRL) has achieved impressive performance in
robotics and autonomous systems (RAS). A key challenge to its deployment in
real-life operations is the presence of spuriously unsafe DRL policies.
Unexplored states may lead the agent to make wrong decisions that could result
in hazards, especially in applications where DRL-trained end-to-end controllers
govern the behaviour of RAS. This paper proposes a novel quantitative
reliability assessment framework for DRL-controlled RAS, leveraging
verification evidence generated from formal reliability analysis of neural
networks. A two-level verification framework is introduced to check the safety
property with respect to inaccurate observations that are due to, e.g.,
environmental noise and state changes. Reachability verification tools are
leveraged locally to generate safety evidence of trajectories. In contrast, at
the global level, we quantify the overall reliability as an aggregated metric
of local safety evidence, corresponding to a set of distinct tasks and their
occurrence probabilities. The effectiveness of the proposed verification
framework is demonstrated and validated via experiments on real RAS.
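The two-level structure described above can be sketched in code. The sketch below is a minimal illustration under assumed names (`TaskEvidence`, `local_safety`, `global_reliability` and the example numbers are all hypothetical, not the paper's actual tooling or data): local reachability verification yields per-task safety evidence over trajectories, and the global level aggregates it into one reliability figure weighted by each task's occurrence probability.

```python
from dataclasses import dataclass

@dataclass
class TaskEvidence:
    """Local verification evidence for one operational task (hypothetical schema)."""
    occurrence_prob: float  # probability that this task arises in operation
    n_trajectories: int     # trajectories checked by the reachability tool
    n_verified_safe: int    # trajectories whose reachable sets avoid unsafe regions

def local_safety(ev: TaskEvidence) -> float:
    """Local level: fraction of a task's trajectories proven safe."""
    return ev.n_verified_safe / ev.n_trajectories

def global_reliability(evidence: list[TaskEvidence]) -> float:
    """Global level: aggregate local safety evidence across tasks,
    weighted by each task's occurrence probability."""
    total = sum(ev.occurrence_prob for ev in evidence)
    return sum(ev.occurrence_prob * local_safety(ev) for ev in evidence) / total

# Illustrative operational profile: two tasks with made-up probabilities and counts.
tasks = [
    TaskEvidence(occurrence_prob=0.7, n_trajectories=100, n_verified_safe=99),
    TaskEvidence(occurrence_prob=0.3, n_trajectories=50, n_verified_safe=45),
]
print(round(global_reliability(tasks), 3))  # 0.7*0.99 + 0.3*0.90 = 0.963
```

The weighted average reflects the paper's idea that reliability is an operational-profile-dependent quantity: a policy verified safe only on rare tasks still scores low overall.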
Related papers
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Safety Margins for Reinforcement Learning [74.13100479426424]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- Safe Deep Reinforcement Learning by Verifying Task-Level Properties [84.64203221849648]
Cost functions are commonly employed in Safe Deep Reinforcement Learning (DRL).
The cost is typically encoded as an indicator function due to the difficulty of quantifying the risk of policy decisions in the state space.
In this paper, we investigate an alternative approach that uses domain knowledge to quantify the risk in the proximity of such states by defining a violation metric.
arXiv Detail & Related papers (2023-02-20T15:24:06Z)
- Online Safety Property Collection and Refinement for Safe Deep Reinforcement Learning in Mapless Navigation [79.89605349842569]
We introduce the Collection and Refinement of Online Properties (CROP) framework to design properties at training time.
CROP employs a cost signal to identify unsafe interactions and uses them to shape safety properties.
We evaluate our approach in several robotic mapless navigation tasks and demonstrate that the violation metric computed with CROP allows higher returns and lower violations over previous Safe DRL approaches.
arXiv Detail & Related papers (2023-02-13T21:19:36Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [63.18590014127461]
This paper introduces a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We study the feasibility of the resulting robust safety-critical controller.
We then use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Dependability Analysis of Deep Reinforcement Learning based Robotics and Autonomous Systems [10.499662874457998]
The black-box nature of Deep Reinforcement Learning (DRL) and the uncertain deployment environments of robotics pose new challenges to its dependability.
In this paper, we define a set of dependability properties in temporal logic and construct a Discrete-Time Markov Chain (DTMC) to model the dynamics of risk/failures of a DRL-driven RAS.
Our experimental results show that the proposed method is effective as a holistic assessment framework, while uncovering conflicts between the properties that may require trade-offs during training.
arXiv Detail & Related papers (2021-09-14T08:42:29Z)
- Lyapunov-based uncertainty-aware safe reinforcement learning [0.0]
Reinforcement learning (RL) has shown promising performance in learning optimal policies for a variety of sequential decision-making tasks.
In many real-world RL problems, besides optimizing the main objectives, the agent is expected to satisfy a certain level of safety.
We propose a Lyapunov-based uncertainty-aware safe RL model to address these limitations.
arXiv Detail & Related papers (2021-07-29T13:08:15Z)
- Scalable Synthesis of Verified Controllers in Deep Reinforcement Learning [0.0]
We propose an automated verification pipeline capable of synthesizing high-quality safety shields.
Our key insight involves separating safety verification from neural controller, using pre-computed verified safety shields to constrain neural controller training.
Experimental results over a range of realistic high-dimensional deep RL benchmarks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2021-04-20T19:30:29Z)
- Safety Verification of Model Based Reinforcement Learning Controllers [7.407039316561176]
We present a novel safety verification framework for model-based RL controllers using reachable set analysis.
The proposed framework can efficiently handle models and controllers that are represented using neural networks.
arXiv Detail & Related papers (2020-10-21T03:35:28Z)
- Runtime Safety Assurance Using Reinforcement Learning [37.61747231296097]
This paper aims to design a meta-controller capable of identifying unsafe situations with high accuracy.
We frame the design of RTSA with the Markov decision process (MDP) and use reinforcement learning (RL) to solve it.
arXiv Detail & Related papers (2020-10-20T20:54:46Z)
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows safety properties of decision-making models to be evaluated efficiently in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.