Towards Scalable Verification of RL-Driven Systems
- URL: http://arxiv.org/abs/2105.11931v1
- Date: Tue, 25 May 2021 13:34:40 GMT
- Title: Towards Scalable Verification of RL-Driven Systems
- Authors: Guy Amir, Michael Schapira and Guy Katz
- Abstract summary: whiRL 2.0 is a tool that implements a new approach for verifying complex properties of interest for deep reinforcement learning systems.
We show how our proposed techniques provide insights into the inner workings and the generalizability of DRL systems.
- Score: 4.984294363450854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have gained significant popularity in recent
years, becoming the state of the art in a variety of domains. In particular,
deep reinforcement learning (DRL) has recently been employed to train DNNs that
act as control policies for various types of real-world systems. In this work,
we present the whiRL 2.0 tool, which implements a new approach for verifying
complex properties of interest for such DRL systems. To demonstrate the
benefits of whiRL 2.0, we apply it to case studies from the communication
networks domain that have recently been used to motivate formal verification of
DRL systems, and which exhibit characteristics that are conducive to scalable
verification. We propose techniques for performing k-induction and automated
invariant inference on such systems, and use these techniques for proving
safety and liveness properties of interest that were previously impossible to
verify due to the scalability barriers of prior approaches. Furthermore, we
show how our proposed techniques provide insights into the inner workings and
the generalizability of DRL systems. whiRL 2.0 is publicly available online.
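To make the k-induction scheme mentioned in the abstract concrete, below is a rough, minimal sketch of plain k-induction on a toy integer transition system using the z3 SMT solver. The toy system, variable names, and encoding are illustrative assumptions only; whiRL 2.0's actual queries reason about DNN-based policies through a DNN verification backend rather than this hand-written SMT encoding.

```python
# Minimal k-induction sketch (illustrative only): proves an invariant of a toy
# integer transition system with the z3 SMT solver. This is NOT whiRL 2.0's
# actual encoding, which dispatches DNN-level queries to a neural-network verifier.
from z3 import Int, Solver, And, Not, If, unsat

def k_induction(init, trans, prop, k):
    xs = [Int(f"x_{i}") for i in range(k + 1)]

    # Base case: the property holds on every state reachable within k steps.
    base = Solver()
    base.add(init(xs[0]))
    for i in range(k - 1):
        base.add(trans(xs[i], xs[i + 1]))
    base.add(Not(And([prop(xs[i]) for i in range(k)])))
    if base.check() != unsat:
        return False  # a violation is reachable near the initial states

    # Inductive step: k consecutive states satisfying the property
    # force the next state to satisfy it as well.
    step = Solver()
    for i in range(k):
        step.add(trans(xs[i], xs[i + 1]))
        step.add(prop(xs[i]))
    step.add(Not(prop(xs[k])))
    return step.check() == unsat

# Toy system: a counter that increments until it saturates at 10.
init = lambda x: x == 0
trans = lambda x, xn: xn == If(x < 10, x + 1, x)
prop = lambda x: And(x >= 0, x <= 10)

print(k_induction(init, trans, prop, k=2))  # True: the property is 2-inductive
```

The same two-phase structure (bounded base case plus inductive step over k unrolled transitions) is what makes k-induction attractive for scalability: each query has a fixed, small unrolling depth regardless of how long the system runs.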
Related papers
- ODRL: A Benchmark for Off-Dynamics Reinforcement Learning [59.72217833812439]
We introduce ODRL, the first benchmark tailored for evaluating off-dynamics RL methods.
ODRL contains four experimental settings where the source and target domains can be either online or offline.
We conduct extensive benchmarking experiments, which show that no method has universal advantages across varied dynamics shifts.
arXiv Detail & Related papers (2024-10-28T05:29:38Z)
- A Survey for Deep Reinforcement Learning Based Network Intrusion Detection [3.493620624883548]
This paper explores the potential and challenges of using deep reinforcement learning (DRL) in network intrusion detection.
The performance of DRL models is analyzed, showing that while DRL holds promise, many recent technologies remain underexplored.
The paper concludes with recommendations for enhancing DRL deployment and testing in real-world network scenarios.
arXiv Detail & Related papers (2024-09-25T13:39:30Z)
- D5RL: Diverse Datasets for Data-Driven Deep Reinforcement Learning [99.33607114541861]
We propose a new benchmark for offline RL that focuses on realistic simulations of robotic manipulation and locomotion environments.
Our proposed benchmark covers state-based and image-based domains, and supports both offline RL and online fine-tuning evaluation.
arXiv Detail & Related papers (2024-08-15T22:27:00Z)
- Safe and Reliable Training of Learning-Based Aerospace Controllers [2.0159768535123557]
We present novel advancements in both the training and verification of DRL controllers.
We showcase a design-for-verification approach utilizing k-induction and demonstrate its use in verifying liveness properties.
We also give a brief overview of neural Lyapunov Barrier certificates and summarize their capabilities.
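To make the barrier-certificate idea mentioned above concrete, here is a minimal sketch that empirically checks the standard conditions of a candidate barrier function on sampled states. The toy dynamics, state sets, thresholds, and function names are assumptions for illustration and are not taken from that paper; a neural certificate would replace `barrier` with a trained network and the sampled check with a verifier call.

```python
# Empirical check of standard barrier-certificate conditions on sampled states
# (illustrative sketch only; dynamics, sets, and names are assumptions, not the
# construction from the cited paper).
import numpy as np

def step(x):
    # Toy stable 1-D dynamics: the state decays toward the origin.
    return 0.9 * x

def barrier(x):
    # Candidate barrier: non-positive exactly on the assumed safe region |x| <= 1.
    return np.abs(x) - 1.0

def check_barrier(n_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    init = rng.uniform(-0.5, 0.5, n_samples)    # assumed initial set
    unsafe = rng.uniform(2.0, 3.0, n_samples)   # assumed unsafe set
    states = rng.uniform(-1.0, 1.0, n_samples)  # states inside the sublevel set

    cond_init = np.all(barrier(init) <= 0)               # B <= 0 on initial states
    cond_unsafe = np.all(barrier(unsafe) > 0)            # B > 0 on unsafe states
    cond_invariant = np.all(barrier(step(states)) <= 0)  # sublevel set stays invariant
    return cond_init and cond_unsafe and cond_invariant

print(check_barrier())  # True for this toy system
```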
arXiv Detail & Related papers (2024-07-09T17:58:50Z)
- Generative AI for Deep Reinforcement Learning: Framework, Analysis, and Use Cases [60.30995339585003]
Deep reinforcement learning (DRL) has been widely applied across various fields and has achieved remarkable accomplishments.
DRL faces certain limitations, including low sample efficiency and poor generalization.
We present how to leverage generative AI (GAI) to address these issues and enhance the performance of DRL algorithms.
arXiv Detail & Related papers (2024-05-31T01:25:40Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how such adversarial inputs can undermine the safety of a given DRL system.
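A minimal sketch of how such a susceptibility metric could be computed, under the assumption (ours, not necessarily the paper's definition) that the Adversarial Rate is the fraction of tested inputs for which a verifier finds an adversarial perturbation within a given radius; `has_adversarial_example` is a hypothetical stand-in for a call to a DNN verifier.

```python
# Hypothetical "Adversarial Rate"-style metric: the fraction of tested inputs
# for which a verification query finds an adversarial perturbation within
# radius eps. The exact definition in the cited paper may differ.
from typing import Callable, Sequence

def adversarial_rate(
    inputs: Sequence[object],
    has_adversarial_example: Callable[[object, float], bool],
    eps: float,
) -> float:
    flagged = sum(1 for x in inputs if has_adversarial_example(x, eps))
    return flagged / len(inputs)

# Toy usage: pretend the verifier flags inputs whose first feature exceeds 0.8.
inputs = [(0.1,), (0.9,), (0.5,), (0.95,)]
fake_verifier = lambda x, eps: x[0] > 0.8
print(adversarial_rate(inputs, fake_verifier, eps=0.05))  # 0.5
```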
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- A Comprehensive Survey of Data Augmentation in Visual Reinforcement Learning [53.35317176453194]
Data augmentation (DA) has become a widely used technique in visual RL for acquiring sample-efficient and generalizable policies.
We present a principled taxonomy of the existing augmentation techniques used in visual RL and conduct an in-depth discussion on how to better leverage augmented data.
As the first comprehensive survey of DA in visual RL, this work is expected to offer valuable guidance to this emerging field.
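As a concrete instance of the augmentations such a survey covers, below is a minimal random-shift augmentation for image observations, a technique commonly used in visual RL (in the spirit of methods like DrQ/RAD). The padding size and implementation details are illustrative assumptions, not code from the survey.

```python
# Random-shift augmentation for image observations (illustrative sketch; not
# code from the cited survey). The image is edge-padded and a random crop of
# the original size is taken, shifting the content by a few pixels.
import numpy as np

def random_shift(obs: np.ndarray, pad: int = 4, rng=None) -> np.ndarray:
    """obs: (H, W, C) image; returns a randomly shifted copy of the same shape."""
    rng = rng or np.random.default_rng()
    h, w, _ = obs.shape
    padded = np.pad(obs, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w, :]

obs = np.zeros((84, 84, 3), dtype=np.uint8)
aug = random_shift(obs)
print(aug.shape)  # (84, 84, 3)
```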
arXiv Detail & Related papers (2022-10-10T11:01:57Z)
- Verifying Learning-Based Robotic Navigation Systems [61.01217374879221]
We show how modern verification engines can be used for effective model selection.
Specifically, we use verification to detect and rule out policies that may demonstrate suboptimal behavior.
Our work is the first to demonstrate the use of verification backends for recognizing suboptimal DRL policies in real-world robots.
arXiv Detail & Related papers (2022-05-26T17:56:43Z)
- Federated Deep Reinforcement Learning for the Distributed Control of NextG Wireless Networks [16.12495409295754]
Next Generation (NextG) networks are expected to support demanding Tactile Internet applications such as augmented reality and connected autonomous vehicles.
Data-driven approaches can improve the ability of the network to adapt to the current operating conditions.
Deep RL (DRL) has been shown to achieve good performance even in complex environments.
arXiv Detail & Related papers (2021-12-07T03:13:20Z)
- Learning on Abstract Domains: A New Approach for Verifiable Guarantee in Reinforcement Learning [9.428825075908131]
We propose an abstraction-based approach to train DRL systems on finite abstract domains.
It yields neural networks whose input states are finite, making the DRL systems that host them directly verifiable.
arXiv Detail & Related papers (2021-06-13T06:28:40Z)
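A minimal sketch of the kind of finite abstraction the entry above describes, assuming, purely for illustration, that continuous observations are binned into interval indices before reaching the policy; the binning scheme is our assumption, not the paper's abstract domain construction.

```python
# Illustrative sketch of acting over a finite abstract domain: each continuous
# observation dimension is mapped to one of a fixed number of intervals, so the
# policy only ever sees finitely many abstract states. The binning scheme is an
# assumption, not the abstraction used in the cited paper.
import numpy as np

def make_abstraction(low: np.ndarray, high: np.ndarray, bins: int):
    """Returns a function mapping a continuous observation to a finite abstract state."""
    edges = [np.linspace(l, h, bins + 1)[1:-1] for l, h in zip(low, high)]

    def abstract(obs: np.ndarray) -> tuple:
        # One interval index per dimension; the tuple identifies the abstract state.
        return tuple(int(np.digitize(o, e)) for o, e in zip(obs, edges))

    return abstract

# Toy usage: a 2-D observation space in [-1, 1]^2 abstracted into 10 bins per axis.
abstract = make_abstraction(np.array([-1.0, -1.0]), np.array([1.0, 1.0]), bins=10)
print(abstract(np.array([0.05, -0.73])))  # (5, 1)
```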