Assumption Generation for the Verification of Learning-Enabled
Autonomous Systems
- URL: http://arxiv.org/abs/2305.18372v1
- Date: Sat, 27 May 2023 23:30:27 GMT
- Title: Assumption Generation for the Verification of Learning-Enabled
Autonomous Systems
- Authors: Corina Pasareanu, Ravi Mangal, Divya Gopinath, and Huafeng Yu
- Abstract summary: We present an assume-guarantee style compositional approach for the formal verification of system-level safety properties.
We illustrate our approach on a case study taken from the autonomous airplanes domain.
- Score: 7.580719272198119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Providing safety guarantees for autonomous systems is difficult as these
systems operate in complex environments that require the use of
learning-enabled components, such as deep neural networks (DNNs) for visual
perception. DNNs are hard to analyze due to their size (they can have thousands
or millions of parameters), lack of formal specifications (DNNs are typically
learnt from labeled data, in the absence of any formal requirements), and
sensitivity to small changes in the environment. We present an assume-guarantee
style compositional approach for the formal verification of system-level safety
properties of such autonomous systems. Our insight is that we can analyze the
system in the absence of the DNN perception components by automatically
synthesizing assumptions on the DNN behaviour that guarantee the satisfaction
of the required safety properties. The synthesized assumptions are the weakest
in the sense that they characterize the output sequences of all the possible
DNNs that, plugged into the autonomous system, guarantee the required safety
properties. The assumptions can be leveraged as run-time monitors over a
deployed DNN to guarantee the safety of the overall system; they can also be
mined to extract local specifications for use during training and testing of
DNNs. We illustrate our approach on a case study taken from the autonomous
airplanes domain that uses a complex DNN for perception.
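The synthesized assumptions constrain the sequences of outputs a DNN may produce, so one natural deployment is as a run-time monitor: track the deployed DNN's outputs through an automaton encoding the assumption and switch to a safe fallback as soon as the observed sequence leaves the assumption's language. The following is a minimal Python sketch under that assumption-as-automaton reading; the AssumptionMonitor class, its transition table, and the example outputs are illustrative assumptions and not taken from the paper.

    # Minimal illustrative sketch (not the paper's implementation): the synthesized
    # assumption is modeled as a finite automaton over discretized DNN outputs, and
    # the monitor flags any output sequence that falls outside its language.
    class AssumptionMonitor:
        def __init__(self, transitions, initial_state):
            self.transitions = transitions  # maps (state, dnn_output) -> next state
            self.state = initial_state

        def step(self, dnn_output):
            # Advance on one DNN output; return False if the assumption is violated.
            next_state = self.transitions.get((self.state, dnn_output))
            if next_state is None:
                return False
            self.state = next_state
            return True

    # Hypothetical assumption for illustration: the DNN's discretized position
    # estimate (bins 0..2) may change by at most one bin between consecutive frames.
    transitions = {(s, o): o for s in range(3) for o in range(3) if abs(s - o) <= 1}
    monitor = AssumptionMonitor(transitions, initial_state=1)

    for output in [1, 2, 0, 1]:  # stream of discretized DNN perception outputs
        if not monitor.step(output):
            print("assumption violated -> hand control to a safe fallback controller")
            break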
Related papers
- Unifying Qualitative and Quantitative Safety Verification of DNN-Controlled Systems [18.049286149364075]
The rapid advance of deep reinforcement learning techniques enables the oversight of safety-critical systems through the use of Deep Neural Networks (DNNs).
Most existing verification approaches are qualitative, predominantly employing reachability analysis.
We propose a novel framework for unifying both qualitative and quantitative safety verification problems.
arXiv Detail & Related papers (2024-04-02T09:31:51Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Closed-loop Analysis of Vision-based Autonomous Systems: A Case Study [16.776221250574075]
We present a case study applying formal probabilistic analysis techniques to an experimental autonomous system.
We show how to leverage local, DNN-specific analyses as run-time guards to increase the safety of the overall system.
arXiv Detail & Related papers (2023-02-06T18:56:20Z)
- Enhancing Deep Learning with Scenario-Based Override Rules: a Case Study [0.0]
Deep neural networks (DNNs) have become a crucial instrument in the software development toolkit.
DNNs are highly opaque, and can behave in an unexpected manner when they encounter unfamiliar input.
One promising approach is to extend DNN-based systems with hand-crafted override rules.
arXiv Detail & Related papers (2023-01-19T15:06:32Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Zero-bias Deep Neural Network for Quickest RF Signal Surveillance [14.804498377638696]
The Internet of Things (IoT) is reshaping modern society by allowing a decent number of RF devices to connect and share information through RF channels.
This paper provides a deep learning framework for RF signal surveillance.
We jointly integrate the Deep Neural Networks (DNNs) and Quickest Detection (QD) to form a sequential signal surveillance scheme.
arXiv Detail & Related papers (2021-10-12T07:48:57Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- Continuous Safety Verification of Neural Networks [1.7056768055368385]
This paper considers approaches to transfer results established in the previous DNN safety verification problem to a modified problem setting.
The overall concept is evaluated on a 1/10-scale vehicle equipped with a DNN controller that determines the visual waypoint from the perceived image.
arXiv Detail & Related papers (2020-10-12T13:28:04Z)