Abstraction and Refinement: Towards Scalable and Exact Verification of
Neural Networks
- URL: http://arxiv.org/abs/2207.00759v1
- Date: Sat, 2 Jul 2022 07:04:20 GMT
- Title: Abstraction and Refinement: Towards Scalable and Exact Verification of
Neural Networks
- Authors: Jiaxiang Liu, Yunhan Xing, Xiaomu Shi, Fu Song, Zhiwu Xu, Zhong Ming
- Abstract summary: Deep neural networks (DNNs) have been increasingly deployed in practice, but the lack of robustness hinders their applications in safety-critical domains.
We present a novel abstraction-refinement approach for scalable and exact DNN verification.
- Score: 9.85360493553261
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a new programming paradigm, deep neural networks (DNNs) have been
increasingly deployed in practice, but the lack of robustness hinders their
applications in safety-critical domains. While there are techniques for
verifying DNNs with formal guarantees, they are limited in scalability and
accuracy. In this paper, we present a novel abstraction-refinement approach for
scalable and exact DNN verification. Specifically, we propose a novel
abstraction to break down the size of DNNs by over-approximation. The result of
verifying the abstract DNN is always conclusive if no spurious counterexample
is reported. To eliminate spurious counterexamples introduced by abstraction,
we propose a novel counterexample-guided refinement that refines the abstract
DNN to exclude a given spurious counterexample while still over-approximating
the original one. Our approach is orthogonal to and can be integrated with many
existing verification techniques. For demonstration, we implement our approach
using two promising and exact tools, Marabou and Planet, as the underlying
verification engines, and evaluate on widely-used benchmarks ACAS Xu, MNIST and
CIFAR-10. The results show that our approach can boost their performance by
solving more problems and reducing verification time by up to 86.3% and 78.0%,
respectively. Compared to the most relevant abstraction-refinement approach,
our approach is 11.6-26.6 times faster.
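As a sketch only, the counterexample-guided loop the abstract describes can be pictured as below; every helper name (build_abstraction, verify_engine, refine, and a callable network) is an assumption for illustration, not the paper's API:

```python
def cegar_verify(network, prop, build_abstraction, verify_engine, refine):
    """Sketch of abstraction-refinement verification.

    Assumptions (not from the paper): `network(x)` evaluates the DNN;
    `verify_engine(net, prop)` returns ("safe", None) or ("cex", x) for a
    concrete input x; `prop(x, y)` is True when output y satisfies the
    property for input x.
    """
    abstract_net = build_abstraction(network)  # over-approximates `network`
    while True:
        verdict, x = verify_engine(abstract_net, prop)
        if verdict == "safe":
            # Over-approximation is sound: a safe abstraction implies the
            # original network is safe, so the result is conclusive.
            return "safe", None
        if not prop(x, network(x)):
            # The counterexample replays on the original network.
            return "unsafe", x
        # Spurious counterexample introduced by abstraction: refine the
        # abstract network to exclude x while still over-approximating.
        abstract_net = refine(abstract_net, network, x)
```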
Related papers
- Theoretically Achieving Continuous Representation of Oriented Bounding Boxes [64.15627958879053]
This paper aims to completely resolve the discontinuity issue in Oriented Bounding Box (OBB) representation.
We propose a novel representation method called Continuous OBB (COBB), which can be readily integrated into existing detectors.
For fairness and transparency of experiments, we have developed a modularized benchmark based on JDet, the detection toolbox of the open-source deep learning framework Jittor, for oriented object detection (OOD) evaluation.
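COBB's exact construction is not given in this summary; purely to illustrate the discontinuity being solved, the sketch below uses a generic period-aware angle encoding (not the paper's method): two nearly identical rotated boxes sit far apart in the naive angle parameter but close together in a (sin 2θ, cos 2θ) encoding.

```python
import numpy as np

# Two visually near-identical boxes: the same rectangle at angle ~0 and at
# angle ~pi (OBB angles are typically defined modulo pi).
theta_a, theta_b = 0.01, np.pi - 0.01

# Naive angle parameter: tiny geometric change, huge parameter jump.
print(abs(theta_a - theta_b))                        # ~3.12

# Period-aware continuous encoding (generic trick, not COBB itself):
enc = lambda t: np.array([np.sin(2 * t), np.cos(2 * t)])
print(np.linalg.norm(enc(theta_a) - enc(theta_b)))   # ~0.04
```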
arXiv Detail & Related papers (2024-02-29T09:27:40Z)
- OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks [7.797299214812479]
Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs).
It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors.
Most existing robustness verification approaches for DNNs are focused on non-semantic perturbations.
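To make the perturbation space concrete, here is a brute-force occlusion search, not OccRob's SMT-based verification; `classify` is a hypothetical model:

```python
import numpy as np

def find_occlusion_flip(image, classify, size=4, fill=0.0):
    """Slide a size x size occluding patch over `image` and report a
    position where the (hypothetical) classifier changes its answer.
    This only samples discrete patch positions; a verifier would cover
    the space of occlusions symbolically."""
    base = classify(image)
    h, w = image.shape
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            occluded = image.copy()
            occluded[i:i + size, j:j + size] = fill
            if classify(occluded) != base:
                return (i, j)   # occlusion position that fools the model
    return None
```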
arXiv Detail & Related papers (2023-01-27T18:54:00Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
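What the count means can be shown on a toy discretized input space; the exhaustive enumeration below is for illustration only (the paper computes the exact count without enumerating inputs):

```python
import numpy as np
from itertools import product

def count_violations(net, safe, grid_pts=50, lo=-1.0, hi=1.0, dim=2):
    """Count grid inputs x for which `safe(net(x))` fails. This is the
    quantity the #DNN-Verification problem asks for, restricted here to
    a finite grid for illustration."""
    axis = np.linspace(lo, hi, grid_pts)
    return sum(1 for x in product(axis, repeat=dim)
               if not safe(net(np.array(x))))
```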
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training [14.787056022080625]
This paper presents a novel abstraction-based approach to bypass the crux of over-approximating DNNs in reachability analysis.
We extend conventional DNNs by inserting an additional abstraction layer, which abstracts a real number to an interval for training.
We devise the first black-box reachability analysis approach for DNN-controlled systems, where trained DNNs are only queried as black-box oracles for the actions on abstract states.
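A minimal sketch of such an abstraction layer, assuming a fixed grid width d (the partitioning scheme here is a guess, not the paper's):

```python
import numpy as np

def abstraction_layer(x, d=0.1):
    """Abstract each real-valued input to its enclosing grid interval
    [lo, lo + d); the downstream network is then trained on interval
    endpoints instead of raw reals, so one abstract state covers all
    concrete states in the cell."""
    lo = np.floor(x / d) * d
    return np.stack([lo, lo + d], axis=-1)   # shape (..., 2)

print(abstraction_layer(np.array([0.234, -0.07])))
# [[ 0.2  0.3]
#  [-0.1  0. ]]
```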
arXiv Detail & Related papers (2022-11-21T00:11:50Z)
- Tighter Abstract Queries in Neural Network Verification [0.0]
We present CEGARETTE, a novel verification mechanism where both the system and the property are abstracted and refined simultaneously.
Our results are very promising and demonstrate a significant performance improvement across multiple benchmarks.
arXiv Detail & Related papers (2022-10-23T22:18:35Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counterexample-guided abstraction refinement.
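For intuition, a generic linear relaxation of the sigmoid on an interval (the standard chord/tangent bounds verifiers use; the paper's meta-algorithm refines such bounds with abstraction refinement, which is not reproduced here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_linear_bounds(l, u):
    """Linear lower/upper bounds for sigmoid on [l, u], assuming
    0 <= l < u so the sigmoid is concave there: the chord is a lower
    bound and the tangent at the midpoint is an upper bound. A CEGAR
    scheme would split [l, u] at a spurious counterexample to tighten."""
    assert 0 <= l < u, "sketch covers only the concave region"
    k_lo = (sigmoid(u) - sigmoid(l)) / (u - l)   # chord slope
    b_lo = sigmoid(l) - k_lo * l
    m = 0.5 * (l + u)
    k_up = sigmoid(m) * (1.0 - sigmoid(m))       # sigmoid'(m)
    b_up = sigmoid(m) - k_up * m
    return (k_lo, b_lo), (k_up, b_up)
```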
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
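For reference, the interval bound propagation baseline mentioned above amounts to standard interval arithmetic through each layer; a minimal sketch for one affine-plus-ReLU layer (the implicit-layer case would additionally iterate such bounds to a fixed point):

```python
import numpy as np

def interval_affine_relu(W, b, lo, hi):
    """Propagate an input box [lo, hi] through y = relu(W @ x + b) with
    standard interval arithmetic: positive weights take the matching
    bound, negative weights the opposite one."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    y_lo = Wp @ lo + Wn @ hi + b
    y_hi = Wp @ hi + Wn @ lo + b
    return np.maximum(y_lo, 0), np.maximum(y_hi, 0)   # ReLU is monotone
```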
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
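For context, a generic neural-linear bandit step (Bayesian linear regression on fixed last-layer features with Thompson sampling); the paper's likelihood-matching mechanism for handling feature drift is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuralLinearTS:
    """Thompson sampling over a linear model on (fixed) neural features."""
    def __init__(self, dim, lam=1.0, noise=1.0):
        self.A = lam * np.eye(dim)   # posterior precision
        self.b = np.zeros(dim)
        self.noise = noise

    def choose(self, phi_arms):      # phi_arms: (n_arms, dim) features
        cov = self.noise * np.linalg.inv(self.A)
        w = rng.multivariate_normal(np.linalg.solve(self.A, self.b), cov)
        return int(np.argmax(phi_arms @ w))

    def update(self, phi, reward):   # rank-one Bayesian update
        self.A += np.outer(phi, phi)
        self.b += reward * phi
```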
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Incremental Verification of Fixed-Point Implementations of Neural Networks [0.19573380763700707]
We develop and evaluate a novel symbolic verification framework using incremental bounded model checking (BMC), satisfiability modulo theories (SMT), and invariant inference.
Our approach was able to verify and produce examples for 85.8% of 21 test cases considering different input images, and 100% of the properties related to covering methods.
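To give the SMT flavor on a toy scale (real-valued arithmetic via the z3 solver; the paper's framework encodes fixed-point semantics and adds incremental BMC and invariant inference, none of which is reproduced here):

```python
from z3 import Real, Solver, If, sat   # pip install z3-solver

x = Real("x")
y = If(0.5 * x + 1 > 0, 0.5 * x + 1, 0)   # y = relu(0.5*x + 1)

s = Solver()
s.add(x >= -1, x <= 1)   # input region
s.add(y > 1.4)           # negation of the property "y <= 1.4"
if s.check() == sat:     # sat => a counterexample input exists
    print("counterexample:", s.model()[x])
else:
    print("property holds on [-1, 1]")
```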
arXiv Detail & Related papers (2020-12-21T10:03:44Z)
- Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
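The naive approach the summary contrasts against can be pictured as bisection with a verifier in the loop (the oracle `proves_upper_bound` is hypothetical); the paper instead integrates the optimization into the verification procedure itself:

```python
def maximize_with_verifier(proves_upper_bound, lo, hi, tol=1e-3):
    """Bisection for the global maximum of a network's output, assuming
    the maximum lies in [lo, hi] and proves_upper_bound(c) is True iff a
    sound verifier proves f(x) <= c over all admissible inputs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if proves_upper_bound(mid):
            hi = mid   # proved f <= mid: tighten the upper bound
        else:
            lo = mid   # a counterexample above mid: raise the lower bound
    return 0.5 * (lo + hi)
```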
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
- Accelerating Robustness Verification of Deep Neural Networks Guided by Target Labels [8.9960048245668]
Deep Neural Networks (DNNs) have become key components of many safety-critical applications such as autonomous driving and medical diagnosis.
DNNs suffer from poor robustness due to their susceptibility to adversarial examples: small perturbations to an input can result in misprediction.
We propose a novel approach that can accelerate existing robustness verification techniques by guiding the verification with target labels.
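One plausible reading of that guidance, as a sketch under assumptions (the per-label oracle `verify_not_label` is hypothetical): decompose the robustness query per target label and try the most adversarially likely labels first, so falsifiable cases fail fast:

```python
import numpy as np

def verify_by_target_labels(logits, true_label, verify_not_label):
    """Verify robustness one target label at a time, ordered by logit gap
    (smallest gap = easiest misclassification first). verify_not_label(t)
    is a hypothetical oracle proving the input is never classified as t
    within the perturbation region."""
    gaps = logits[true_label] - logits
    for t in np.argsort(gaps):
        if t == true_label:
            continue
        if not verify_not_label(int(t)):
            return False, int(t)   # possible misclassification toward t
    return True, None
```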
arXiv Detail & Related papers (2020-07-16T00:51:52Z)