Neural Network Verification using Residual Reasoning
- URL: http://arxiv.org/abs/2208.03083v1
- Date: Fri, 5 Aug 2022 10:39:04 GMT
- Title: Neural Network Verification using Residual Reasoning
- Authors: Yizhak Yisrael Elboher, Elazar Cohen, Guy Katz
- Abstract summary: We present an enhancement to abstraction-based verification of neural networks, by using residual reasoning.
In essence, the method allows the verifier to store information about parts of the search space in which the refined network is guaranteed to behave correctly.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increasing integration of neural networks as components in
mission-critical systems, there is an increasing need to ensure that
they satisfy various safety and liveness requirements. In recent
years, numerous sound and complete verification methods have been
proposed towards that end, but these typically suffer from severe
scalability limitations. Recent work has proposed enhancing such
verification techniques with abstraction-refinement capabilities,
which have been shown to boost scalability: instead of verifying a
large and complex network, the verifier constructs and then verifies
a much smaller network, whose correctness implies the correctness of
the original network. A shortcoming of such a scheme is that if
verifying the smaller network fails, the verifier needs to perform a
refinement step that increases the size of the network being
verified, and then start verifying the new network from scratch --
effectively "wasting" its earlier work on verifying the smaller
network. In this paper, we present an enhancement to
abstraction-based verification of neural networks, by using
residual reasoning: the process of utilizing information
acquired when verifying an abstract network, in order to expedite
the verification of a refined network. In essence, the method allows
the verifier to store information about parts of the search space in
which the refined network is guaranteed to behave correctly, and
allows it to focus on areas where bugs might be discovered. We
implemented our approach as an extension to the Marabou verifier,
and obtained promising results.
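To make the idea concrete, below is a minimal Python sketch of an abstraction-refinement loop with residual reasoning, as the abstract describes it. All names here (abstract, refine, verify, the record of already-proved regions) are illustrative placeholders, not Marabou's actual API.

```python
# Minimal sketch of abstraction-refinement with residual reasoning.
# `abstract`, `refine`, and `verify` are hypothetical callables, not
# Marabou's API; `verify` is assumed to report which parts of the
# search space (e.g., ReLU phase fixings) it proved safe.

def verify_with_residual_reasoning(network, prop, abstract, refine, verify):
    current = abstract(network)      # small over-approximation of `network`
    proved_safe = set()              # residual knowledge carried across rounds

    while True:
        # Skip sub-queries already proved safe on a previous abstraction;
        # soundness of the abstraction lets those proofs carry over.
        result = verify(current, prop, skip=proved_safe)
        if result.status == "UNSAT":
            return "SAFE"            # holds on the abstraction, hence on `network`
        if result.is_real_counterexample(network):
            return "UNSAFE", result.counterexample
        # Spurious counterexample: keep the regions shown safe so far,
        # then refine and re-verify only what is still in doubt.
        proved_safe |= result.safe_regions
        current = refine(current, result.counterexample)
```

The point of the cache is that refinement only grows the network; anything proved safe for an over-approximation need not be re-examined after refining.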
Related papers
- Expediting Neural Network Verification via Network Reduction [4.8621567234713305]
We propose a network reduction technique as a pre-processing method prior to verification.
The proposed method reduces neural networks by eliminating stable ReLU neurons and transforming the result into a sequential neural network.
We instantiate the reduction technique on state-of-the-art complete and incomplete verification tools.
arXiv Detail & Related papers (2023-08-07T06:23:24Z)
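As a rough illustration of the reduction idea above: given pre-activation bounds for a hidden layer (e.g., from interval arithmetic), stable ReLU neurons can be folded away before verification. The function below is a hedged sketch under those assumptions, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of eliminating "stable" ReLU neurons, assuming
# pre-activation bounds lb <= W1 @ x + b1 <= ub are already known.

def fold_stable_relus(W1, b1, W2, b2, lb, ub):
    """Shrink y = W2 @ relu(W1 @ x + b1) + b2 using neuron stability."""
    inactive = ub <= 0               # ReLU output is constantly zero
    active = lb >= 0                 # ReLU acts as the identity
    unstable = ~(inactive | active)  # only these need an explicit ReLU

    # Inactive neurons contribute nothing and are simply dropped.
    # Active neurons are affine, so merge them into the next layer:
    W_skip = W2[:, active] @ W1[active, :]
    b_skip = W2[:, active] @ b1[active] + b2

    # The reduced network computes:
    #   y = W2u @ relu(W1u @ x + b1u) + W_skip @ x + b_skip
    return W1[unstable, :], b1[unstable], W2[:, unstable], W_skip, b_skip
```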
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
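For context, interval bound propagation pushes an input box through the network layer by layer; a quantization-aware variant must also propagate bounds through the quantizer. Below is a hedged sketch of one such layer; the quantizer and its parameters are illustrative, and QA-IBP's training-time machinery is omitted.

```python
import numpy as np

def quantize(x, scale, qmin=-128, qmax=127):
    """Uniform quantizer; monotone, so it maps interval ends to interval ends."""
    return scale * np.clip(np.round(x / scale), qmin, qmax)

def ibp_quantized_layer(l, u, W, b, scale):
    """Propagate input bounds l <= x <= u through y = relu(quantize(W @ x + b))."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    low = W_pos @ l + W_neg @ u + b      # smallest possible pre-activation
    high = W_pos @ u + W_neg @ l + b     # largest possible pre-activation
    # The quantizer is monotone non-decreasing, so the bounds stay sound.
    low, high = quantize(low, scale), quantize(high, scale)
    return np.maximum(low, 0), np.maximum(high, 0)   # ReLU is monotone too
```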
- An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks [0.0]
We present the Cnn-Abs framework, which is aimed at the verification of convolutional networks.
The core of Cnn-Abs is an abstraction-refinement technique, which simplifies the verification problem.
Cnn-Abs can significantly boost the performance of a state-of-the-art verification engine, reducing runtime by 15.7% on average.
arXiv Detail & Related papers (2022-01-06T08:57:43Z)
- Online Verification of Deep Neural Networks under Domain or Weight Shift [2.512827436728378]
Existing verification methods are limited to relatively simple specifications and fixed networks.
We propose three types of techniques to accelerate the online verification of deep neural networks.
Experiment results show that our online verification algorithm is up to two orders of magnitude faster than existing verification algorithms.
arXiv Detail & Related papers (2021-06-24T02:38:27Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Scalable Verification of Quantized Neural Networks (Technical Report) [14.04927063847749]
We show that verifying the bit-exact implementation of quantized neural networks with bit-vector specifications is PSPACE-hard.
We propose three techniques for making SMT-based verification of quantized neural networks more scalable.
arXiv Detail & Related papers (2020-12-15T10:05:37Z)
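To illustrate the bit-vector setting, here is a hedged sketch that encodes a single toy quantized ReLU neuron as an exact SMT query using Z3's Python bindings; the weights, bit-width, input region, and property are made up for the example.

```python
from z3 import BitVec, BitVecVal, If, Solver, sat

WIDTH = 16  # toy bit-width; real QNNs fix a width per tensor
x0, x1 = BitVec("x0", WIDTH), BitVec("x1", WIDTH)

# Bit-exact toy neuron: y = relu(3*x0 - 2*x1 + 1), integer arithmetic
# with two's-complement wraparound, exactly as it would execute.
pre = BitVecVal(3, WIDTH) * x0 - BitVecVal(2, WIDTH) * x1 + BitVecVal(1, WIDTH)
y = If(pre > 0, pre, BitVecVal(0, WIDTH))  # Z3 bit-vector comparisons are signed

s = Solver()
s.add(x0 >= 0, x0 <= 10, x1 >= 0, x1 <= 10)  # input region
s.add(y > 25)                                # negate the property "y <= 25"
if s.check() == sat:
    print("counterexample:", s.model())      # e.g., x0 = 10, x1 = 0 gives y = 31
else:
    print("y <= 25 holds on the region")
```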
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
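As a generic stand-in for the iterative mask discovery described above: repeatedly zero out the smallest-magnitude surviving weights, retraining between rounds. This is a hedged sketch of magnitude pruning in general, not ESPN's specific algorithm.

```python
import numpy as np

def prune_step(weights, mask, fraction):
    """Zero out the smallest-magnitude weights that are still surviving."""
    surviving = np.abs(weights[mask])
    if surviving.size == 0:
        return mask
    cutoff = np.quantile(surviving, fraction)  # prune this fraction per round
    return mask & (np.abs(weights) > cutoff)

w = np.random.randn(1000)
m = np.ones_like(w, dtype=bool)
for _ in range(5):
    m = prune_step(w, m, fraction=0.3)         # retraining would go here
print(f"density after 5 rounds: {m.mean():.3f}")  # roughly 0.7**5 of weights remain
```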
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Verification of Neural Networks: Enhancing Scalability through Pruning [15.62342143633075]
We focus on enabling state-of-the-art verification tools to deal with neural networks of some practical interest.
We propose a new training pipeline based on network pruning with the goal of striking a balance between maintaining accuracy and robustness.
The results of our experiments with a portfolio of pruning algorithms and verification tools show that our approach is successful for the kind of networks we consider.
arXiv Detail & Related papers (2020-03-17T10:54:08Z)
- Resolution Adaptive Networks for Efficient Inference [53.04907454606711]
We propose a novel Resolution Adaptive Network (RANet), which is inspired by the intuition that low-resolution representations are sufficient for classifying "easy" inputs.
In RANet, the input images are first routed to a lightweight sub-network that efficiently extracts low-resolution representations.
High-resolution paths in the network maintain the capability to recognize the "hard" samples.
arXiv Detail & Related papers (2020-03-16T16:54:36Z)
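A minimal PyTorch sketch of the routing idea above: a cheap low-resolution path classifies first, and only inputs it is unsure about are sent through a full-resolution path. The architecture and confidence threshold are illustrative, not RANet's actual design; this is an inference-time sketch.

```python
import torch
import torch.nn as nn

class TwoResolutionNet(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold
        self.low_path = nn.Sequential(          # operates on 4x-downsampled input
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes))
        self.high_path = nn.Sequential(         # full resolution, more capacity
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):
        low = nn.functional.interpolate(x, scale_factor=0.25)
        logits = self.low_path(low)
        conf = logits.softmax(dim=-1).amax(dim=-1)
        hard = conf < self.threshold            # route only low-confidence inputs
        if hard.any():                          # "hard" samples pay for full resolution
            logits[hard] = self.high_path(x[hard])
        return logits

net = TwoResolutionNet().eval()
out = net(torch.randn(8, 3, 32, 32))            # easy images exit after the cheap path
```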
This list is automatically generated from the titles and abstracts of the papers on this site.