Expediting Neural Network Verification via Network Reduction
- URL: http://arxiv.org/abs/2308.03330v2
- Date: Mon, 14 Aug 2023 09:16:34 GMT
- Title: Expediting Neural Network Verification via Network Reduction
- Authors: Yuyi Zhong, Ruiwei Wang, Siau-Cheng Khoo
- Abstract summary: We propose a network reduction technique as a pre-processing method prior to verification.
The proposed method reduces neural networks by eliminating stable ReLU neurons and transforming the network into a sequential neural network.
We instantiate the reduction technique on state-of-the-art complete and incomplete verification tools.
- Score: 4.8621567234713305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A wide range of verification methods have been proposed to verify the safety
properties of deep neural networks ensuring that the networks function
correctly in critical applications. However, many well-known verification tools
still struggle with complicated network architectures and large network sizes.
In this work, we propose a network reduction technique as a pre-processing
method prior to verification. The proposed method reduces neural networks by
eliminating stable ReLU neurons and transforming them into a sequential neural
network consisting of ReLU and Affine layers, which can be handled by most
verification tools. We instantiate the reduction technique on state-of-the-art
complete and incomplete verification tools, including alpha-beta-CROWN, VeriNet
and PRIMA. Our experiments on a large set of
benchmarks indicate that the proposed technique can significantly reduce neural
networks and speed up existing verification tools. Furthermore, the
experimental results also show that network reduction can improve the
applicability of existing verification tools to many networks by reducing
them into sequential neural networks.
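To make the reduction step concrete, below is a minimal numpy sketch of eliminating stable ReLU neurons from one hidden layer, assuming interval pre-activation bounds are already available (here computed for inputs in [-1, 1]^n). The variable names, the bound computation, and the toy network are illustrative assumptions; the paper's actual construction additionally restructures the result into a purely sequential ReLU/Affine network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy block: out = W2 @ relu(W1 @ x + b1) + b2, for x in [-1, 1]^n_in.
n_in, n_hid, n_out = 4, 8, 3
W1 = 0.2 * rng.normal(size=(n_hid, n_in))
b1 = np.linspace(-3.0, 3.0, n_hid)   # spread biases so both stable cases occur
W2, b2 = rng.normal(size=(n_out, n_hid)), rng.normal(size=n_out)

# Interval pre-activation bounds over the input box [-1, 1]^n_in.
lo = b1 - np.abs(W1).sum(axis=1)
up = b1 + np.abs(W1).sum(axis=1)

dead = up <= 0.0        # stably inactive: relu output is always 0
act = lo >= 0.0         # stably active:   relu acts as the identity
uns = ~(dead | act)     # unstable:        relu must be kept

# Keep only unstable neurons in the ReLU; fold stably active neurons into
# an affine correction (dead neurons simply disappear).
W1r, b1r, W2r = W1[uns], b1[uns], W2[:, uns]
W_lin = W2[:, act] @ W1[act]
b_lin = W2[:, act] @ b1[act] + b2

def original(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def reduced(x):
    return W2r @ np.maximum(W1r @ x + b1r, 0.0) + W_lin @ x + b_lin

x = rng.uniform(-1.0, 1.0, size=n_in)
assert np.allclose(original(x), reduced(x))   # semantics preserved on the box
print(f"hidden ReLU neurons: {n_hid} -> {int(uns.sum())}")
```

The assert checks semantic equivalence on the input region; in the paper the same idea is applied layer by layer before invoking tools such as alpha-beta-CROWN, VeriNet, or PRIMA.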
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Fully Automatic Neural Network Reduction for Formal Verification [8.017543518311196]
- Fully Automatic Neural Network Reduction for Formal Verification [8.017543518311196]
We introduce a fully automatic and sound reduction of neural networks using reachability analysis.
The soundness ensures that the verification of the reduced network entails the verification of the original network.
We show that our approach can reduce the number of neurons to a fraction of the original count with only minor outer-approximation error.
arXiv Detail & Related papers (2023-05-03T07:13:47Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Neural Network Verification using Residual Reasoning [0.0]
- Neural Network Verification using Residual Reasoning [0.0]
We present an enhancement to abstraction-based verification of neural networks that uses residual reasoning.
In essence, the method allows the verifier to store information about parts of the search space in which the refined network is guaranteed to behave correctly.
arXiv Detail & Related papers (2022-08-05T10:39:04Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - Local Critic Training for Model-Parallel Learning of Deep Neural
Networks [94.69202357137452]
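As background for the pruning entry above, here is a generic unstructured magnitude-pruning sketch, zeroing the smallest-magnitude weights to a target sparsity. This is the textbook baseline, not the paper's certification-aware pruning schedules.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured)."""
    k = int(sparsity * W.size)
    if k == 0:
        return W.copy()
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]  # k-th smallest |w|
    return np.where(np.abs(W) <= thresh, 0.0, W)

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
Wp = magnitude_prune(W, sparsity=0.9)
print(f"nonzeros: {np.count_nonzero(W)} -> {np.count_nonzero(Wp)}")
```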
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z) - Scalable Verification of Quantized Neural Networks (Technical Report) [14.04927063847749]
We show that verifying bit-exact implementations of quantized neural networks against bit-vector specifications is PSPACE-hard.
We propose three techniques for making SMT-based verification of quantized neural networks more scalable.
arXiv Detail & Related papers (2020-12-15T10:05:37Z) - Towards Repairing Neural Networks Correctly [6.600380575920419]
We propose a runtime verification method to ensure the correctness of neural networks.
Experimental results show that our approach effectively generates neural networks that are guaranteed to satisfy the properties.
arXiv Detail & Related papers (2020-12-03T12:31:07Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z) - Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
Binarization inevitably causes severe information loss and, even worse, its discontinuity makes the deep network difficult to optimize.
We present a survey of these algorithms, mainly categorized into native solutions that directly conduct binarization, and optimized ones that use techniques such as minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z) - Verification of Neural Networks: Enhancing Scalability through Pruning [15.62342143633075]
- Verification of Neural Networks: Enhancing Scalability through Pruning [15.62342143633075]
We focus on enabling state-of-the-art verification tools to deal with neural networks of some practical interest.
We propose a new training pipeline based on network pruning with the goal of striking a balance between maintaining accuracy and robustness.
The results of our experiments with a portfolio of pruning algorithms and verification tools show that our approach is successful for the kind of networks we consider.
arXiv Detail & Related papers (2020-03-17T10:54:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.