Verification of Neural Networks: Enhancing Scalability through Pruning
- URL: http://arxiv.org/abs/2003.07636v1
- Date: Tue, 17 Mar 2020 10:54:08 GMT
- Title: Verification of Neural Networks: Enhancing Scalability through Pruning
- Authors: Dario Guidotti and Francesco Leofante and Luca Pulina and Armando
Tacchella
- Abstract summary: We focus on enabling state-of-the-art verification tools to deal with neural networks of some practical interest.
We propose a new training pipeline based on network pruning, with the goal of striking a balance between maintaining accuracy and robustness and making the resulting networks amenable to formal analysis.
The results of our experiments with a portfolio of pruning algorithms and verification tools show that our approach is successful for the kind of networks we consider.
- Score: 15.62342143633075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Verification of deep neural networks has witnessed a recent surge of
interest, fueled by success stories in diverse domains and by growing concerns
about safety and security in envisaged applications. Complexity and sheer size
of such networks are challenging for automated formal verification techniques
which, on the other hand, could ease the adoption of deep networks in safety-
and security-critical contexts.
In this paper we focus on enabling state-of-the-art verification tools to
deal with neural networks of some practical interest. We propose a new training
pipeline based on network pruning with the goal of striking a balance between
maintaining accuracy and robustness while making the resulting networks
amenable to formal analysis. The results of our experiments with a portfolio of
pruning algorithms and verification tools show that our approach is successful
for the kind of networks we consider and for some combinations of pruning and
verification techniques, thus bringing deep neural networks closer to the reach
of formally-grounded methods.
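A minimal sketch of a prune-then-fine-tune pipeline in the spirit of the abstract is shown below; the architecture, sparsity level, dummy data, and the use of PyTorch's built-in L1 magnitude pruning are illustrative assumptions, not the authors' exact setup or portfolio of pruning algorithms.

```python
# Sketch of a prune-then-fine-tune pipeline (illustrative settings; the
# paper's portfolio of pruning algorithms and hyperparameters is not
# reproduced here).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small fully connected ReLU network: the kind of model most verification
# tools can handle once it is sparse enough.
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
x, y = torch.randn(256, 784), torch.randint(0, 10, (256,))  # dummy data

# One-shot magnitude pruning: drop the 80% smallest weights in each layer.
for layer in (model[0], model[2]):
    prune.l1_unstructured(layer, name="weight", amount=0.8)

# Fine-tune with the pruning masks fixed to recover lost accuracy.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Make the sparsity permanent; the exported weights now contain mostly zeros,
# which is the kind of reduction that can make formal analysis more tractable.
for layer in (model[0], model[2]):
    prune.remove(layer, "weight")
```

A verification tool can then be run on the sparsified network; how much the sparsity actually helps depends on the specific pruning/verifier combination, as the experiments in the paper indicate.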
Related papers
- Advancing Security in AI Systems: A Novel Approach to Detecting
Backdoors in Deep Neural Networks [3.489779105594534]
Backdoors in deep neural networks (DNNs) can be exploited by malicious actors, including through cloud services used for data processing.
Our approach leverages advanced tensor decomposition algorithms to meticulously analyze the weights of pre-trained DNNs and distinguish between backdoored and clean models.
This advancement enhances the security of deep learning and AI in networked systems, providing essential cybersecurity against evolving threats in emerging technologies.
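The decomposition algorithms themselves are not described in this summary; purely as an illustration of the general idea (decomposing pre-trained weight tensors and comparing their spectral signatures against known-clean models), a sketch might look as follows, where the plain SVD, the five leading singular values, and the z-score threshold are all placeholder assumptions.

```python
# Illustration only: decompose each layer's weight tensor (here with a plain
# SVD after unfolding) and flag a model whose spectral signature deviates
# strongly from known-clean reference models. The paper's actual tensor
# decomposition algorithms and thresholds are not reproduced.
import numpy as np

def spectral_signature(layer_weights, k=5):
    """Normalized leading singular values of each unfolded weight tensor."""
    parts = []
    for w in layer_weights:
        mat = w.reshape(w.shape[0], -1)                  # unfold to a matrix
        s = np.linalg.svd(mat, compute_uv=False)
        parts.append(s[:k] / (s.sum() + 1e-12))
    return np.concatenate(parts)

def looks_backdoored(candidate, clean_models, z_threshold=3.0):
    """Flag the candidate if its signature is an outlier among clean models."""
    refs = np.stack([spectral_signature(m) for m in clean_models])
    mu, std = refs.mean(axis=0), refs.std(axis=0) + 1e-12
    z = np.abs((spectral_signature(candidate) - mu) / std)
    return bool(z.max() > z_threshold)

# Toy usage with random weights standing in for real pre-trained models.
rng = np.random.default_rng(0)
make = lambda: [rng.normal(size=(16, 8)), rng.normal(size=(10, 16))]
print(looks_backdoored(make(), [make() for _ in range(5)]))
```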
arXiv Detail & Related papers (2024-03-13T03:10:11Z)
- Expediting Neural Network Verification via Network Reduction [4.8621567234713305]
We propose a network reduction technique as a pre-processing method prior to verification.
The proposed method reduces neural networks by eliminating stable ReLU neurons and transforming the remaining network into a sequential neural network.
We instantiate the reduction technique on state-of-the-art complete and incomplete verification tools.
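A rough sketch of the underlying notion of a stable ReLU neuron, using plain interval arithmetic over an input box; the two-layer setting and the bounds below are illustrative, not the paper's reduction procedure.

```python
# Sketch: identify "stable" ReLU neurons over an input box using interval
# arithmetic. Stably-inactive neurons can be deleted outright; stably-active
# ones behave linearly and can be folded into the next layer. Dimensions and
# bounds are illustrative.
import numpy as np

def interval_linear(W, b, lo, hi):
    """Propagate an input box [lo, hi] through x -> Wx + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # first layer
lo, hi = -np.ones(4), np.ones(4)                       # input region (a box)

pre_lo, pre_hi = interval_linear(W1, b1, lo, hi)
stably_inactive = pre_hi <= 0     # ReLU output is always 0: neuron removable
stably_active = pre_lo >= 0       # ReLU acts as identity: can be merged away
unstable = ~(stably_inactive | stably_active)
print(stably_inactive.sum(), stably_active.sum(), unstable.sum())
```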
arXiv Detail & Related papers (2023-08-07T06:23:24Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
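A minimal sketch of certified training in the IBP family, without the quantization-aware part: propagate an L-infinity box through the network and train on the worst-case logits. The model, perturbation radius, and data below are illustrative assumptions.

```python
# Sketch of a certified-training loss in the IBP family: propagate an
# L_inf box through the network and train on the worst-case logits.
# The quantization-aware part of QA-IBP is not modelled here.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ibp_bounds(layers, lo, hi):
    """Interval bound propagation through Linear and ReLU layers."""
    for layer in layers:
        if isinstance(layer, nn.Linear):
            mid, rad = (lo + hi) / 2, (hi - lo) / 2
            mid = mid @ layer.weight.T + layer.bias
            rad = rad @ layer.weight.abs().T
            lo, hi = mid - rad, mid + rad
        elif isinstance(layer, nn.ReLU):
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
    return lo, hi

model = nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10))
x, y = torch.randn(16, 784), torch.randint(0, 10, (16,))
eps = 0.01                                   # L_inf perturbation radius

lo, hi = ibp_bounds(list(model), x - eps, x + eps)

# Worst case for the margin: lower bound on the true class, upper elsewhere.
worst = hi.clone()
worst[torch.arange(len(y)), y] = lo[torch.arange(len(y)), y]
robust_loss = F.cross_entropy(worst, y)
robust_loss.backward()                       # differentiable like any loss
```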
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Neural Network Verification using Residual Reasoning [0.0]
We present an enhancement to abstraction-based verification of neural networks based on residual reasoning.
In essence, the method allows the verifier to store information about parts of the search space in which the refined network is guaranteed to behave correctly.
arXiv Detail & Related papers (2022-08-05T10:39:04Z)
- Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective [7.821877331499578]
Adversarial robustness, which concerns the reliability of a neural network when dealing with maliciously manipulated inputs, is one of the hottest topics in security and machine learning.
We survey existing literature in adversarial robustness verification for neural networks and collect 39 diversified research works across machine learning, security, and software engineering domains.
We provide a taxonomy from a formal verification perspective for a comprehensive understanding of this topic.
arXiv Detail & Related papers (2022-06-24T11:53:12Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
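An illustrative sketch of a coverage-style monitor (record per-neuron activation ranges on trusted data, then flag inputs that leave them); the concrete coverage criteria of the paper are not reproduced here, and the layer and data are placeholders.

```python
# Sketch of a coverage-style runtime monitor: record per-neuron activation
# ranges on trusted data, then flag inputs whose hidden activations fall
# outside those ranges. The paper's concrete coverage criteria differ.
import torch
import torch.nn as nn

feature = nn.Sequential(nn.Linear(20, 16), nn.ReLU())   # monitored layer

train_x = torch.randn(512, 20)                           # trusted data (dummy)
with torch.no_grad():
    acts = feature(train_x)
act_lo, act_hi = acts.min(dim=0).values, acts.max(dim=0).values

def out_of_coverage(x, margin=0.0):
    """True where some neuron is driven outside its recorded range."""
    with torch.no_grad():
        a = feature(x)
    return ((a < act_lo - margin) | (a > act_hi + margin)).any(dim=1)

print(out_of_coverage(torch.randn(8, 20) * 5.0))         # exaggerated inputs
```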
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
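A minimal sketch of attaching a learnable gate to every edge of a complete connection graph between computational nodes; the sigmoid gating and the linear node operations are illustrative choices, not the paper's exact formulation.

```python
# Sketch: nodes of a network connected as a complete DAG, with a learnable
# scalar per edge gating how strongly each earlier node feeds a later one.
# The sigmoid gating and the linear "node op" are illustrative choices.
import torch
import torch.nn as nn

class DenseTopology(nn.Module):
    def __init__(self, num_nodes=4, dim=16):
        super().__init__()
        self.ops = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_nodes)])
        # One learnable weight per directed edge j -> i with j < i.
        self.edge = nn.Parameter(torch.zeros(num_nodes, num_nodes))

    def forward(self, x):
        outs = [x]
        for i, op in enumerate(self.ops):
            gates = torch.sigmoid(self.edge[i, : i + 1])      # edges into node i
            agg = sum(g * h for g, h in zip(gates, outs))     # weighted aggregation
            outs.append(torch.relu(op(agg)))
        return outs[-1]

net = DenseTopology()
y = net(torch.randn(8, 16))
y.sum().backward()            # edge weights receive gradients like any parameter
```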
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
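An illustrative sketch of iterative mask discovery (alternate brief retraining with pruning a fraction of the surviving weights); the rounds, fractions, and toy single-layer model are placeholders rather than ESPN's actual schedule.

```python
# Illustrative iterative mask discovery: alternate brief retraining with
# pruning half of the surviving weights each round. Rounds, fractions, and
# the toy single-layer model are placeholders, not ESPN's actual settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

layer = nn.Linear(100, 10)
mask = torch.ones_like(layer.weight)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x, y = torch.randn(64, 100), torch.randint(0, 10, (64,))

for _ in range(5):                       # 5 rounds -> roughly 97% of weights removed
    for _ in range(20):                  # brief retraining under the current mask
        opt.zero_grad()
        logits = F.linear(x, layer.weight * mask, layer.bias)
        F.cross_entropy(logits, y).backward()
        opt.step()
    # Prune the smallest-magnitude half of the weights that are still alive.
    alive = (layer.weight.data.abs() * mask)[mask.bool()]
    threshold = alive.median()
    mask = mask * (layer.weight.data.abs() > threshold).float()

print(f"final sparsity: {1 - mask.mean().item():.2%}")
```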
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
- HYDRA: Pruning Adversarially Robust Neural Networks [58.061681100058316]
Deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size.
We propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.
We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy, simultaneously.
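An illustrative sketch of letting a robust objective guide pruning: learn an importance score per connection against a one-step adversarial loss with frozen weights, then keep the top-scoring connections. The FGSM attack, the single-layer setting, and the 20% keep ratio are simplifying assumptions, not HYDRA's exact formulation.

```python
# Sketch of robustness-aware pruning: learn an importance score per weight by
# minimizing a one-step adversarial (FGSM) loss with frozen weights, then keep
# the highest-scoring connections. A simplification, not HYDRA's formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

layer = nn.Linear(50, 10)
layer.weight.requires_grad_(False)                    # weights stay frozen
scores = nn.Parameter(torch.ones_like(layer.weight))  # importance per weight
opt = torch.optim.Adam([scores], lr=0.01)
x, y = torch.randn(128, 50), torch.randint(0, 10, (128,))
eps = 0.1

for _ in range(30):
    # One-step FGSM perturbation of the inputs.
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(F.linear(x_adv, layer.weight * scores, layer.bias), y)
    (grad_x,) = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad_x.sign()).detach()

    # Update the importance scores against the adversarial loss.
    opt.zero_grad()
    F.cross_entropy(F.linear(x_adv, layer.weight * scores, layer.bias), y).backward()
    opt.step()

# Keep only the 20% of connections with the largest learned importance.
k = int(0.2 * scores.numel())
threshold = scores.detach().abs().flatten().topk(k).values.min()
pruned_weight = layer.weight * (scores.detach().abs() >= threshold).float()
```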
arXiv Detail & Related papers (2020-02-24T19:54:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.