DeepAbstract: Neural Network Abstraction for Accelerating Verification
- URL: http://arxiv.org/abs/2006.13735v1
- Date: Wed, 24 Jun 2020 13:51:03 GMT
- Title: DeepAbstract: Neural Network Abstraction for Accelerating Verification
- Authors: Pranav Ashok and Vahid Hashemi and Jan Křetínský and Stefanie Mohr
- Abstract summary: We introduce an abstraction framework applicable to fully-connected feed-forward neural networks based on clustering of neurons that behave similarly on some inputs.
We show how the abstraction reduces the size of the network, while preserving its accuracy, and how verification results on the abstract network can be transferred back to the original network.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While abstraction is a classic tool for scaling up verification, it is
rarely used for verifying neural networks. However, it can help with the
still open task of scaling existing algorithms to state-of-the-art network
architectures. We introduce an abstraction framework applicable to
fully-connected feed-forward neural networks based on clustering of neurons
that behave similarly on some inputs. For the particular case of ReLU, we
additionally provide error bounds incurred by the abstraction. We show how the
abstraction reduces the size of the network, while preserving its accuracy, and
how verification results on the abstract network can be transferred back to the
original network.
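To make the clustering idea concrete, here is a minimal sketch of abstracting a single fully-connected ReLU layer. It assumes k-means clustering of neurons by their activation vectors on a set of sample inputs, with incoming weights averaged and outgoing weights summed per cluster; the paper's exact clustering and merge rules may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def abstract_relu_layer(W_in, b, W_out, X, n_clusters):
    """Merge similarly-behaving neurons of one hidden ReLU layer.

    W_in: (n, d) incoming weights; b: (n,) biases;
    W_out: (m, n) outgoing weights; X: (k, d) sample inputs.
    """
    A = np.maximum(X @ W_in.T + b, 0.0)           # (k, n) activations on samples
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit(A.T).labels_
    d, m = W_in.shape[1], W_out.shape[0]
    W_in_a = np.zeros((n_clusters, d))
    b_a = np.zeros(n_clusters)
    W_out_a = np.zeros((m, n_clusters))
    for c in range(n_clusters):
        idx = labels == c
        W_in_a[c] = W_in[idx].mean(axis=0)        # representative incoming weights
        b_a[c] = b[idx].mean()
        W_out_a[:, c] = W_out[:, idx].sum(axis=1) # preserve downstream weighted sums
    return W_in_a, b_a, W_out_a
```

Since ReLU is 1-Lipschitz, the activation error introduced by such a merge grows at most linearly with the operator norms of the downstream weight matrices, which is the kind of quantity layer-wise error bounds can be built from.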
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
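As a rough illustration of the idea (not the paper's construction), an MLP can be encoded as a directed graph whose nodes are neurons and whose edges carry the weights; `networkx` is used here for brevity:

```python
import networkx as nx

def mlp_to_graph(weight_mats, biases):
    """Encode an MLP as a directed graph: one node per neuron (bias stored
    as a node attribute), one edge per weight (weight as an edge attribute)."""
    g = nx.DiGraph()
    for layer, (W, b) in enumerate(zip(weight_mats, biases)):
        n_out, n_in = W.shape
        for j in range(n_out):
            g.add_node((layer + 1, j), bias=float(b[j]))
        for j in range(n_out):
            for i in range(n_in):
                g.add_edge((layer, i), (layer + 1, j), weight=float(W[j, i]))
    return g
```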
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Fully Automatic Neural Network Reduction for Formal Verification [8.017543518311196]
We introduce a fully automatic and sound reduction of neural networks using reachability analysis.
The soundness ensures that the verification of the reduced network entails the verification of the original network.
We show that our approach can reduce the number of neurons to a fraction of the original count while incurring only a minor outer-approximation error.
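The reachability side can be pictured with standard interval arithmetic; the sketch below propagates an input box through one ReLU layer (the paper's actual reduction and soundness argument are more involved):

```python
import numpy as np

def interval_relu_layer(W, b, lo, hi):
    """Propagate an input box [lo, hi] through ReLU(W x + b) using
    standard interval arithmetic (an over-approximation of the reachable set)."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)   # split positive/negative parts
    out_lo = Wp @ lo + Wn @ hi + b
    out_hi = Wp @ hi + Wn @ lo + b
    return np.maximum(out_lo, 0), np.maximum(out_hi, 0)
```

Neurons whose reachable intervals nearly coincide are natural candidates for a sound merge, at the cost of a small outer-approximation error.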
arXiv Detail & Related papers (2023-05-03T07:13:47Z)
- Towards Global Neural Network Abstractions with Locally-Exact Reconstruction [2.1915057426589746]
We propose Global Interval Neural Network Abstractions with Center-Exact Reconstruction (GINNACER)
Our novel abstraction technique produces sound over-approximation bounds over the whole input domain while guaranteeing exact reconstructions for any given local input.
Our experiments show that GINNACER is several orders of magnitude tighter than state-of-the-art global abstraction techniques, while being competitive with local ones.
arXiv Detail & Related papers (2022-10-21T15:48:22Z)
- Neural Network Verification using Residual Reasoning [0.0]
We present an enhancement to abstraction-based verification of neural networks that uses residual reasoning.
In essence, the method allows the verifier to store information about parts of the search space in which the refined network is guaranteed to behave correctly.
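A toy version of such bookkeeping, assuming input regions are axis-aligned boxes (hypothetical names, not the paper's data structures):

```python
class VerifiedRegionCache:
    """Store input boxes already proven safe so later refinement steps
    can skip them. Illustrative data structure only."""
    def __init__(self):
        self.safe_boxes = []  # list of (lower, upper) bound tuples

    def is_covered(self, lo, hi):
        """True if the box [lo, hi] lies inside some previously verified box."""
        return any(all(l <= a and b <= u for a, b, l, u in zip(lo, hi, L, U))
                   for (L, U) in self.safe_boxes)

    def mark_safe(self, lo, hi):
        self.safe_boxes.append((tuple(lo), tuple(hi)))
```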
arXiv Detail & Related papers (2022-08-05T10:39:04Z)
- An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks [0.0]
We present the Cnn-Abs framework, which is aimed at the verification of convolutional networks.
The core of Cnn-Abs is an abstraction-refinement technique, which simplifies the verification problem.
Cnn-Abs can significantly boost the performance of a state-of-the-art verification engine, reducing runtime by 15.7% on average.
arXiv Detail & Related papers (2022-01-06T08:57:43Z)
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that the likelihood ratio loss with interarrival-time probability assumptions can greatly improve model performance.
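For reference, a standard graph-convolution step of the Kipf-Welling form, which is the kind of layer the summary refers to (GCHP's exact variant may differ):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)
```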
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- Implicit recurrent networks: A novel approach to stationary input processing with recurrent neural networks in deep learning [0.0]
In this work, we introduce and test a novel implementation of recurrent neural networks into deep learning.
We provide an algorithm that implements backpropagation for an implicit implementation of recurrent networks.
A single-layer implicit recurrent network is able to solve the XOR problem, while a feed-forward network with monotonically increasing activation function fails at this task.
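An implicit recurrent layer of this kind can be evaluated by solving a fixed-point equation; a minimal sketch, assuming a tanh nonlinearity and plain fixed-point iteration (convergence requires the recurrent weights to be contractive):

```python
import numpy as np

def implicit_forward(W, U, b, x, tol=1e-6, max_iter=500):
    """Solve h = tanh(W h + U x + b) for a stationary input x by
    fixed-point iteration; assumes the map is a contraction."""
    h = np.zeros(W.shape[0])
    for _ in range(max_iter):
        h_new = np.tanh(W @ h + U @ x + b)
        if np.linalg.norm(h_new - h) < tol:
            break
        h = h_new
    return h
```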
arXiv Detail & Related papers (2020-10-20T18:55:32Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
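One way to realize learnable edge magnitudes on a complete DAG, sketched in PyTorch with hypothetical module names:

```python
import torch
import torch.nn as nn

class GatedEdges(nn.Module):
    """Complete DAG over n nodes; each edge i->j (i < j) carries a learnable
    gate, so connectivity is learned differentiably alongside the weights."""
    def __init__(self, n, dim):
        super().__init__()
        self.n = n
        self.gates = nn.Parameter(torch.ones(n, n))           # edge strengths
        self.ops = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n))

    def forward(self, x):
        outs = [x]
        for j in range(1, self.n):
            # aggregate gated outputs of all earlier nodes
            agg = sum(torch.sigmoid(self.gates[i, j]) * outs[i] for i in range(j))
            outs.append(torch.relu(self.ops[j](agg)))
        return outs[-1]
```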
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
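Iterative mask discovery can be caricatured as repeated magnitude pruning; the schedule below is illustrative, not ESPN's actual algorithm:

```python
import numpy as np

def iterative_mask(weights, target_sparsity, rounds=5):
    """Grow a binary mask by pruning the smallest-magnitude surviving
    weights a fraction at a time until target_sparsity is reached."""
    mask = np.ones_like(weights, dtype=bool)
    per_round = 1.0 - (1.0 - target_sparsity) ** (1.0 / rounds)
    for _ in range(rounds):
        alive = np.abs(weights[mask])
        k = int(per_round * alive.size)
        if k == 0:
            break
        thresh = np.partition(alive, k)[k]        # (k+1)-th smallest magnitude
        mask &= np.abs(weights) >= thresh
        # (in practice, the surviving weights would be retrained here)
    return mask
```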
arXiv Detail & Related papers (2020-06-28T23:09:27Z)