Neural Network Branch-and-Bound for Neural Network Verification
- URL: http://arxiv.org/abs/2107.12855v1
- Date: Tue, 27 Jul 2021 14:42:57 GMT
- Title: Neural Network Branch-and-Bound for Neural Network Verification
- Authors: Florian Jaeckle and Jingyue Lu and M. Pawan Kumar
- Abstract summary: We propose a novel machine learning framework that can be used for designing an effective branching strategy.
We learn two graph neural networks (GNNs) that both directly treat the network we want to verify as a graph input.
We show that our GNN models generalize well to harder properties on larger unseen networks.
- Score: 26.609606492971967
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Many available formal verification methods have been shown to be instances of
a unified Branch-and-Bound (BaB) formulation. We propose a novel machine
learning framework that can be used for designing an effective branching
strategy as well as for computing better lower bounds. Specifically, we learn
two graph neural networks (GNNs) that both directly treat the network we want to
verify as a graph input and perform forward-backward passes through the GNN
layers. We use one GNN to simulate the strong branching heuristic behaviour and
another to compute a feasible dual solution of the convex relaxation, thereby
providing a valid lower bound.
We introduce a new verification dataset that is more challenging than those
used in the literature, thereby providing an effective alternative for testing
algorithmic improvements for verification. Whilst using just one of the GNNs
leads to a reduction in verification time, we get optimal performance when
combining the two GNN approaches. Our combined framework achieves a 50%
reduction in both the number of branches and the time required for verification
on various convolutional networks when compared to several state-of-the-art
verification methods. In addition, we show that our GNN models generalize well
to harder properties on larger unseen networks.
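To make the role of the two learned components concrete, here is a minimal Python sketch of a generic BaB verification loop in which one network scores branching decisions and another supplies valid lower bounds. Everything here is a hypothetical stand-in, not the authors' code: `gnn_branch_score` plays the part of the branching GNN that imitates strong branching, `dual_lower_bound` plays the part of the bounding GNN, and the domain/split interfaces are invented.

```python
import heapq

# Minimal BaB sketch, assuming the property holds iff min_x f(x) > 0 over
# the input domain. `dual_lower_bound` stands in for the bounding GNN (a
# feasible dual solution yields a valid lower bound); `gnn_branch_score`
# stands in for the branching GNN. Both, and the domain interface with
# `unfixed_relus()`, are hypothetical.
def verify(root, gnn_branch_score, dual_lower_bound, upper_bound, split,
           max_branches=10_000):
    frontier = [(dual_lower_bound(root), 0, root)]  # min-heap on lower bound
    tie = 1
    for _ in range(max_branches):
        lb, _, dom = heapq.heappop(frontier)
        if lb > 0:
            return "verified"        # weakest remaining subdomain is safe
        # Branch on the unfixed ReLU that the branching GNN scores highest.
        relu = max(dom.unfixed_relus(), key=lambda r: gnn_branch_score(dom, r))
        for child in split(dom, relu):       # two subproblems: ReLU on / off
            if upper_bound(child) <= 0:
                return "falsified"   # a concrete counterexample exists here
            child_lb = dual_lower_bound(child)
            if child_lb <= 0:        # keep only still-undecided subdomains
                heapq.heappush(frontier, (child_lb, tie, child))
                tie += 1
        if not frontier:
            return "verified"        # every subdomain was pruned as safe
    return "timeout"
```

Note that the pruning test stays sound regardless of how good the learned bound is: per the abstract, the bounding GNN outputs a feasible dual solution of the convex relaxation, and any feasible dual solution lower-bounds the true minimum.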
Related papers
- Neural Network Verification with Branch-and-Bound for General Nonlinearities [63.39918329535165]
Branch-and-bound (BaB) is among the most effective techniques for neural network (NN) verification.
We develop a general framework, named GenBaB, to conduct BaB on general nonlinearities to verify NNs with general architectures.
We demonstrate the effectiveness of GenBaB in verifying a wide range of NNs, including NNs with activation functions such as Sigmoid, Tanh, Sine and GeLU.
arXiv Detail & Related papers (2024-05-31T17:51:07Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks [44.44006029119672]
We propose a mixed integer programming formulation for BNN verification.
We demonstrate our approach by verifying properties of BNNs trained on the MNIST dataset and an aircraft collision avoidance controller.
arXiv Detail & Related papers (2022-03-11T01:11:29Z)
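The MIP entry above encodes BNN behaviour as integer constraints. The sketch below shows one standard way such an encoding can look, a big-M trick for a single sign neuron, written with the PuLP modelling library; the weights, box bounds and query are invented here, and the paper's actual formulation may differ.

```python
# Toy big-M encoding of one binarized neuron y = sign(w.x + b), written
# with PuLP (pip install pulp). Illustrative sketch only, not the paper's
# formulation; weights, bounds and the query are invented.
from pulp import LpProblem, LpVariable, LpMinimize, LpStatus, lpSum, value

w, b = [1, -1, 1], 0.5   # binarized weights in {-1, +1} plus a bias
M = 10.0                 # any constant exceeding max |w.x + b| over the box

prob = LpProblem("bnn_neuron", LpMinimize)
x = [LpVariable(f"x{i}", lowBound=-1, upBound=1) for i in range(3)]  # input box
z = LpVariable("z", cat="Binary")   # indicator: z = 1  <=>  w.x + b >= 0
prob += lpSum(x)                    # arbitrary objective; we test feasibility

pre = lpSum(wi * xi for wi, xi in zip(w, x)) + b
prob += pre <= M * z                # z = 0 forces the pre-activation <= 0
prob += pre >= -M * (1 - z)         # z = 1 forces the pre-activation >= 0
# The neuron's output is then y = 2z - 1, in {-1, +1}.

prob += z == 1                      # query: can the neuron ever fire?
prob.solve()
print(LpStatus[prob.status], [value(xi) for xi in x])
```

An "Infeasible" status would prove the neuron can never fire on this input box; MIP-based verifiers phrase whole-network properties by chaining such per-neuron constraints layer by layer.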
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNN using Vector Quantization (VQ) without compromising performance.
Our framework avoids the "neighbor explosion" problem of GNNs by using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
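The VQ-GNN entry above compresses neighbors into a small codebook so a node aggregates at most K cluster messages instead of one per neighbor. Below is a loose numpy illustration of that idea with random codewords standing in for learned quantization; the sizes, toy graph, and one-shot assignment are all invented.

```python
# Loose numpy illustration of VQ-style message passing: replace each
# neighbor's representation by its nearest of K codewords, so a node
# aggregates K cluster messages instead of one message per neighbor.
import numpy as np

rng = np.random.default_rng(0)
N, d, K = 1000, 16, 8                          # nodes, feature dim, codebook size
H = rng.normal(size=(N, d))                    # current node representations
A = (rng.random((N, N)) < 0.01).astype(float)  # toy adjacency matrix

codebook = H[rng.choice(N, K, replace=False)]  # stand-in for learned codewords
assign = np.argmin(((H[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)

# Exact convolution A @ H touches all N neighbors per node; the quantized
# version only needs each node's total edge weight into every cluster.
R = np.zeros((N, K))
for k in range(K):
    R[:, k] = A[:, assign == k].sum(axis=1)    # low-rank surrogate for A
H_vq = R @ codebook                            # K messages instead of N
err = np.linalg.norm(A @ H - H_vq) / np.linalg.norm(A @ H)
print(f"relative aggregation error: {err:.3f}")
```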
- Pruning and Slicing Neural Networks using Formal Verification [0.2538209532048866]
Deep neural networks (DNNs) play an increasingly important role in various computer systems.
To create these networks, engineers typically specify a desired topology and then use an automated training algorithm to select the network's weights.
Here, we propose to address this challenge by harnessing recent advances in DNN verification.
arXiv Detail & Related papers (2021-05-28T07:53:50Z)
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNNs) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that introduces some degree of bias in order to reduce variance and avoid unstable, possibly unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
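The bandit-sampler entry above frames each neighbor as an arm whose payoff reflects how useful its message is. The sketch below runs a generic UCB1 loop over a node's neighbors; the reward function is an invented stand-in, not the paper's variance-reducing biased reward.

```python
# Generic UCB1 sketch of neighbor sampling as a multi-armed bandit. The
# reward function is invented; the paper's contribution is a deliberately
# biased reward with near-optimal regret, which is not reproduced here.
import math
import random

def ucb_sample_neighbors(neighbors, reward_fn, rounds=200, c=1.4):
    counts = {v: 0 for v in neighbors}
    totals = {v: 0.0 for v in neighbors}
    for t in range(1, rounds + 1):
        def score(v):  # mean reward plus an exploration bonus
            if counts[v] == 0:
                return float("inf")            # try every neighbor once
            return totals[v] / counts[v] + c * math.sqrt(math.log(t) / counts[v])
        arm = max(neighbors, key=score)
        counts[arm] += 1
        totals[arm] += reward_fn(arm)
    return counts   # pick frequencies concentrate on high-reward neighbors

random.seed(0)
quality = {v: random.random() for v in range(10)}        # hidden neighbor value
picks = ucb_sample_neighbors(list(range(10)),
                             lambda v: quality[v] + random.gauss(0, 0.1))
print("most sampled:", max(picks, key=picks.get),
      "truly best:", max(quality, key=quality.get))
```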
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have recently made great progress in learning from graph-structured data in practice.
We provide a theoretically grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Efficient Exact Verification of Binarized Neural Networks [15.639601066641099]
Binarized Neural Networks (BNNs) provide comparable robustness and allow exact and significantly more efficient verification.
We present a new system, EEV, for efficient and exact verification of BNNs.
We demonstrate the effectiveness of EEV by presenting the first exact verification results for L-inf-bounded adversarial robustness of nontrivial convolutional BNNs.
arXiv Detail & Related papers (2020-05-07T16:34:30Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.