Complete Verification via Multi-Neuron Relaxation Guided
Branch-and-Bound
- URL: http://arxiv.org/abs/2205.00263v1
- Date: Sat, 30 Apr 2022 13:12:33 GMT
- Title: Complete Verification via Multi-Neuron Relaxation Guided
Branch-and-Bound
- Authors: Claudio Ferrari, Mark Niklas Müller, Nikola Jovanovic, Martin Vechev
- Abstract summary: We present a novel complete verifier which combines the strengths of both paradigms.
It uses multi-neuron relaxations to drastically reduce the number of subproblems generated during the BaB process.
An extensive evaluation demonstrates that our verifier achieves a new state-of-the-art on both established benchmarks and networks with significantly higher accuracy than previously considered.
- Score: 2.896192909215469
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-of-the-art neural network verifiers are fundamentally based on one of
two paradigms: either encoding the whole verification problem via tight
multi-neuron convex relaxations or applying a Branch-and-Bound (BaB) procedure
leveraging imprecise but fast bounding methods on a large number of easier
subproblems. The former can capture complex multi-neuron dependencies but
sacrifices completeness due to the inherent limitations of convex relaxations.
The latter enables complete verification but becomes increasingly ineffective
on larger and more challenging networks. In this work, we present a novel
complete verifier which combines the strengths of both paradigms: it leverages
multi-neuron relaxations to drastically reduce the number of subproblems
generated during the BaB process and an efficient GPU-based dual optimizer to
solve the remaining ones. An extensive evaluation demonstrates that our
verifier achieves a new state-of-the-art on both established benchmarks and
networks with significantly higher accuracy than previously considered. The
latter result (up to 28% certification gains) indicates meaningful progress
towards creating verifiers that can handle practically relevant networks.
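To make the BaB paradigm concrete, here is a minimal sketch of input-splitting branch-and-bound verification. It is not the paper's method: the verifier described above uses tight multi-neuron relaxations and a GPU-based dual optimizer as its bounding step, whereas this sketch substitutes plain interval arithmetic as the "imprecise but fast" bound. All function names and the toy network are illustrative assumptions.

```python
import numpy as np

def interval_bounds(layers, lo, hi):
    """Propagate an input box [lo, hi] through affine + ReLU layers
    with interval arithmetic (the cheap, imprecise bounding step)."""
    for i, (W, b) in enumerate(layers):
        mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
        mid, rad = W @ mid + b, np.abs(W) @ rad
        lo, hi = mid - rad, mid + rad
        if i < len(layers) - 1:             # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def bab_verify(layers, lo, hi, threshold, depth=0, max_depth=20):
    """Return True iff the scalar output provably stays <= threshold
    on the whole input box, via branch-and-bound over the input."""
    out_lo, out_hi = interval_bounds(layers, lo, hi)
    if out_hi[0] <= threshold:
        return True                          # subproblem verified
    if out_lo[0] > threshold:
        return False                         # property provably violated
    if depth >= max_depth:
        return False                         # inconclusive: give up soundly
    d = int(np.argmax(hi - lo))              # branch on widest input dimension
    m = (lo[d] + hi[d]) / 2.0
    left_hi, right_lo = hi.copy(), lo.copy()
    left_hi[d], right_lo[d] = m, m
    return (bab_verify(layers, lo, left_hi, threshold, depth + 1, max_depth)
            and bab_verify(layers, right_lo, hi, threshold, depth + 1, max_depth))

# Toy network computing |x| = ReLU(x) + ReLU(-x); verify |x| <= 1 on x in [-1, 1].
layers = [(np.array([[1.0], [-1.0]]), np.zeros(2)),
          (np.array([[1.0, 1.0]]), np.zeros(1))]
print(bab_verify(layers, np.array([-1.0]), np.array([1.0]), 1.0))   # True
```

Note how the one-shot interval bound on the full box is too loose to verify the toy property (it yields an upper bound of 2), but a single split at x = 0 resolves both halves; tighter bounding steps, such as the multi-neuron relaxations above, reduce the number of such splits.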
Related papers
- DualApp: Tight Over-Approximation for Neural Network Robustness
Verification via Under-Approximation [17.924507519230424]
We propose a novel under-approximation-guided approach to define tight over-approximations and two complementary under-approximation algorithms.
The overestimated domain guarantees soundness, while the underestimated one guides tightness.
We implement our approach into a tool called DualApp and extensively evaluate it on a benchmark of 84 collected and trained neural networks.
arXiv Detail & Related papers (2022-11-21T05:09:34Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Provably Tightest Linear Approximation for Robustness Verification of Sigmoid-like Neural Networks [22.239149433169747]
The robustness of deep neural networks is crucial to modern AI-enabled systems.
Due to their non-linearity, Sigmoid-like neural networks have been adopted in a wide range of applications.
arXiv Detail & Related papers (2022-08-21T12:07:36Z)
- On Optimizing Back-Substitution Methods for Neural Network Verification [1.4394939014120451]
We present an approach for making back-substitution produce tighter bounds.
Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques.
arXiv Detail & Related papers (2022-08-16T11:16:44Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting [70.62923754433461]
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
arXiv Detail & Related papers (2021-06-16T20:43:49Z)
- Scaling the Convex Barrier with Sparse Dual Algorithms [141.4085318878354]
We present two novel dual algorithms for tight and efficient neural network bounding.
Both methods recover the strengths of the new relaxation: tightness and a linear separation oracle.
We can obtain better bounds than off-the-shelf solvers in only a fraction of their running time.
arXiv Detail & Related papers (2021-01-14T19:45:17Z)
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
- Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs [2.1320960069210484]
Neural networks are commonly used in safety-critical real-world applications.
Proving that no adversarial examples exist, or else providing a concrete instance, is therefore crucial to ensure safe applications.
We provide proofs for tight upper and lower bounds on max-pooling layers in convolutional networks.
arXiv Detail & Related papers (2020-06-16T10:00:33Z)
- ReluDiff: Differential Verification of Deep Neural Networks [8.601847909798165]
We develop a new method for differential verification of two closely related networks.
We exploit structural and behavioral similarities of the two networks to more accurately bound the difference between their output neurons.
Our experiments show that, compared to state-of-the-art verification tools, our method can achieve orders-of-magnitude speedup.
arXiv Detail & Related papers (2020-01-10T20:47:22Z)
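The core idea behind differential verification can be illustrated on a single affine layer: instead of bounding each network independently and subtracting (which ignores their correlation), one bounds the "difference network" directly. This is only an illustrative sketch under assumed names; it does not implement ReluDiff itself, whose main contribution is carrying such difference bounds through ReLU layers.

```python
import numpy as np

def affine_interval(W, b, lo, hi):
    """Interval bounds of W @ x + b for x in the box [lo, hi]."""
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    m, r = W @ mid + b, np.abs(W) @ rad
    return m - r, m + r

def naive_diff_bounds(W1, b1, W2, b2, lo, hi):
    """Bound f2(x) - f1(x) by bounding each network independently,
    then subtracting the intervals (correlation between them is lost)."""
    l1, u1 = affine_interval(W1, b1, lo, hi)
    l2, u2 = affine_interval(W2, b2, lo, hi)
    return l2 - u1, u2 - l1

def direct_diff_bounds(W1, b1, W2, b2, lo, hi):
    """Bound the difference by propagating the difference network
    (W2 - W1) @ x + (b2 - b1) -- tight when the two nets are similar."""
    return affine_interval(W2 - W1, b2 - b1, lo, hi)

W1, b1 = np.array([[1.0, 1.0]]), np.zeros(1)
W2, b2 = W1 + 0.01, b1               # a slightly perturbed copy of the first net
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
print(naive_diff_bounds(W1, b1, W2, b2, lo, hi))    # about (-4.02, 4.02)
print(direct_diff_bounds(W1, b1, W2, b2, lo, hi))   # about (-0.02, 0.02)
```

On this example the direct difference bound is two orders of magnitude tighter than the naive one, which is why exploiting the structural similarity of closely related networks pays off.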
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.