Debona: Decoupled Boundary Network Analysis for Tighter Bounds and
Faster Adversarial Robustness Proofs
- URL: http://arxiv.org/abs/2006.09040v2
- Date: Tue, 2 Feb 2021 16:53:29 GMT
- Title: Debona: Decoupled Boundary Network Analysis for Tighter Bounds and
Faster Adversarial Robustness Proofs
- Authors: Christopher Brix, Thomas Noll
- Abstract summary: Neural networks are commonly used in safety-critical real-world applications.
Proving that either no such adversarial examples exist, or providing a concrete instance, is therefore crucial to ensure safe applications.
We provide proofs for tight upper and lower bounds on max-pooling layers in convolutional networks.
- Score: 2.1320960069210484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are commonly used in safety-critical real-world applications.
Unfortunately, the predicted output is often highly sensitive to small, and
possibly imperceptible, changes to the input data. Proving that either no such
adversarial examples exist, or providing a concrete instance, is therefore
crucial to ensure safe applications. As enumerating and testing all potential
adversarial examples is computationally infeasible, verification techniques
have been developed to provide mathematically sound proofs of their absence
using overestimations of the network activations. We propose an improved
technique for computing tight upper and lower bounds of these node values,
based on increased flexibility gained by computing both bounds independently of
each other. Furthermore, we gain an additional improvement by re-implementing
part of the original state-of-the-art software "Neurify", leading to a faster
analysis. Combined, these adaptations reduce the necessary runtime by up to
94%, and allow a successful search for networks and inputs that were previously
too complex. We provide proofs for tight upper and lower bounds on max-pooling
layers in convolutional networks. To ensure widespread usability, we open
source our implementation "Debona", featuring both the implementation-specific
enhancements and the refined boundary computation for faster and more
exact results.
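The decoupling idea is easiest to see in plain interval arithmetic, where the lower and upper bound of every node are already computed by two independent formulas. The NumPy sketch below only illustrates that principle on a toy network; Debona itself propagates symbolic (linear) bounds, and all names here are mine:

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate elementwise bounds [l, u] through x -> W @ x + b.

    The lower and upper output bounds are computed independently of each
    other: positive weights pull from the matching input bound, negative
    weights from the opposite one.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lower = W_pos @ l + W_neg @ u + b
    upper = W_pos @ u + W_neg @ l + b
    return lower, upper

def interval_relu(l, u):
    """ReLU is monotone, so each bound maps through on its own."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# toy two-layer network, epsilon-ball around an input point
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x, eps = rng.normal(size=3), 0.1

l, u = x - eps, x + eps
l, u = interval_relu(*interval_affine(l, u, W1, b1))
l, u = interval_affine(l, u, W2, b2)
print("output bounds:", l, u)  # every true output is certified to lie in [l, u]
```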
Related papers
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
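A minimal sketch of the ingredient QA-IBP builds on: pushing interval bounds through a layer whose weights and inputs pass a uniform quantizer. Because rounding and clipping are monotone, quantized bounds remain sound bounds. The quantization parameters and names below are illustrative assumptions, not the paper's:

```python
import numpy as np

def quantize(x, scale=0.05, qmin=-128, qmax=127):
    """Uniform symmetric quantizer; monotone, so it maps bounds to bounds."""
    return np.clip(np.round(x / scale), qmin, qmax) * scale

def qa_interval_affine(l, u, W, b):
    """Interval propagation through an affine layer with quantized weights.

    Quantizing the input bounds is sound because the quantizer is monotone:
    quantize(l) <= quantize(x) <= quantize(u) whenever l <= x <= u.
    """
    Wq = quantize(W)                    # weights are quantized once
    lq, uq = quantize(l), quantize(u)   # sound bounds on quantized inputs
    Wp, Wn = np.maximum(Wq, 0), np.minimum(Wq, 0)
    return Wp @ lq + Wn @ uq + b, Wp @ uq + Wn @ lq + b
```

Training against a loss built from such bounds is what makes the procedure quantization-aware rather than a post-hoc check.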
- Provably Tightest Linear Approximation for Robustness Verification of Sigmoid-like Neural Networks [22.239149433169747]
The robustness of deep neural networks is crucial to modern AI-enabled systems.
Due to their non-linearity, Sigmoid-like neural networks have been adopted in a wide range of applications.
arXiv Detail & Related papers (2022-08-21T12:07:36Z)
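For intuition on what a linear approximation of a sigmoid-like activation looks like, here is one simple sound construction on an interval [l, u]: take the chord slope and shift the line until it touches the curve on either side. This is a standard relaxation for illustration only, not the paper's provably tightest one:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_linear_bounds(l, u):
    """Sound parallel bounds k*x + b_lo <= sigmoid(x) <= k*x + b_up on [l, u]."""
    k = (sigmoid(u) - sigmoid(l)) / (u - l)   # chord slope, 0 < k <= 1/4
    # stationary points of sigmoid(x) - k*x: where sigmoid'(x) = s*(1-s) = k
    cands = [l, u]
    if k < 0.25:
        r = np.sqrt(1.0 - 4.0 * k)
        for s in ((1 - r) / 2, (1 + r) / 2):
            x = np.log(s / (1 - s))           # logit: sigmoid(x) = s
            if l < x < u:
                cands.append(x)
    gaps = [sigmoid(x) - k * x for x in cands]
    return k, min(gaps), max(gaps)            # slope, b_lo, b_up

k, b_lo, b_up = sigmoid_linear_bounds(-2.0, 3.0)
xs = np.linspace(-2.0, 3.0, 5)
assert np.all(k * xs + b_lo <= sigmoid(xs) + 1e-9)  # sound up to rounding
assert np.all(k * xs + b_up >= sigmoid(xs) - 1e-9)
```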
- On Optimizing Back-Substitution Methods for Neural Network Verification [1.4394939014120451]
We present an approach for making back-substitution produce tighter bounds.
Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques.
arXiv Detail & Related papers (2022-08-16T11:16:44Z)
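Back-substitution itself can be sketched compactly: a linear function of a layer's outputs is rewritten, sign by sign, through a per-neuron ReLU relaxation and then through the preceding affine layer, and only then concretized on the input box. A hedged NumPy sketch, assuming the pre-activation bounds l1, u1 come from a coarser earlier pass; the relaxation is the standard triangle one, not the paper's optimized variant:

```python
import numpy as np

def relu_relaxation(l, u):
    """Per-neuron linear bounds a_lo*z <= relu(z) <= a_up*z + b_up on [l, u]."""
    a_up = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, u / (u - l)))
    b_up = np.where((l < 0) & (u > 0), -l * u / (u - l), 0.0)
    a_lo = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, (u > -l).astype(float)))
    return a_lo, a_up, b_up

def backsub_upper(c, W1, b1, l1, u1, x_lo, x_hi):
    """Upper-bound c @ relu(W1 @ x + b1) over the input box by substituting
    the ReLU relaxation, then the affine layer, then concretizing."""
    a_lo, a_up, b_up = relu_relaxation(l1, u1)
    coeff = np.where(c >= 0, c * a_up, c * a_lo)  # pick the sound side per sign
    const = np.sum(np.where(c >= 0, c * b_up, 0.0))
    cx = coeff @ W1                               # now linear in the input x
    const += coeff @ b1
    return cx @ np.where(cx >= 0, x_hi, x_lo) + const
```

The lower bound is symmetric, with the roles of the two relaxation sides swapped.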
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound [2.896192909215469]
We present a novel complete verifier which combines the strengths of both paradigms.
It uses multi-neuron relaxations to drastically reduce the number of subproblems generated during the BaB process.
An extensive evaluation demonstrates that our verifier achieves a new state-of-the-art on both established benchmarks as well as networks with significantly higher accuracy than previously considered.
arXiv Detail & Related papers (2022-04-30T13:12:33Z)
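The branch-and-bound (BaB) skeleton such verifiers share is short; what the paper changes is the relaxation used at the bounding step, replacing the crude single-neuron interval bound sketched here with multi-neuron relaxations so that far fewer subproblems are generated. All names below are mine:

```python
import numpy as np

def net_lower_bound(l, u, layers):
    """Cheap single-neuron interval lower bound on a scalar-output ReLU net."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        l, u = Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b
        if i < len(layers) - 1:            # ReLU on hidden layers only
            l, u = np.maximum(l, 0), np.maximum(u, 0)
    return float(l[0])

def verify(l, u, layers, threshold=0.0, depth=12):
    """Prove net(x) > threshold for all x in the box [l, u]."""
    if net_lower_bound(l, u, layers) > threshold:
        return True                        # bound alone settles this subdomain
    if depth == 0:
        return False                       # budget exhausted: unknown
    d = int(np.argmax(u - l))              # split the widest input dimension
    mid = (l[d] + u[d]) / 2
    left_u, right_l = u.copy(), l.copy()
    left_u[d], right_l[d] = mid, mid
    return (verify(l, left_u, layers, threshold, depth - 1) and
            verify(right_l, u, layers, threshold, depth - 1))
```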
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
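One simple way to get interval bounds for an implicit layer z = relu(W z + U x + b) is to iterate the corresponding interval map to a fixed point. This is a hedged sketch assuming the layer is contractive (e.g., ||W|| < 1 so the iteration converges), not the paper's mixed-monotonicity analysis:

```python
import numpy as np

def inn_interval_bounds(W, U, b, x_lo, x_hi, iters=100):
    """Interval bounds on the implicit layer z = relu(W @ z + U @ x + b),
    obtained by iterating the interval map until it (hopefully) settles."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    Up, Un = np.maximum(U, 0), np.minimum(U, 0)
    c_lo = Up @ x_lo + Un @ x_hi + b      # bounds on the U @ x + b part
    c_hi = Up @ x_hi + Un @ x_lo + b
    z_lo = np.zeros(W.shape[0])
    z_hi = np.zeros(W.shape[0])
    for _ in range(iters):
        new_lo = np.maximum(Wp @ z_lo + Wn @ z_hi + c_lo, 0)
        new_hi = np.maximum(Wp @ z_hi + Wn @ z_lo + c_hi, 0)
        z_lo, z_hi = new_lo, new_hi       # update both bounds together
    return z_lo, z_hi
```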
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks with quadratic widths in the sample size and linear in their depth at a time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
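One reading of that claim in symbols, with n the number of samples, L the depth, m the width, and T the time to reach a global minimum; the paper's exact statement, constants, and logarithmic factors differ:

```latex
m \;=\; \tilde{\Omega}\!\left(n^{2}\,L\right),
\qquad
T \;=\; O\!\left(\log (nL)\right)
```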
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
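What makes such an approach memory-efficient is that the dual objective is only ever touched through matrix-vector products, so the full matrix is never materialized. The sketch below shows that mechanic with SciPy's eigsh on a LinearOperator; the matrix here is an arbitrary stand-in, not the paper's dual operator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 10_000
d = np.random.default_rng(0).normal(size=n)

def matvec(v):
    # implicit M @ v for M = diag(d) + ones outer product (symmetric),
    # evaluated in O(n) memory without ever forming the n x n matrix
    return d * v + v.sum() * np.ones_like(v)

M = LinearOperator((n, n), matvec=matvec)
lam_max = eigsh(M, k=1, which="LA", return_eigenvectors=False)[0]
print("largest eigenvalue:", lam_max)   # memory stays linear in n
```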
- Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
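The integration alluded to above can be pictured as a branch-and-bound maximizer in which concrete evaluations raise an incumbent best value and the verifier's bound computation prunes boxes that cannot beat it; the naive alternative re-runs a full verification query for every candidate threshold. A self-contained interval-based sketch, with all names mine:

```python
import numpy as np

def output_upper_bound(l, u, layers):
    """Cheap interval upper bound on a scalar-output ReLU net over [l, u]."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        l, u = Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b
        if i < len(layers) - 1:
            l, u = np.maximum(l, 0), np.maximum(u, 0)
    return float(u[0])

def forward(x, layers):
    """Concrete network evaluation (scalar output)."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0)
    return float(x[0])

def maximize(layers, l, u, tol=1e-3):
    """Branch-and-bound maximization: evaluations raise the incumbent,
    bound computations prune boxes that cannot beat it by more than tol."""
    best = forward((l + u) / 2, layers)
    queue = [(l, u)]
    while queue:
        bl, bu = queue.pop()
        if output_upper_bound(bl, bu, layers) <= best + tol:
            continue                      # verified: nothing better in this box
        mid = (bl + bu) / 2
        best = max(best, forward(mid, layers))
        d = int(np.argmax(bu - bl))       # split the widest input dimension
        left_u, right_l = bu.copy(), bl.copy()
        left_u[d], right_l[d] = mid[d], mid[d]
        queue += [(bl, left_u), (right_l, bu)]
    return best                           # within tol of the true maximum
```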
- PEREGRiNN: Penalized-Relaxation Greedy Neural Network Verifier [1.1011268090482575]
We introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs.
We use a convex solver not only as a linear feasibility checker, but also as a means of penalizing the amount of relaxation allowed in solutions.
arXiv Detail & Related papers (2020-06-18T21:33:07Z)
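The penalized-relaxation idea can be sketched as a single LP: the triangle-relaxed ReLU constraints make the problem convex, and instead of asking the solver only for feasibility, the objective minimizes the total slack sum(y), pulling the relaxed outputs y toward relu(z). This sketch uses SciPy's linprog, handles stable neurons only loosely, and assumes pre-activation bounds l1, u1 from an earlier pass; it illustrates the idea, not the tool's actual formulation:

```python
import numpy as np
from scipy.optimize import linprog

def relaxed_feasibility(W1, b1, W2, b2, x_lo, x_hi, l1, u1, t):
    """Can the relaxed network reach output >= t? Variables are [x, z, y]
    with z = W1 x + b1 and y the relaxed relu(z); minimizing sum(y)
    penalizes how far y may sit above relu(z)."""
    n, h = W1.shape[1], W1.shape[0]
    nv = n + 2 * h
    c = np.concatenate([np.zeros(n + h), np.ones(h)])       # minimize sum(y)

    A_eq = np.hstack([-W1, np.eye(h), np.zeros((h, h))])    # z - W1 x = b1
    b_eq = b1.copy()

    rows, rhs = [], []
    for i in range(h):
        r = np.zeros(nv); r[n + i], r[n + h + i] = 1.0, -1.0
        rows.append(r); rhs.append(0.0)                     # y_i >= z_i
        if l1[i] < 0 < u1[i]:                               # unstable: triangle top
            a = u1[i] / (u1[i] - l1[i])
            r = np.zeros(nv); r[n + h + i], r[n + i] = 1.0, -a
            rows.append(r); rhs.append(-a * l1[i])          # y_i <= a*(z_i - l_i)
    r = np.zeros(nv); r[n + h:] = -W2[0]                    # W2 y + b2 >= t
    rows.append(r); rhs.append(b2[0] - t)

    bounds = [(x_lo[i], x_hi[i]) for i in range(n)]
    bounds += [(l1[i], u1[i]) for i in range(h)] + [(0, None)] * h
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    # feasible -> res.x[:n] is a candidate input to check concretely;
    # infeasible -> the property holds under the relaxation
    return res.success, (res.x[:n] if res.success else None)
```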
- ReluDiff: Differential Verification of Deep Neural Networks [8.601847909798165]
We develop a new method for differential verification of two closely related networks.
We exploit structural and behavioral similarities of the two networks to more accurately bound the difference between the output neurons of the two networks.
Our experiments show that, compared to state-of-the-art verification tools, our method can achieve orders-of-magnitude speedup.
arXiv Detail & Related papers (2020-01-10T20:47:22Z)
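The differential idea: rather than bounding each network separately and subtracting, which discards their correlation, propagate bounds on the difference itself, exploiting that ReLU is monotone and 1-Lipschitz. A minimal interval version for two same-shaped networks; the names are mine, and ReluDiff itself uses symbolic intervals rather than this plain form:

```python
import numpy as np

def iv_matmul(W, lo, hi):
    """Interval of W @ v for v in [lo, hi]."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi, Wp @ hi + Wn @ lo

def diff_bounds(layers_a, layers_b, x_lo, x_hi):
    """Bound f_a(x) - f_b(x) for two same-shaped ReLU nets by propagating
    the difference directly."""
    d_lo = np.zeros_like(x_lo, dtype=float)    # x - x = 0 at the input
    d_hi = np.zeros_like(x_hi, dtype=float)
    z_lo, z_hi = x_lo, x_hi                    # absolute bounds for net b
    for i, ((Wa, ba), (Wb, bb)) in enumerate(zip(layers_a, layers_b)):
        # Wa z_a + ba - (Wb z_b + bb) = Wa (z_a - z_b) + (Wa - Wb) z_b + (ba - bb)
        t1_lo, t1_hi = iv_matmul(Wa, d_lo, d_hi)
        t2_lo, t2_hi = iv_matmul(Wa - Wb, z_lo, z_hi)
        d_lo, d_hi = t1_lo + t2_lo + ba - bb, t1_hi + t2_hi + ba - bb
        z_lo, z_hi = iv_matmul(Wb, z_lo, z_hi)
        z_lo, z_hi = z_lo + bb, z_hi + bb
        if i < len(layers_a) - 1:
            # relu(a) - relu(b) lies in [min(d_lo, 0), max(d_hi, 0)]
            d_lo, d_hi = np.minimum(d_lo, 0), np.maximum(d_hi, 0)
            z_lo, z_hi = np.maximum(z_lo, 0), np.maximum(z_hi, 0)
    return d_lo, d_hi
```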
This list is automatically generated from the titles and abstracts of the papers on this site.