ReluDiff: Differential Verification of Deep Neural Networks
- URL: http://arxiv.org/abs/2001.03662v2
- Date: Wed, 29 Jan 2020 21:29:59 GMT
- Title: ReluDiff: Differential Verification of Deep Neural Networks
- Authors: Brandon Paulsen, Jingbo Wang, Chao Wang
- Abstract summary: We develop a new method for differential verification of two closely related networks.
We exploit structural and behavioral similarities of the two networks to more accurately bound the difference between the output neurons of the two networks.
Our experiments show that, compared to state-of-the-art verification tools, our method can achieve orders-of-magnitude speedup.
- Score: 8.601847909798165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As deep neural networks are increasingly being deployed in practice, their
efficiency has become an important issue. While there are compression
techniques for reducing the network's size, energy consumption and
computational requirements, they only demonstrate empirically that there is no
loss of accuracy, but lack formal guarantees for the compressed network, e.g.,
in the presence of adversarial examples. Existing verification techniques such
as Reluplex, ReluVal, and DeepPoly provide formal guarantees, but they are
designed for analyzing a single network instead of the relationship between two
networks. To fill the gap, we develop a new method for differential
verification of two closely related networks. Our method consists of a fast but
approximate forward interval analysis pass followed by a backward pass that
iteratively refines the approximation until the desired property is verified.
We have two main innovations. During the forward pass, we exploit structural
and behavioral similarities of the two networks to more accurately bound the
difference between the output neurons of the two networks. Then in the backward
pass, we leverage the gradient differences to more accurately compute the most
beneficial refinement. Our experiments show that, compared to state-of-the-art
verification tools, our method can achieve orders-of-magnitude speedup and
prove many more properties.
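For intuition, here is a minimal NumPy sketch (not the authors' implementation) of why bounding the difference between two structurally similar networks directly is tighter than analyzing each network in isolation and subtracting the results. The layer weights, the rounding used to mimic compression, and the input box are illustrative assumptions, and only a single affine layer is shown; ReluDiff additionally carries such difference intervals through the ReLU activations via case analysis and sharpens them with a gradient-guided backward refinement pass.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Sound interval bounds for W @ x + b over the input box [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))     # original (hypothetical) layer weights
b1 = rng.normal(size=3)
W2 = np.round(W1, 1)             # "compressed" copy: weights truncated to one decimal
x_lo = np.array([-1.0, -1.0])    # illustrative input box
x_hi = np.array([ 1.0,  1.0])

# Naive: bound each network separately, then subtract the two output boxes.
l1, u1 = interval_affine(x_lo, x_hi, W1, b1)
l2, u2 = interval_affine(x_lo, x_hi, W2, b1)
naive_lo, naive_hi = l1 - u2, u1 - l2

# Difference-directed: both layers share the same input and bias, so
# f1(x) - f2(x) = (W1 - W2) @ x. Bounding this expression directly lets the
# shared structure cancel and yields much tighter intervals.
diff_lo, diff_hi = interval_affine(x_lo, x_hi, W1 - W2, np.zeros(3))

print("naive difference bounds:    ", naive_lo, naive_hi)
print("structure-aware bounds:     ", diff_lo, diff_hi)
```

Roughly speaking, the paper's forward pass generalizes this idea layer by layer, keeping an interval on each neuron's value difference in addition to intervals on the values themselves.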
Related papers
- Provably Tightest Linear Approximation for Robustness Verification of Sigmoid-like Neural Networks [22.239149433169747]
The robustness of deep neural networks is crucial to modern AI-enabled systems.
Due to their non-linearity, Sigmoid-like neural networks have been adopted in a wide range of applications.
arXiv Detail & Related papers (2022-08-21T12:07:36Z)
- Unified Field Theory for Deep and Recurrent Neural Networks [56.735884560668985]
We present a unified and systematic derivation of the mean-field theory for both recurrent and deep networks.
We find that convergence towards the mean-field theory is typically slower for recurrent networks than for deep networks.
Our method exposes that Gaussian processes are but the lowest order of a systematic expansion in $1/n$.
arXiv Detail & Related papers (2021-12-10T15:06:11Z)
- Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks with widths quadratic in the sample size and linear in their depth, at a time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- NeuroDiff: Scalable Differential Verification of Neural Networks using Fine-Grained Approximation [18.653663583989122]
NeuroDiff is a symbolic and fine-grained approximation technique that drastically increases the accuracy of differential verification.
Our results show that NeuroDiff is up to 1000X faster and 5X more accurate than the state-of-the-art tool.
arXiv Detail & Related papers (2020-09-21T15:00:25Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
- Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs [2.1320960069210484]
Neural networks are commonly used in safety-critical real-world applications, where adversarial examples pose a serious risk.
Proving that no such adversarial examples exist, or providing a concrete instance, is therefore crucial for safe deployment.
We provide proofs for tight upper and lower bounds on max-pooling layers in convolutional networks (plain interval propagation through max-pooling is sketched below for intuition).
arXiv Detail & Related papers (2020-06-16T10:00:33Z)
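As a small illustration of the kind of bound mentioned in the Debona entry above, the following sketch propagates interval bounds through a 2x2 max-pooling window using plain interval arithmetic; it is a generic sketch, not Debona's decoupled relaxation, and the feature-map bounds are illustrative assumptions.

```python
import numpy as np

def maxpool_interval(lo, hi, k=2):
    """Sound interval bounds for 2D max-pooling with a k x k window and stride k.

    For y = max(x_1, ..., x_m) with each x_i in [lo_i, hi_i], the interval
    [max_i lo_i, max_i hi_i] is sound: the true maximum is at least the largest
    lower bound and at most the largest upper bound.
    """
    h, w = lo.shape
    lo = lo[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
    hi = hi[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
    return lo.max(axis=(1, 3)), hi.max(axis=(1, 3))

# Illustrative 4x4 feature-map bounds, e.g. produced by an earlier interval analysis pass.
lo = np.arange(16, dtype=float).reshape(4, 4) - 8.0
hi = lo + 0.5
pool_lo, pool_hi = maxpool_interval(lo, hi, k=2)
print(pool_lo)  # lower bounds of the pooled 2x2 output
print(pool_hi)  # upper bounds of the pooled 2x2 output
```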
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.