Provably Tightest Linear Approximation for Robustness Verification of
Sigmoid-like Neural Networks
- URL: http://arxiv.org/abs/2208.09872v1
- Date: Sun, 21 Aug 2022 12:07:36 GMT
- Title: Provably Tightest Linear Approximation for Robustness Verification of
Sigmoid-like Neural Networks
- Authors: Zhaodi Zhang, Yiting Wu, Si Liu, Jing Liu, Min Zhang
- Abstract summary: The robustness of deep neural networks is crucial to modern AI-enabled systems.
Sigmoid-like neural networks have been adopted in a wide range of applications; due to their non-linearity, their activation functions must be over-approximated for efficient verification.
- Score: 22.239149433169747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The robustness of deep neural networks is crucial to modern AI-enabled
systems and should be formally verified. Sigmoid-like neural networks have been
adopted in a wide range of applications. Due to their non-linearity,
Sigmoid-like activation functions are usually over-approximated for efficient
verification, which inevitably introduces imprecision. Considerable efforts
have been devoted to finding the so-called tighter approximations to obtain
more precise verification results. However, existing tightness definitions are
heuristic and lack theoretical foundations. We conduct a thorough empirical
analysis of existing neuron-wise characterizations of tightness and reveal that
they are superior only on specific neural networks. We then introduce the
notion of network-wise tightness as a unified tightness definition and show
that computing network-wise tightness is a complex non-convex optimization
problem. We bypass the complexity from different perspectives via two
efficient, provably tightest approximations. The results demonstrate the
promising performance of our approaches over the state of the art: (i)
achieving up to 251.28% improvement to certified lower robustness bounds; and
(ii) exhibiting notably more precise verification results on convolutional
networks.
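As a toy illustration of the kind of linear over-approximation at issue, the sketch below computes sound linear lower and upper bounds for a sigmoid neuron on an input interval [l, u]. This follows standard CROWN-style case splits, not the paper's provably tightest construction; all function names are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dsigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def linear_bounds(l, u):
    """Sound lines (k, b): k*x + b bounds sigmoid(x) on [l, u]. Assumes l < u."""
    k = (sigmoid(u) - sigmoid(l)) / (u - l)
    chord = (k, sigmoid(l) - k * l)                        # line through both endpoints
    m = (l + u) / 2.0
    tangent = (dsigmoid(m), sigmoid(m) - dsigmoid(m) * m)  # tangent at the midpoint
    if l >= 0.0:   # sigmoid is concave on [l, u]: chord below, tangent above
        return chord, tangent
    if u <= 0.0:   # sigmoid is convex on [l, u]: tangent below, chord above
        return tangent, chord
    # mixed curvature: constant bounds from monotonicity (sound but loose)
    return (0.0, sigmoid(l)), (0.0, sigmoid(u))

(lk, lb), (uk, ub) = linear_bounds(0.5, 2.0)
print("lower line:", lk, lb, "upper line:", uk, ub)
```

How the bounding lines are chosen, especially in the mixed-curvature case, is exactly where competing tightness definitions diverge.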
Related papers
- Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
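For context, the basic interval-bound-propagation step that QA-IBP builds on pushes a box through each affine layer as below (a generic IBP sketch, not the authors' quantization-aware variant):

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate element-wise bounds lo <= x <= hi through x -> W @ x + b."""
    c = (lo + hi) / 2.0        # box center
    r = (hi - lo) / 2.0        # box radius
    c_out = W @ c + b
    r_out = np.abs(W) @ r      # the radius expands through |W|
    return c_out - r_out, c_out + r_out

lo, hi = np.array([-1.0, 0.0]), np.array([1.0, 2.0])
W, b = np.array([[1.0, -2.0], [0.5, 1.0]]), np.zeros(2)
print(ibp_affine(lo, hi, W, b))
```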
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- DualApp: Tight Over-Approximation for Neural Network Robustness Verification via Under-Approximation [17.924507519230424]
We propose a novel under-approximation-guided approach to define tight over-approximations and two complementary under-approximation algorithms.
The overestimated domain guarantees the soundness while the underestimated one guides the tightness.
We implement our approach into a tool called DualApp and extensively evaluate it on a benchmark of 84 collected and trained neural networks.
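The guiding idea can be illustrated with a sampling-based under-approximation (a toy sketch, not DualApp's actual algorithms; `sampled_range` is our name): every sampled output is genuinely reachable, so the sampled range lies inside the true range and shows how loose a sound over-approximation is.

```python
import numpy as np

def sampled_range(f, lo, hi, n=10000, seed=0):
    """Monte-Carlo under-approximation of the range of f over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(lo, hi, size=(n, len(lo)))
    ys = np.array([f(x) for x in xs])
    # every sampled output is reachable, so [min, max] is contained
    # in the true range of f over the box
    return ys.min(axis=0), ys.max(axis=0)
```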
arXiv Detail & Related papers (2022-11-21T05:09:34Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
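For reference, the simplest form of the pruning involved is unstructured magnitude pruning, sketched below (the paper's training and pruning schedule is more involved):

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights in W."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]  # k-th smallest |w|
    return np.where(np.abs(W) > thresh, W, 0.0)
```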
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
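A minimal sketch of interval reachability for an implicit layer, assuming the form z = relu(W z + U x + b) and a contracting iteration (illustrative only; the paper's analysis is more general):

```python
import numpy as np

def inn_interval(W, U, b, x_lo, x_hi, iters=100):
    """Iterate interval bounds toward a fixed point of z = relu(W z + U x + b)."""
    z_lo = np.zeros(len(b))
    z_hi = np.zeros(len(b))
    cx, rx = (x_lo + x_hi) / 2.0, (x_hi - x_lo) / 2.0
    for _ in range(iters):
        cz, rz = (z_lo + z_hi) / 2.0, (z_hi - z_lo) / 2.0
        c = W @ cz + U @ cx + b
        r = np.abs(W) @ rz + np.abs(U) @ rx
        # relu is monotone, so it can be applied to the endpoints directly
        z_lo, z_hi = np.maximum(c - r, 0.0), np.maximum(c + r, 0.0)
    return z_lo, z_hi
```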
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Scalable Verification of Quantized Neural Networks (Technical Report) [14.04927063847749]
We show that verifying bit-exact implementations of quantized neural networks against bit-vector specifications is PSPACE-hard.
We propose three techniques for making SMT-based verification of quantized neural networks more scalable.
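The bit-exact semantics such verifiers must encode reduces to integer arithmetic with saturation; a sketch of standard uniform affine quantization (not the paper's SMT encoding) is:

```python
def quantize(x, scale, zero_point, bits=8):
    """Uniform affine quantization of a real value to a signed `bits`-bit integer."""
    qmin, qmax = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # saturate to the representable range

print(quantize(0.37, scale=0.01, zero_point=0))  # 37
print(quantize(9.99, scale=0.01, zero_point=0))  # saturates to 127
```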
arXiv Detail & Related papers (2020-12-15T10:05:37Z)
- Analytical aspects of non-differentiable neural networks [0.0]
We discuss the expressivity of quantized neural networks and approximation techniques for non-differentiable networks.
We show that QNNs have the same expressivity as DNNs in terms of approximation of Lipschitz functions in the $L^\infty$ norm.
We also consider networks defined by means of Heaviside-type activation functions, and prove for them a pointwise approximation result by means of smooth networks.
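The pointwise approximation of Heaviside-type activations by smooth networks can be illustrated with a temperature-scaled sigmoid (an illustrative example, not the paper's construction): sigmoid(x/t) converges to the step function for every x != 0 as t -> 0.

```python
import numpy as np

def smooth_step(x, t):
    """Temperature-scaled sigmoid; approaches the Heaviside step as t -> 0."""
    return 1.0 / (1.0 + np.exp(-x / t))

x = np.array([-0.5, -0.01, 0.01, 0.5])
for t in (1.0, 0.1, 0.01):
    print(t, np.round(smooth_step(x, t), 4))
```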
arXiv Detail & Related papers (2020-11-03T17:20:43Z)
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy on two benchmark networks: from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- Do Wider Neural Networks Really Help Adversarial Robustness? [92.8311752980399]
We show that the model robustness is closely related to the tradeoff between natural accuracy and perturbation stability.
We propose a new Width Adjusted Regularization (WAR) method that adaptively enlarges the robust-regularization coefficient $\lambda$ on wide models.
arXiv Detail & Related papers (2020-10-03T04:46:17Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
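A generic iterative mask-discovery loop of this flavor looks as follows (a sketch; ESPN's objective and retraining schedule differ in the details):

```python
import numpy as np

def iterative_mask(W, target_sparsity, rounds=5):
    """Grow a pruning mask toward `target_sparsity` over several rounds."""
    mask = np.ones_like(W, dtype=bool)
    for r in range(1, rounds + 1):
        s = target_sparsity * r / rounds                 # this round's sparsity level
        k = int(W.size * s)
        if k == 0:
            continue
        scores = np.abs(np.where(mask, W, 0.0)).ravel()  # pruned weights score 0
        thresh = np.partition(scores, k - 1)[k - 1]
        mask &= np.abs(W) > thresh
        # (a real method would retrain / fine-tune W under `mask` here)
    return mask
```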
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
- Debona: Decoupled Boundary Network Analysis for Tighter Bounds and Faster Adversarial Robustness Proofs [2.1320960069210484]
Neural networks are commonly used in safety-critical real-world applications.
Proving that no adversarial examples exist, or otherwise providing a concrete instance, is therefore crucial to ensure safe applications.
We provide proofs for tight upper and lower bounds on max-pooling layers in convolutional networks.
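The interval version of the max-pooling bounds referenced here is simple to state (a sketch of the interval case only; the paper works with tighter symbolic bounds):

```python
import numpy as np

def maxpool_bounds(lo, hi):
    """Tight bounds on y = max_i x_i for lo <= x <= hi element-wise.
    y >= x_j >= lo_j for every j, hence y >= max(lo); and clearly y <= max(hi)."""
    return lo.max(), hi.max()

print(maxpool_bounds(np.array([0.1, -0.2, 0.4]), np.array([0.3, 0.9, 0.6])))  # (0.4, 0.9)
```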
arXiv Detail & Related papers (2020-06-16T10:00:33Z)
- ReluDiff: Differential Verification of Deep Neural Networks [8.601847909798165]
We develop a new method for differential verification of two closely related networks.
We exploit structural and behavioral similarities of the two networks to more accurately bound the difference between the output neurons of the two networks.
Our experiments show that, compared to state-of-the-art verification tools, our method can achieve orders-of-magnitude speedup.
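The core trick, propagating the difference between the two networks directly instead of subtracting two sets of independent bounds, can be sketched for one affine layer (illustrative, not ReluDiff's full algorithm). With z1 = z2 + d, the output difference is W1 z1 + b1 - (W2 z2 + b2) = W1 d + (W1 - W2) z2 + (b1 - b2):

```python
import numpy as np

def interval_affine(lo, hi, M):
    """Interval image of x -> M @ x for lo <= x <= hi."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    return M @ c - np.abs(M) @ r, M @ c + np.abs(M) @ r

def diff_affine(d_lo, d_hi, z2_lo, z2_hi, W1, b1, W2, b2):
    """Bound W1 z1 + b1 - (W2 z2 + b2) given bounds on d = z1 - z2 and on z2."""
    a_lo, a_hi = interval_affine(d_lo, d_hi, W1)         # W1 @ d term
    e_lo, e_hi = interval_affine(z2_lo, z2_hi, W1 - W2)  # (W1 - W2) @ z2 term
    db = b1 - b2
    return a_lo + e_lo + db, a_hi + e_hi + db
```

When W1 and W2 are close, the (W1 - W2) term is tiny, which is why the difference bounds stay far tighter than subtracting independently computed output intervals.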
arXiv Detail & Related papers (2020-01-10T20:47:22Z)