Efficient Global Robustness Certification of Neural Networks via
Interleaving Twin-Network Encoding
- URL: http://arxiv.org/abs/2203.14141v1
- Date: Sat, 26 Mar 2022 19:23:37 GMT
- Title: Efficient Global Robustness Certification of Neural Networks via
Interleaving Twin-Network Encoding
- Authors: Zhilu Wang, Chao Huang, Qi Zhu
- Abstract summary: We formulate the global robustness certification for neural networks with ReLU activation functions as a mixed-integer linear programming (MILP) problem.
Our approach includes a novel interleaving twin-network encoding scheme, where two copies of the neural network are encoded side-by-side.
A case study of closed-loop control safety verification is conducted, demonstrating the importance and practicality of our approach.
- Score: 8.173681464694651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The robustness of deep neural networks has received significant interest
recently, especially when being deployed in safety-critical systems, as it is
important to analyze how sensitive the model output is under input
perturbations. While most previous works have focused on the local robustness
property around an input sample, studies of the global robustness property,
which bounds the maximum output change under perturbations over the entire
input space, are still lacking. In this work, we formulate the global
robustness certification for neural networks with ReLU activation functions as
a mixed-integer linear programming (MILP) problem, and present an efficient
approach to address it. Our approach includes a novel interleaving twin-network
encoding scheme, where two copies of the neural network are encoded
side-by-side with extra interleaving dependencies added between them, and an
over-approximation algorithm leveraging relaxation and refinement techniques to
reduce complexity. Experiments demonstrate the timing efficiency of our
approach compared with previous global robustness certification methods, as
well as the tightness of our over-approximation. A case study of closed-loop
control safety verification is conducted, demonstrating the importance and
practicality of our approach for certifying the global robustness of neural
networks in safety-critical systems.
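
As a concrete illustration of the formulation, below is a minimal sketch of a twin-network MILP encoding for a toy one-neuron ReLU network, written with the PuLP modeling library and the standard big-M encoding of ReLU. The weights, bounds, and the simple input-difference coupling are illustrative stand-ins; the paper's interleaving scheme adds richer dependencies between corresponding neurons of the two copies.

    from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, PULP_CBC_CMD, value

    # Toy network y = w2 * relu(w1 * x + b1) + b2 with made-up weights.
    w1, b1, w2, b2 = 1.5, -0.5, 2.0, 0.3
    X_LO, X_HI = -1.0, 1.0                   # the entire input domain
    DELTA = 0.1                              # global input perturbation bound
    LO, UP = w1 * X_LO + b1, w1 * X_HI + b1  # pre-activation bounds (w1 > 0)

    prob = LpProblem("global_robustness", LpMaximize)

    def relu_milp(name, z, lo, up):
        # Standard big-M MILP encoding of a = relu(z), given lo <= z <= up.
        a = LpVariable(f"a_{name}", lowBound=0)
        d = LpVariable(f"d_{name}", cat=LpBinary)
        prob += a >= z
        prob += a <= z - lo * (1 - d)
        prob += a <= up * d
        return a

    # Two copies of the same network, encoded side by side.
    x1 = LpVariable("x1", lowBound=X_LO, upBound=X_HI)
    x2 = LpVariable("x2", lowBound=X_LO, upBound=X_HI)
    a1 = relu_milp("copy1", w1 * x1 + b1, LO, UP)
    a2 = relu_milp("copy2", w1 * x2 + b1, LO, UP)
    y1, y2 = w2 * a1 + b2, w2 * a2 + b2

    # Couple the copies through the bounded input difference.
    prob += x1 - x2 <= DELTA
    prob += x2 - x1 <= DELTA

    prob += y1 - y2                          # objective: worst-case output gap
    prob.solve(PULP_CBC_CMD(msg=0))
    print("certified max output change:", value(y1 - y2))

Maximizing y1 - y2 suffices because the encoding is symmetric in the two copies; the optimum (0.3 for these weights) bounds the output change for every input pair in the domain, which is what makes the property global rather than local.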
Related papers
- Learning-Based Verification of Stochastic Dynamical Systems with Neural Network Policies [7.9898826915621965]
We use a verification procedure that trains another neural network, which acts as a certificate proving that the policy satisfies the task.
For reach-avoid tasks, it suffices to show that this certificate network is a reach-avoid supermartingale (RASM); the typical RASM conditions are sketched after this entry.
arXiv Detail & Related papers (2024-06-02T18:19:19Z)
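
For context, and hedged because this paper's exact conditions may differ, the RASM literature typically asks the certificate network V to satisfy conditions of roughly the following shape, for a target set T, unsafe set U, initial set X_0, and probability bound p:

    \[
    V(x) \ge 0 \;\; \forall x, \qquad
    V(x) \le 1 \;\; \forall x \in X_0, \qquad
    V(x) \ge \frac{1}{1-p} \;\; \forall x \in U,
    \]
    \[
    \mathbb{E}\!\left[ V(x_{t+1}) \mid x_t = x \right] \le V(x) - \varepsilon
    \quad \forall x \notin T \cup U, \ \text{for some } \varepsilon > 0.
    \]

The expected-decrease condition drives trajectories toward the target, while the barrier-style condition on U makes unsafe states improbable; together they certify reach-avoidance with probability at least p.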
- Certifying Global Robustness for Deep Neural Networks [3.8556106468003613]
A globally robust deep neural network resists perturbations on all meaningful inputs.
Current robustness certification methods emphasize local robustness, struggling to scale and generalize.
This paper presents a systematic and efficient method to evaluate and verify global robustness for deep neural networks.
arXiv Detail & Related papers (2024-05-31T00:46:04Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs; plain interval bound propagation is sketched after this entry.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
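
For background, interval bound propagation itself is simple: an input box is pushed through each affine layer in center/radius form and through ReLU monotonically. A minimal float-weight sketch in NumPy (the quantization-aware part of QA-IBP is not shown, and the toy network is made up):

    import numpy as np

    def ibp_affine(lo, hi, W, b):
        # Box through x -> W @ x + b: mu' = W mu + b, r' = |W| r.
        mu, r = (lo + hi) / 2.0, (hi - lo) / 2.0
        mu2, r2 = W @ mu + b, np.abs(W) @ r
        return mu2 - r2, mu2 + r2

    def ibp_relu(lo, hi):
        # ReLU is monotone, so it maps the box elementwise.
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    # Certified output bounds over an eps-ball around x for a toy 2-layer net.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)
    x, eps = rng.normal(size=4), 0.1
    lo, hi = ibp_relu(*ibp_affine(x - eps, x + eps, W1, b1))
    lo, hi = ibp_affine(lo, hi, W2, b2)
    print("certified output box:", lo, hi)

Training with IBP folds these worst-case output bounds into the loss, which is what makes the certificate differentiable end to end.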
- A Tool for Neural Network Global Robustness Certification and Training [12.349979558107496]
A certified globally robust network can ensure its robustness on any possible network input.
The state-of-the-art global robustness certification algorithm can only certify networks with at most several thousand neurons.
We propose the GPU-supported global robustness certification framework GROCET, which is more efficient than the previous optimization-based certification approach.
arXiv Detail & Related papers (2022-08-15T15:58:16Z)
- On the Robustness and Anomaly Detection of Sparse Neural Networks [28.832060124537843]
We show that sparsity can make networks more robust and better anomaly detectors.
We also show that structured sparsity greatly helps in reducing the complexity of expensive robustness and detection methods.
We introduce a new method, SensNorm, which uses the sensitivity of weights derived from an appropriate pruning method to detect anomalous samples; a generic score of this kind is sketched after this entry.
arXiv Detail & Related papers (2022-07-09T09:03:52Z)
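
The blurb does not give SensNorm's exact form, so the following is only a generic weight-sensitivity anomaly score in that spirit: the norm of the per-sample gradient of the output with respect to the surviving (unpruned) weights. The network and pruning mask are made up:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(8, 4))
    mask = (np.abs(W1) > 0.5).astype(float)  # stand-in for a pruning mask
    W1 *= mask                               # sparse first layer
    W2 = rng.normal(size=(1, 8))

    def sensitivity_score(x):
        # Tiny ReLU net y = W2 relu(W1 x); gradients written out by hand.
        pre = W1 @ x
        h = np.maximum(pre, 0.0)
        gW2 = h[None, :]                     # dy/dW2
        gpre = W2[0] * (pre > 0)             # dy/dpre
        gW1 = np.outer(gpre, x) * mask       # dy/dW1, surviving weights only
        return np.sqrt((gW1 ** 2).sum() + (gW2 ** 2).sum())

    print(sensitivity_score(rng.normal(size=4)))         # typical sample
    print(sensitivity_score(10.0 * rng.normal(size=4)))  # out-of-distribution

Samples whose score lands far in the tails of the scores seen on training data would be flagged as anomalous.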
- On Feature Learning in Neural Networks with Global Convergence Guarantees [49.870593940818715]
We study the optimization of wide neural networks (NNs) via gradient flow (GF).
We show that when the input dimension is no less than the size of the training set, the training loss converges to zero at a linear rate under GF; a toy gradient-descent illustration follows this entry.
We also show empirically that, unlike in the Neural Tangent Kernel (NTK) regime, our multi-layer model exhibits feature learning and can achieve better generalization performance than its NTK counterpart.
arXiv Detail & Related papers (2022-04-22T15:56:43Z)
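
As a toy empirical illustration of that regime (input dimension at least the number of samples), small-step gradient descent approximates gradient flow, and the training loss of a modestly wide two-layer ReLU network then decays roughly geometrically. The width, step size, and problem sizes below are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 5, 10, 256                 # n samples <= input dim d, width m
    X, y = rng.normal(size=(n, d)), rng.normal(size=n)
    W = rng.normal(size=(m, d)) / np.sqrt(d)
    a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

    lr = 0.05                            # small step ~ discretized gradient flow
    for t in range(2001):
        pre = X @ W.T                    # (n, m) pre-activations
        h = np.maximum(pre, 0.0)
        err = h @ a - y
        if t % 500 == 0:
            print(t, 0.5 * (err ** 2).mean())
        gout = err / n                   # d(loss)/d(output)
        ga = h.T @ gout                  # gradient for the output layer
        gW = (np.outer(gout, a) * (pre > 0)).T @ X   # gradient for W
        W -= lr * gW
        a -= lr * ga

Because both layers are trained, the hidden-layer weights W move away from their initialization, which is the feature-learning behavior the entry contrasts with the frozen-kernel NTK regime.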
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs; a minimal interval sketch of an implicit layer follows this entry.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
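
To unpack the previous entry: an implicit layer solves a fixed-point equation z = phi(W z + U x + b) instead of computing one feed-forward pass, and when the layer is contractive the same equation can be iterated in interval arithmetic to enclose its output over an input box. A rough sketch with made-up weights, forcing well-posedness via ||W||_inf < 1:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.uniform(-1, 1, size=(6, 6))
    W /= 1.1 * np.abs(W).sum(axis=1).max()   # enforce ||W||_inf < 1
    U, b = rng.normal(size=(6, 3)), np.zeros(6)

    def box_affine(lo, hi, A):
        mu, r = (lo + hi) / 2.0, (hi - lo) / 2.0
        return A @ mu - np.abs(A) @ r, A @ mu + np.abs(A) @ r

    def implicit_layer_box(x_lo, x_hi, iters=200):
        # Iterate z = relu(W z + U x + b) on boxes; contraction in the
        # inf-norm shrinks the gap geometrically, so the limit box encloses
        # the layer's fixed points for every input in the box.
        u_lo, u_hi = box_affine(x_lo, x_hi, U)
        z_lo, z_hi = np.zeros(6), np.zeros(6)
        for _ in range(iters):
            t_lo, t_hi = box_affine(z_lo, z_hi, W)
            z_lo = np.maximum(t_lo + u_lo + b, 0.0)
            z_hi = np.maximum(t_hi + u_hi + b, 0.0)
        return z_lo, z_hi

    x = rng.normal(size=3)
    print(implicit_layer_box(x - 0.1, x + 0.1))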
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights; one plausible formalization is sketched after this entry.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
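
A hedged reading of that formalization (the paper's exact definition and choice of norms may differ): rather than perturbing the input alone, one bounds the worst-case loss under simultaneous perturbations of the input x and the weights theta,

    \[
    \max_{\|\delta\| \le \epsilon_x,\; \|\Delta\| \le \epsilon_\theta}
    \mathcal{L}\big( f_{\theta + \Delta}(x + \delta),\, y \big),
    \]

which reduces to standard adversarial robustness at epsilon_theta = 0 and to pure weight robustness at epsilon_x = 0.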
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- Global Robustness Verification Networks [33.52550848953545]
We develop a global robustness verification framework with three components.
A new network architecture, the Sliding Door Network (SDN), enables feasible rule-based back-propagation.
We demonstrate the effectiveness of our approach on both synthetic and real datasets.
arXiv Detail & Related papers (2020-06-08T08:09:20Z)
- BiDet: An Efficient Binarized Object Detector [96.19708396510894]
We propose a binarized neural network learning method called BiDet for efficient object detection.
Our BiDet fully utilizes the representational capacity of the binary neural networks for object detection by redundancy removal.
Our method outperforms the state-of-the-art binary neural networks by a sizable margin; generic weight binarization is sketched after this entry.
arXiv Detail & Related papers (2020-03-09T08:16:16Z)
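
As background on the binarized part (generic BNN mechanics, not BiDet's detection-specific training): weights are snapped to +/-1 with sign in the forward pass, while gradients flow to latent real-valued weights through the straight-through estimator. A minimal sketch; since the output is discrete, the toy error need not reach zero:

    import numpy as np

    rng = np.random.default_rng(0)
    w_real = rng.normal(size=4)          # latent full-precision weights
    x = rng.normal(size=4)
    target, lr = 1.0, 0.5

    for step in range(20):
        w_bin = np.sign(w_real)          # forward pass uses binary weights
        err = float(w_bin @ x) - target
        # Straight-through estimator: treat d(w_bin)/d(w_real) as 1
        # inside the clipping region |w_real| <= 1.
        g = err * x * (np.abs(w_real) <= 1.0)
        w_real -= lr * g
        print(step, round(err ** 2, 4))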
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.