Accelerating Robustness Verification of Deep Neural Networks Guided by
Target Labels
- URL: http://arxiv.org/abs/2007.08520v2
- Date: Mon, 27 Jul 2020 00:04:38 GMT
- Title: Accelerating Robustness Verification of Deep Neural Networks Guided by
Target Labels
- Authors: Wenjie Wan, Zhaodi Zhang, Yiwei Zhu, Min Zhang, Fu Song
- Abstract summary: Deep Neural Networks (DNNs) have become key components of many safety-critical applications such as autonomous driving and medical diagnosis.
DNNs suffer from poor robustness due to their susceptibility to adversarial examples, where small perturbations to an input result in misprediction.
We propose a novel approach that can accelerate the robustness verification techniques by guiding the verification with target labels.
- Score: 8.9960048245668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have become key components of many
safety-critical applications such as autonomous driving and medical diagnosis.
However, DNNs have been shown to suffer from poor robustness due to their
susceptibility to adversarial examples, where small perturbations to an input
result in misprediction. To address this concern, various approaches have been
proposed to formally verify the robustness of DNNs. Most of these approaches
reduce the verification problem to an optimization problem of searching for an
adversarial example for a given input, i.e., a perturbed input that is no
longer classified as the original label. However, they are limited in accuracy and
scalability. In this paper, we propose a novel approach that can accelerate the
robustness verification techniques by guiding the verification with target
labels. The key insight of our approach is that the robustness verification
problem of DNNs can be solved by verifying sub-problems of DNNs, one per target
label. Fixing the target label during verification can drastically reduce the
search space and thus improve efficiency. We also propose an approach that
leverages symbolic interval propagation and linear relaxation techniques to
sort the target labels by the likelihood that adversarial examples exist.
This often allows us to quickly falsify the robustness of a DNN, so that
verification of the remaining target labels can be avoided. Our approach is
orthogonal to, and can be integrated with, many existing verification
techniques. For evaluation purposes, we integrate it with three recent
promising DNN verification tools, i.e., MipVerify, DeepZ, and Neurify.
Experimental results show that our approach can significantly improve these
tools, achieving a 36X speedup when the perturbation distance is set within a
reasonable range.
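To make the target-label-guided scheme concrete, below is a minimal Python sketch, not the authors' implementation: it uses plain interval bound propagation as a stand-in for the symbolic interval propagation and linear relaxation described in the abstract, and the `complete_verify` hook is a hypothetical placeholder for a backend verifier such as MipVerify, DeepZ, or Neurify.

```python
import numpy as np

def ibp_bounds(weights, biases, lo, hi):
    """Propagate an input box [lo, hi] through a fully connected ReLU network."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
        mid = W @ mid + b
        rad = np.abs(W) @ rad
        lo, hi = mid - rad, mid + rad
        if i < len(weights) - 1:            # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi                           # element-wise bounds on the output logits

def verify_by_target_labels(weights, biases, x, eps, true_label, complete_verify=None):
    """Verify local robustness of x within an L-infinity ball of radius eps,
    one sub-problem per target label, most promising target labels first."""
    lo, hi = ibp_bounds(weights, biases, x - eps, x + eps)
    targets = [c for c in range(len(lo)) if c != true_label]
    # Lower bound on logit(true) - logit(c) over the whole input box.
    margin_lb = {c: lo[true_label] - hi[c] for c in targets}
    # A small or negative bound suggests an adversarial example is more likely.
    for c in sorted(targets, key=lambda c: margin_lb[c]):
        if margin_lb[c] > 0:
            continue                        # this target label is already proved safe
        if complete_verify is None:
            return None, c                  # unknown: a complete backend is needed
        robust, cex = complete_verify(weights, biases, x, eps, true_label, c)
        if not robust:
            return False, cex               # falsified: skip all remaining labels
    return True, None                       # robust against every target label
```

The ordering is what gives the speedup in this sketch: target labels with a small or negative margin lower bound are checked first, so a counterexample tends to be found early and the remaining sub-problems can be skipped, while labels whose margin lower bound is already positive never reach the expensive backend at all.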
Related papers
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating
Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of their predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z) - Dynamics-Aware Loss for Learning with Label Noise [73.75129479936302]
Label noise poses a serious threat to deep neural networks (DNNs).
We propose a dynamics-aware loss (DAL) to solve this problem.
Both the detailed theoretical analyses and extensive experimental results demonstrate the superiority of our method.
arXiv Detail & Related papers (2023-03-21T03:05:21Z) - OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep
Neural Networks [7.797299214812479]
Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs).
It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors.
Most existing robustness verification approaches for DNNs are focused on non-semantic perturbations.
arXiv Detail & Related papers (2023-01-27T18:54:00Z) - The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation [70.75100533512021]
In this paper, we formulate the label uncertainty problem as the diversity of potentially plausible bounding boxes of objects.
We propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables.
The label uncertainty generated by GLENet is a plug-and-play module and can be conveniently integrated into existing deep 3D detectors.
arXiv Detail & Related papers (2022-07-06T06:26:17Z) - Abstraction and Refinement: Towards Scalable and Exact Verification of
Neural Networks [9.85360493553261]
Deep neural networks (DNNs) have been increasingly deployed in practice, but the lack of robustness hinders their applications in safety-critical domains.
We present a novel abstraction-refinement approach for scalable and exact DNN verification.
arXiv Detail & Related papers (2022-07-02T07:04:20Z) - Neural Network Verification with Proof Production [7.898605407936655]
We present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities.
Our proof production is based on an efficient adaptation of the well-known Farkas' lemma.
Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases.
arXiv Detail & Related papers (2022-06-01T14:14:37Z) - Verification-Aided Deep Ensemble Selection [4.290931412096984]
Deep neural networks (DNNs) have become the technology of choice for realizing a variety of complex tasks.
Even an imperceptible perturbation to a correctly classified input can lead to misclassification by a DNN.
This paper devises a methodology for identifying ensemble compositions that are less prone to simultaneous errors.
arXiv Detail & Related papers (2022-02-08T14:36:29Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - GraN: An Efficient Gradient-Norm Based Detector for Adversarial and
Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.