DeepCert: Verification of Contextually Relevant Robustness for Neural
Network Image Classifiers
- URL: http://arxiv.org/abs/2103.01629v1
- Date: Tue, 2 Mar 2021 10:41:16 GMT
- Title: DeepCert: Verification of Contextually Relevant Robustness for Neural
Network Image Classifiers
- Authors: Colin Paterson, Haoze Wu, John Grese, Radu Calinescu, Corina S.
Pasareanu and Clark Barrett
- Abstract summary: We introduce DeepCert, a tool-supported method for verifying the robustness of deep neural network (DNN) image classifiers to contextually relevant perturbations.
- Score: 16.893762648621266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce DeepCert, a tool-supported method for verifying the robustness
of deep neural network (DNN) image classifiers to contextually relevant
perturbations such as blur, haze, and changes in image contrast. While the
robustness of DNN classifiers has been the subject of intense research in
recent years, the solutions delivered by this research focus on verifying DNN
robustness to small perturbations in the images being classified, with
perturbation magnitude measured using established Lp norms. This is useful for
identifying potential adversarial attacks on DNN image classifiers, but cannot
verify DNN robustness to contextually relevant image perturbations, which are
typically not small when expressed with Lp norms. DeepCert addresses this
underexplored verification problem by supporting: (1) the encoding of real-world
image perturbations; (2) the systematic evaluation of contextually relevant DNN
robustness, using both testing and formal verification; (3) the generation of
contextually relevant counterexamples; and, through these, (4) the selection of
DNN image classifiers suitable for the operational context (i) envisaged when a
potentially safety-critical system is designed, or (ii) observed by a deployed
system. We demonstrate the effectiveness of DeepCert by showing how it can be
used to verify the robustness of DNN image classifiers built for two benchmark
datasets ('German Traffic Sign' and 'CIFAR-10') to multiple contextually
relevant perturbations.
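DeepCert's own perturbation encodings and verification back end are described in the paper rather than in this abstract; purely as a hedged sketch, the testing-based side of such an evaluation could look like the following, where the haze and contrast models, the classifier interface, and the grid sweep are all illustrative assumptions rather than DeepCert's actual implementation:

```python
import numpy as np

# Illustrative encodings of contextually relevant perturbations; the
# particular formulas below are assumptions for this sketch, not
# necessarily the encodings used by DeepCert.
def haze(image: np.ndarray, eps: float) -> np.ndarray:
    """Blend the image toward white; eps = 0 leaves the image unchanged."""
    return (1.0 - eps) * image + eps * np.ones_like(image)

def contrast_loss(image: np.ndarray, eps: float) -> np.ndarray:
    """Reduce contrast by blending toward the per-image mean intensity."""
    return (1.0 - eps) * image + eps * np.full_like(image, image.mean())

def robustness_threshold(classify, image, label, perturb,
                         grid=np.linspace(0.0, 1.0, 101)):
    """Testing-based estimate of the robustness threshold: sweep the
    perturbation level and return the smallest eps at which the predicted
    class changes, or None if classification is stable over the grid."""
    for eps in grid:
        if classify(perturb(image, eps)) != label:
            return float(eps)
    return None
```

The formal-verification mode would instead encode the perturbation parameter symbolically and pass the resulting constraints to an off-the-shelf DNN verifier (for example Marabou), proving robustness over whole intervals of eps rather than sampled values; counterexamples returned by the verifier then serve as the contextually relevant counterexamples of point (3).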
Related papers
- Data-driven Verification of DNNs for Object Recognition [0.20482269513546453]
The paper proposes a new testing approach for Deep Neural Networks (DNNs) using gradient-free optimization to find perturbation chains that successfully falsify the tested DNN.
Applying it to an image segmentation task of detecting railway tracks in images, we demonstrate that the approach can successfully identify weaknesses of the tested DNN.
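The summary does not name the optimizer or the perturbation chains; as a rough, hypothetical illustration of gradient-free falsification, a plain random search over the parameters of a chain of perturbations could be organised like this (the perturbation functions and the confidence oracle are placeholders):

```python
import random

def falsify(model_confidence, image, chain, budget=1000):
    """Random search (gradient-free) over the parameters of a chain of
    perturbations, looking for a setting that drives the model's
    confidence in the correct class below 0.5, i.e. a falsifying input.

    chain: list of (perturb_fn, (lo, hi)) parameter ranges.
    model_confidence: callable returning the correct-class confidence.
    """
    best = None
    for _ in range(budget):
        params = [random.uniform(lo, hi) for _, (lo, hi) in chain]
        perturbed = image
        for (fn, _), p in zip(chain, params):
            perturbed = fn(perturbed, p)
        conf = model_confidence(perturbed)
        if best is None or conf < best[0]:
            best = (conf, params)      # keep the most promising chain so far
        if conf < 0.5:                 # misclassification found
            return params
    return None                        # no falsifying chain within the budget
```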
arXiv Detail & Related papers (2024-07-17T11:30:02Z) - The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
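The paper's exact counting procedure is not described above; purely to fix ideas, the quantity being computed can be illustrated by a naive count over a discretized input box, which is exponentially expensive and not the approach the authors take:

```python
import itertools
import numpy as np

def count_violations(is_safe, lows, highs, steps=10):
    """Brute-force count of unsafe input configurations on a uniform grid
    over the box [lows, highs]; is_safe(x) encodes the safety property.
    This only illustrates what is being counted, not the exact
    #DNN-Verification procedure."""
    axes = [np.linspace(lo, hi, steps) for lo, hi in zip(lows, highs)]
    return sum(1 for x in itertools.product(*axes) if not is_safe(np.array(x)))
```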
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - Verification-Aided Deep Ensemble Selection [4.290931412096984]
Deep neural networks (DNNs) have become the technology of choice for realizing a variety of complex tasks.
Even an imperceptible perturbation to a correctly classified input can lead to misclassification by a DNN.
This paper devises a methodology for identifying ensemble compositions that are less prone to simultaneous errors.
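The selection criterion is not spelled out in this summary; one hedged way to operationalise "less prone to simultaneous errors" is to rank candidate member subsets by how often all members err on the same validation inputs (the names and data layout below are assumptions):

```python
from itertools import combinations
import numpy as np

def simultaneous_error_rate(predictions, labels, members):
    """Fraction of validation inputs on which *every* member of the
    candidate ensemble is wrong (a simultaneous error)."""
    wrong = np.stack([predictions[m] != labels for m in members])
    return wrong.all(axis=0).mean()

def select_ensemble(predictions, labels, size=3):
    """Pick the member subset with the lowest simultaneous error rate.
    predictions: dict mapping model name -> array of predicted labels."""
    return min(combinations(predictions, size),
               key=lambda members: simultaneous_error_rate(predictions, labels, members))
```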
arXiv Detail & Related papers (2022-02-08T14:36:29Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
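CONTRIQUE's exact objective is given in the paper rather than here; a minimal sketch of a contrastive pairwise objective in the NT-Xent style, which may differ from the authors' formulation, is:

```python
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(z1, z2, temperature=0.1):
    """NT-Xent-style loss for a batch of embedding pairs (z1[i], z2[i]):
    each pair is pulled together, all other batch items are pushed away.
    Illustrative only; the CONTRIQUE objective may differ."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # 2N x d embeddings
    sim = z @ z.t() / temperature                  # 2N x 2N similarities
    sim.fill_diagonal_(float('-inf'))              # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```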
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z) - D-Unet: A Dual-encoder U-Net for Image Splicing Forgery Detection and
Localization [108.8592577019391]
Image splicing forgery detection is a global binary classification task that distinguishes the tampered and non-tampered regions by image fingerprints.
We propose a novel network called dual-encoder U-Net (D-Unet) for image splicing forgery detection, which employs an unfixed encoder and a fixed encoder.
In an experimental comparison study of D-Unet and state-of-the-art methods, D-Unet outperformed the other methods in image-level and pixel-level detection.
arXiv Detail & Related papers (2020-12-03T10:54:02Z) - A Simple Framework to Quantify Different Types of Uncertainty in Deep
Neural Networks for Image Classification [0.0]
Quantifying uncertainty in a model's predictions is important as it enables the safety of an AI system to be increased.
This is crucial for applications where the cost of an error is high, such as in autonomous vehicle control, medical image analysis, financial estimations or legal fields.
We propose a complete framework to capture and quantify three known types of uncertainty in Deep Neural Networks for the task of image classification.
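The three uncertainty types are not enumerated in this summary; a common (and here purely illustrative) recipe is Monte Carlo dropout, where the predictive entropy captures total uncertainty and the spread across stochastic forward passes indicates the epistemic part:

```python
import numpy as np

def mc_dropout_uncertainty(predict_stochastic, x, samples=30):
    """predict_stochastic(x) returns a softmax vector with dropout active.
    Returns (mean prediction, predictive entropy, epistemic spread)."""
    probs = np.stack([predict_stochastic(x) for _ in range(samples)])
    mean = probs.mean(axis=0)
    entropy = -np.sum(mean * np.log(mean + 1e-12))   # total predictive uncertainty
    epistemic = probs.var(axis=0).sum()              # disagreement between samples
    return mean, entropy, epistemic
```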
arXiv Detail & Related papers (2020-11-17T15:36:42Z) - Background Adaptive Faster R-CNN for Semi-Supervised Convolutional
Object Detection of Threats in X-Ray Images [64.39996451133268]
We present a semi-supervised approach for threat recognition which we call Background Adaptive Faster R-CNN.
This approach is a training method for two-stage object detectors which uses Domain Adaptation methods from the field of deep learning.
Two domain discriminators, one for discriminating object proposals and one for image features, are adversarially trained to prevent encoding domain-specific information.
This can reduce threat detection false alarm rates by matching the statistics of extracted features from hand-collected backgrounds to real world data.
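Adversarial training of such domain discriminators is commonly implemented with a gradient reversal layer; the sketch below is a generic version of that idea (layer sizes and the two-domain setup are assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, flips the gradient sign in the backward
    pass, so the feature extractor learns to fool the discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    """Small classifier predicting the domain (e.g. hand-collected background
    vs. real-world scan) from a feature vector; sizes are illustrative."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 2))

    def forward(self, features, lam=1.0):
        return self.net(GradReverse.apply(features, lam))
```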
arXiv Detail & Related papers (2020-10-02T21:05:13Z) - Defending Adversarial Examples via DNN Bottleneck Reinforcement [20.08619981108837]
This paper presents a reinforcement scheme to alleviate the vulnerability of Deep Neural Networks (DNN) against adversarial attacks.
By reinforcing the bottleneck while preserving the information needed for classification, any redundant information, be it adversarial or not, should be removed from the latent representation.
In order to reinforce the information bottleneck, we introduce the multi-scale low-pass objective and multi-scale high-frequency communication for better frequency steering in the network.
arXiv Detail & Related papers (2020-08-12T11:02:01Z) - GraN: An Efficient Gradient-Norm Based Detector for Adversarial and
Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
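GraN's exact formulation is not given in this summary; the core idea of scoring an input by a gradient norm and thresholding it can be sketched as follows (the loss choice and threshold calibration are assumptions):

```python
import torch
import torch.nn.functional as F

def gradient_norm_score(model, x):
    """Norm of the loss gradient w.r.t. the model parameters, using the
    model's own prediction as the label; large values are treated as a
    sign of an adversarial or misclassified input (GraN-style, illustrative)."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))
    pseudo_label = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, pseudo_label)
    grads = torch.autograd.grad(loss, [p for p in model.parameters()
                                       if p.requires_grad])
    return torch.sqrt(sum((g ** 2).sum() for g in grads)).item()

def is_suspicious(model, x, threshold):
    """Flag inputs whose gradient-norm score exceeds a calibrated threshold."""
    return gradient_norm_score(model, x) > threshold
```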
arXiv Detail & Related papers (2020-04-20T10:09:27Z) - Towards Robust Classification with Image Quality Assessment [0.9213700601337386]
Deep convolutional neural networks (DCNNs) are vulnerable to adversarial examples and sensitive to perceptual quality as well as the acquisition condition of images.
In this paper, we investigate the connection between adversarial manipulation and image quality, then propose a protective mechanism.
Our method combines image quality assessment with knowledge distillation to detect input images that would trigger a DCNN to produce egregiously wrong results.
arXiv Detail & Related papers (2020-04-14T03:27:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.