Verification of Deep Convolutional Neural Networks Using ImageStars
- URL: http://arxiv.org/abs/2004.05511v2
- Date: Thu, 14 May 2020 20:02:06 GMT
- Title: Verification of Deep Convolutional Neural Networks Using ImageStars
- Authors: Hoang-Dung Tran, Stanley Bak, Weiming Xiang and Taylor T. Johnson
- Abstract summary: Convolutional Neural Networks (CNN) have redefined the state-of-the-art in many real-world applications.
CNNs are vulnerable to adversarial attacks, where slight changes to their inputs may lead to sharp changes in their output.
We describe a set-based framework that successfully deals with real-world CNNs, such as VGG16 and VGG19, that have high accuracy on ImageNet.
- Score: 10.44732293654293
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Convolutional Neural Networks (CNN) have redefined the state-of-the-art in
many real-world applications, such as facial recognition, image classification,
human pose estimation, and semantic segmentation. Despite their success, CNNs
are vulnerable to adversarial attacks, where slight changes to their inputs may
lead to sharp changes in their output in even well-trained networks. Set-based
analysis methods can detect or prove the absence of bounded adversarial
attacks, which can then be used to evaluate the effectiveness of neural network
training methodology. Unfortunately, existing verification approaches have
limited scalability in terms of the size of networks that can be analyzed.
In this paper, we describe a set-based framework that successfully deals with
real-world CNNs, such as VGG16 and VGG19, that have high accuracy on ImageNet.
Our approach is based on a new set representation called the ImageStar, which
enables efficient exact and over-approximative analysis of CNNs. ImageStars
perform efficient set-based analysis by combining operations on concrete images
with linear programming (LP). Our approach is implemented in a tool called NNV,
and can verify the robustness of VGG networks with respect to a small set of
input states, derived from adversarial attacks, such as the DeepFool attack.
The experimental results show that our approach is less conservative and faster
than existing zonotope methods, such as those used in DeepZ, and the polytope
method used in DeepPoly.
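As a rough illustration of the ImageStar idea described in the abstract, the sketch below represents a set of images as an anchor image plus generator images whose coefficients satisfy linear predicate constraints, and computes exact pixel ranges by solving two small LPs. The class and method names are illustrative assumptions, not the NNV API:

```python
import numpy as np
from scipy.optimize import linprog

class ImageStar:
    """Set of images {anchor + sum_i alpha_i * basis[i] : C @ alpha <= d}."""

    def __init__(self, anchor, basis, C, d):
        self.anchor = anchor  # (H, W) anchor image
        self.basis = basis    # (m, H, W) generator images
        self.C = C            # (k, m) linear predicate constraints on alpha
        self.d = d            # (k,) predicate bounds

    def pixel_range(self, i, j):
        """Exact min/max of pixel (i, j) over the set, via two small LPs."""
        c = self.basis[:, i, j]                 # pixel value is affine in alpha
        free = [(None, None)] * len(c)
        lo = linprog(c, A_ub=self.C, b_ub=self.d, bounds=free).fun
        hi = -linprog(-c, A_ub=self.C, b_ub=self.d, bounds=free).fun
        return self.anchor[i, j] + lo, self.anchor[i, j] + hi

# Example: a 2x2 image with one brightness generator and |alpha| <= 0.1.
star = ImageStar(
    anchor=np.full((2, 2), 0.5),
    basis=np.ones((1, 2, 2)),
    C=np.array([[1.0], [-1.0]]),
    d=np.array([0.1, 0.1]),
)
```

An affine layer (convolution or fully connected) maps an ImageStar to another ImageStar exactly, by applying the layer to the anchor and each generator; ReLU layers can be handled exactly by case splitting or conservatively by relaxation, which is the exact versus over-approximative trade-off the abstract refers to.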
Related papers
- Defending Spiking Neural Networks against Adversarial Attacks through Image Purification [20.492531851480784]
Spiking Neural Networks (SNNs) aim to bridge the gap between neuroscience and machine learning.
Like convolutional neural networks, SNNs are vulnerable to adversarial attacks.
We propose a biologically inspired methodology to enhance the robustness of SNNs.
arXiv Detail & Related papers (2024-04-26T00:57:06Z)
- Unfolding Local Growth Rate Estimates for (Almost) Perfect Adversarial Detection [22.99930028876662]
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks.
Current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system.
We propose a simple and light-weight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks.
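The detector summarized above hinges on estimating local intrinsic dimensionality. A common maximum-likelihood LID estimator from k-nearest-neighbor distances is sketched below; this is a generic estimator, not necessarily the exact variant used in that paper:

```python
import numpy as np

def lid_mle(knn_dists):
    """Maximum-likelihood LID estimate from sorted k-NN distances (ascending, positive)."""
    r_max = knn_dists[-1]
    # LID = -(1/k * sum_i log(r_i / r_max))^{-1}
    return -1.0 / np.mean(np.log(knn_dists / r_max))
```

A detector along these lines thresholds the LID estimate: adversarial examples tend to sit in regions of higher local intrinsic dimensionality than natural inputs.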
arXiv Detail & Related papers (2022-12-13T17:51:32Z)
- Impact of Scaled Image on Robustness of Deep Neural Networks [0.0]
Scaling raw images creates out-of-distribution data, which makes scaling a possible adversarial attack to fool the networks.
In this work, we propose a scaling-distortion dataset, ImageNet-CS, built by scaling a subset of the ImageNet Challenge dataset by different factors.
arXiv Detail & Related papers (2022-09-02T08:06:58Z)
- Blind Image Inpainting with Sparse Directional Filter Dictionaries for Lightweight CNNs [4.020698631876855]
We present a novel strategy to learn convolutional kernels that applies a filter dictionary whose elements are linearly combined with trainable weights.
Our results show not only an improved inpainting quality compared to conventional CNNs but also significantly faster network convergence within a lightweight network design.
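The filter-dictionary idea can be sketched simply: each convolution kernel is a linear combination of fixed dictionary filters with trainable coefficients, so only the coefficients are learned. The function name and shapes below are illustrative assumptions:

```python
import numpy as np

def build_kernel(dictionary, coeffs):
    """Combine fixed dictionary filters (m, k, k) with trainable coeffs (m,)
    into a single (k, k) convolution kernel."""
    return np.tensordot(coeffs, dictionary, axes=1)
```

Because the dictionary is fixed, the number of trainable parameters per kernel drops from k*k to m, which is where the lightweight network design comes from.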
arXiv Detail & Related papers (2022-05-13T12:44:44Z)
- Weakly-supervised fire segmentation by visualizing intermediate CNN layers [82.75113406937194]
Fire localization in images and videos is an important step for an autonomous system to combat fire incidents.
We consider weakly supervised segmentation of fire in images, in which only image labels are used to train the network.
We show that in the case of fire segmentation, which is a binary segmentation problem, the mean value of features in a mid-layer of classification CNN can perform better than conventional Class Activation Mapping (CAM) method.
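A minimal sketch of the channel-mean idea (illustrative only, omitting the upsampling back to input resolution that a real pipeline would need):

```python
import numpy as np

def mean_feature_map(features):
    """Average a mid-layer activation tensor (C, H, W) over channels
    into a (H, W) saliency map normalized to [0, 1]."""
    fmap = features.mean(axis=0)
    fmap -= fmap.min()
    peak = fmap.max()
    return fmap / peak if peak > 0 else fmap

def fire_mask(features, thresh=0.5):
    """Binarize the normalized channel-mean map into a segmentation mask."""
    return (mean_feature_map(features) >= thresh).astype(np.uint8)
```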
arXiv Detail & Related papers (2021-11-16T11:56:28Z)
- New SAR target recognition based on YOLO and very deep multi-canonical correlation analysis [0.1503974529275767]
This paper proposes a robust feature extraction method for SAR image target classification by adaptively fusing effective features from different CNN layers.
Experiments on the MSTAR dataset demonstrate that the proposed method outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-10-28T18:10:26Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting the network architecture to each domain task, along with weight finetuning, improves both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- Detecting Adversarial Examples by Input Transformations, Defense Perturbations, and Voting [71.57324258813674]
Convolutional neural networks (CNNs) have been shown to reach super-human performance in visual recognition tasks.
CNNs can easily be fooled by adversarial examples, i.e., maliciously-crafted images that force the networks to predict an incorrect output.
This paper extensively explores the detection of adversarial examples via image transformations and proposes a novel methodology.
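A simple majority-vote detector in the spirit of this summary might look as follows; `predict`, `transforms`, and `quorum` are illustrative placeholders, not the paper's exact procedure:

```python
def vote_detect(predict, x, transforms, quorum):
    """Flag x as adversarial if at least `quorum` transformed copies
    change the classifier's prediction relative to the original input."""
    base = predict(x)
    votes = sum(predict(t(x)) != base for t in transforms)
    return votes >= quorum
```

The intuition is that predictions on natural inputs tend to be stable under mild transformations, while predictions on adversarial inputs flip easily.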
arXiv Detail & Related papers (2021-01-27T14:50:41Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
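One plausible reading of "iterative mask discovery" is gradual magnitude pruning; the sketch below tightens a binary mask over several steps (a real pipeline such as ESPN's would retrain the network between steps, which is omitted here):

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Binary mask that prunes the smallest-magnitude `sparsity` fraction of weights."""
    k = int(weights.size * sparsity)              # number of weights to prune
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.abs(weights) > thresh

def iterative_mask(weights, target_sparsity, steps=5):
    """Ramp sparsity up to the target over several pruning steps."""
    mask = np.ones(weights.shape, dtype=bool)
    for s in range(1, steps + 1):
        mask &= magnitude_mask(weights * mask, target_sparsity * s / steps)
    return mask
```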
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.