Structure-Preserving Progressive Low-rank Image Completion for Defending
Adversarial Attacks
- URL: http://arxiv.org/abs/2103.02781v1
- Date: Thu, 4 Mar 2021 01:24:15 GMT
- Title: Structure-Preserving Progressive Low-rank Image Completion for Defending
Adversarial Attacks
- Authors: Zhiqun Zhao, Hengyou Wang, Hao Sun and Zhihai He
- Abstract summary: Deep neural networks recognize objects by analyzing local image details and summarizing their information along the inference layers to derive the final decision.
Small but carefully crafted noise in the input images can accumulate along the network inference path and produce wrong decisions at the network output.
Human eyes, by contrast, recognize objects based on their global structure and semantic cues instead of local image textures.
- Score: 20.700098449823024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks recognize objects by analyzing local image
details and summarizing their information along the inference layers to derive
the final decision. Because of this, they are prone to adversarial attacks:
small but carefully crafted noise in the input images can accumulate along the
network inference path and produce wrong decisions at the network output. Human
eyes, on the other hand, recognize objects based on their global structure and
semantic cues instead of local image textures, which is why they can still
clearly recognize objects in images that have been heavily damaged by
adversarial attacks. This observation suggests a natural approach for defending
deep neural networks against adversarial attacks. In this work, we propose a
structure-preserving progressive low-rank image completion (SPLIC) method to
remove unneeded texture details from the input images and shift the bias of
deep neural networks towards global object structures and semantic cues. We
formulate the problem as a low-rank matrix completion problem with
progressively smoothed rank functions to avoid local minima during the
optimization process. Our experimental results demonstrate that the proposed
method successfully removes insignificant local image details while preserving
important global object structures. Against black-box, gray-box, and white-box
attacks, our method outperforms existing defense methods (by up to 12.6%) and
significantly improves the adversarial robustness of the network.
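
The SPLIC optimization itself is not reproduced here, but the underlying
low-rank completion idea can be sketched as iterative singular value
thresholding with a progressively annealed shrinkage threshold, loosely
mimicking a progressively smoothed rank surrogate. All function and parameter
names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink all singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def progressive_lowrank_completion(X, mask, taus=(50.0, 20.0, 5.0), iters=50):
    """Fill unobserved pixels (mask == 0) with a low-rank estimate.

    The threshold tau is annealed from large to small: early stages keep
    only the dominant components (global structure), later stages admit
    finer detail, while the observed pixels stay fixed throughout.
    """
    Y = X * mask
    for tau in taus:                       # coarse-to-fine schedule
        for _ in range(iters):
            Z = svt(Y, tau)                # low-rank estimate of the image
            Y = X * mask + Z * (1 - mask)  # re-impose observed pixels
    return Y
```

Applied per channel to an image whose mask drops randomly selected pixels,
this kind of completion discards high-frequency texture (where adversarial
noise tends to live) while the leading singular components preserve the
global object structure.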
Related papers
- ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection [70.11264880907652]
Recent camouflaged object detection (COD) attempts to segment objects that are visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios.
We propose an effective unified collaborative pyramid network that mimics human behavior when observing vague images, zooming in and out on camouflaged objects.
Our framework consistently outperforms existing state-of-the-art methods in image and video COD benchmarks.
arXiv Detail & Related papers (2023-10-31T06:11:23Z) - Object-Attentional Untargeted Adversarial Attack [11.800889173823945]
We propose an object-attentional adversarial attack method for untargeted attack.
Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection region from HVPNet.
Then, we perform an adversarial attack only on the detected object region by leveraging the Simple Black-box Adversarial Attack (SimBA).
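
As a rough illustration of confining a SimBA-style black-box search to an
object region, the sketch below perturbs only pixels inside a binary mask;
prob_fn (a class-probability oracle), the pixel basis, and eps are
illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def simba_masked(prob_fn, x, label, mask, eps=0.2, steps=1000, seed=0):
    """SimBA-style attack restricted to pixels where mask > 0.

    prob_fn(img) must return a class-probability vector; x is a 2-D
    image in [0, 1]. Each step tries +/-eps on one masked pixel and
    keeps the change only if the true-class probability drops.
    """
    rng = np.random.default_rng(seed)
    coords = np.argwhere(mask > 0)          # attackable pixels only
    rng.shuffle(coords)
    adv, p = x.copy(), prob_fn(x)[label]
    for i, j in coords[:steps]:
        for delta in (eps, -eps):
            cand = adv.copy()
            cand[i, j] = np.clip(cand[i, j] + delta, 0.0, 1.0)
            q = prob_fn(cand)[label]
            if q < p:                       # accept a successful step
                adv, p = cand, q
                break
    return adv
```

In the paper, the mask comes from intersecting YOLOv4 detections with HVPNet
salient regions; here it can be any binary array of the image's shape.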
arXiv Detail & Related papers (2022-10-16T07:45:13Z) - White Box Methods for Explanations of Convolutional Neural Networks in
Image Classification Tasks [3.3959642559854357]
Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance on the task of image classification.
Several approaches have been proposed to explain the reasoning behind a prediction made by a network.
We focus primarily on white box methods that leverage the information of the internal architecture of a network to explain its decision.
arXiv Detail & Related papers (2021-04-06T14:40:00Z) - Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z) - Rethinking of the Image Salient Object Detection: Object-level Semantic
Saliency Re-ranking First, Pixel-wise Saliency Refinement Latter [62.26677215668959]
We propose a lightweight, weakly supervised deep network to coarsely locate semantically salient regions.
We then fuse multiple off-the-shelf deep models on these semantically salient regions to perform pixel-wise saliency refinement.
Our method is simple yet effective, and it is the first attempt to treat salient object detection mainly as an object-level semantic re-ranking problem.
arXiv Detail & Related papers (2020-08-10T07:12:43Z) - Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z) - Reproduction of Lateral Inhibition-Inspired Convolutional Neural Network
for Visual Attention and Saliency Detection [0.0]
Neural networks can be effectively confused even by natural image examples.
I suspect that the classification of an object is strongly influenced by the background pixels on which the object is located.
I analyze this problem using saliency maps created by the LICNN network.
arXiv Detail & Related papers (2020-05-05T13:55:47Z) - Verification of Deep Convolutional Neural Networks Using ImageStars [10.44732293654293]
Convolutional Neural Networks (CNNs) have redefined the state-of-the-art in many real-world applications.
CNNs are vulnerable to adversarial attacks, where slight changes to their inputs may lead to sharp changes in their output.
We describe a set-based framework that successfully deals with real-world CNNs, such as VGG16 and VGG19, that have high accuracy on ImageNet.
arXiv Detail & Related papers (2020-04-12T00:37:21Z) - Towards Achieving Adversarial Robustness by Enforcing Feature
Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly.
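
For context on the bit-plane decomposition this relies on, a minimal sketch
of splitting a uint8 image into bit planes and keeping only the high-order
ones is shown below; the training scheme itself (consistency across
quantizations) is not reproduced, and the helper names are assumptions.

```python
import numpy as np

def bit_planes(img):
    """Split a uint8 image into 8 binary planes (index 7 = MSB)."""
    return [((img >> b) & 1).astype(np.uint8) for b in range(8)]

def keep_high_bits(img, keep=4):
    """Zero out the low bit planes, retaining coarse global structure."""
    shift = 8 - keep
    return ((img >> shift) << shift).astype(np.uint8)
```

Keeping only the higher planes yields the coarse impression the networks are
trained to form first; the discarded low planes carry the fine texture that
the refinement stage (and much adversarial noise) operates on.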
arXiv Detail & Related papers (2020-04-01T09:31:10Z) - Exploiting Semantics for Face Image Deblurring [121.44928934662063]
We propose an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks.
We incorporate face semantic labels as input priors and propose an adaptive structural loss to regularize facial local structures.
The proposed method restores sharp images with more accurate facial features and details.
arXiv Detail & Related papers (2020-01-19T13:06:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.