Watch out! Motion is Blurring the Vision of Your Deep Neural Networks
- URL: http://arxiv.org/abs/2002.03500v3
- Date: Mon, 9 Nov 2020 05:52:03 GMT
- Title: Watch out! Motion is Blurring the Vision of Your Deep Neural Networks
- Authors: Qing Guo and Felix Juefei-Xu and Xiaofei Xie and Lei Ma and Jian Wang
and Bing Yu and Wei Feng and Yang Liu
- Abstract summary: State-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples with additive, random-like noise perturbations.
We propose a novel adversarial attack method that can generate visually natural motion-blurred adversarial examples.
A comprehensive evaluation on the NeurIPS'17 adversarial competition dataset demonstrates the effectiveness of ABBA.
- Score: 34.51270823371404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art deep neural networks (DNNs) are vulnerable to
adversarial examples with additive, random-like noise perturbations. While such
examples are rarely found in the physical world, the image blur caused by
object motion commonly occurs in practice, which makes its study especially
important for widely adopted real-time image processing tasks (e.g., object
detection and tracking). In this paper, we take the first step toward
comprehensively investigating the potential hazards that motion-induced blur
poses to DNNs. We propose a novel adversarial attack method that generates
visually natural motion-blurred adversarial examples, named the motion-based
adversarial blur attack (ABBA). To this end, we first formulate the
kernel-prediction-based attack, in which an input image is convolved with
kernels in a pixel-wise way and misclassification is achieved by tuning the
kernel weights. To generate visually more natural and plausible examples, we
further propose saliency-regularized adversarial kernel prediction, where the
salient region serves as the moving object and the predicted kernels are
regularized to produce natural visual effects. The attack is further enhanced
by adaptively tuning the translations of the object and the background. A
comprehensive evaluation on the NeurIPS'17 adversarial competition dataset
demonstrates the effectiveness of ABBA across various kernel sizes,
translations, and regions. An in-depth study further confirms that our method
penetrates state-of-the-art GAN-based deblurring mechanisms more effectively
than other blurring methods. We release the code at
https://github.com/tsingqguo/ABBA.
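
Below is a minimal, hedged Python/PyTorch sketch of the kernel-prediction-based
blur attack described above: every pixel is blurred by its own K-tap kernel
along a single motion direction, and the per-pixel kernel weights are tuned by
gradient ascent on the classification loss. This is an illustration under
stated assumptions, not the authors' released implementation; the kernel size
K, step count, learning rate, and the resnet50 classifier in the usage comment
are all illustrative, and ABBA's saliency regularization and adaptive
object/background translations are omitted.

import torch
import torch.nn.functional as F
import torchvision.models as models

def pixelwise_motion_blur(image, weights, direction=(0, 1)):
    # image: (1, C, H, W); weights: (1, K, H, W).
    # Softmax over K makes every pixel's kernel sum to one, so the output is
    # a convex combination of shifted copies of the input.
    w = torch.softmax(weights, dim=1)
    blurred = torch.zeros_like(image)
    dy, dx = direction
    for k in range(weights.shape[1]):
        # Tap k uses the image shifted by k pixels along the motion direction.
        # (torch.roll wraps at the borders; a padded shift would be more
        # faithful, but this keeps the sketch short.)
        shifted = torch.roll(image, shifts=(k * dy, k * dx), dims=(2, 3))
        blurred = blurred + w[:, k:k + 1] * shifted  # broadcast over channels
    return blurred

def blur_attack(model, image, label, K=15, steps=50, lr=0.1):
    # Tune the per-pixel kernel weights so the blurred image is misclassified
    # while remaining a plausible motion blur.
    _, _, H, W = image.shape
    weights = torch.zeros(1, K, H, W, requires_grad=True)
    optimizer = torch.optim.Adam([weights], lr=lr)
    for _ in range(steps):
        adversarial = pixelwise_motion_blur(image, weights)
        # Ascend the cross-entropy loss of the true label.
        loss = -F.cross_entropy(model(adversarial), label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return pixelwise_motion_blur(image, weights).detach()

# Hypothetical usage with a pretrained classifier (image: a (1, 3, H, W) tensor
# normalized for the model, label: a LongTensor of shape (1,)):
#   model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
#   adv = blur_attack(model, image, label)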
Related papers
- Attack Anything: Blind DNNs via Universal Background Adversarial Attack [17.73886733971713]
It has been widely substantiated that deep neural networks (DNNs) are susceptible to adversarial perturbations.
We propose a background adversarial attack framework to attack anything, by which the attack efficacy generalizes well across diverse objects, models, and tasks.
We conduct comprehensive and rigorous experiments in both digital and physical domains across various objects, models, and tasks, demonstrating the effectiveness of the proposed attack-anything method.
arXiv Detail & Related papers (2024-08-17T12:46:53Z)
- NoiseCAM: Explainable AI for the Boundary Between Noise and Adversarial Attacks [21.86821880164293]
Adversarial attacks can easily mislead a neural network and lead to wrong decisions.
In this paper, we use the gradient class activation map (GradCAM) to analyze the behavior deviation of the VGG-16 network.
We also propose a novel NoiseCAM algorithm that integrates information from globally weighted and pixel-level weighted class activation maps.
arXiv Detail & Related papers (2023-03-09T22:07:41Z)
- Object-Attentional Untargeted Adversarial Attack [11.800889173823945]
We propose an object-attentional adversarial attack method for untargeted attack.
Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection region from HVPNet.
Then, we perform an adversarial attack only on the detected object region by leveraging the Simple Black-box Adversarial Attack (SimBA).
arXiv Detail & Related papers (2022-10-16T07:45:13Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking [70.14487738649373]
Adversarial attacks arise due to the vulnerability of deep neural networks when perceiving input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking.
We validate the proposed IoU attack on state-of-the-art deep trackers.
arXiv Detail & Related papers (2021-03-27T16:20:32Z)
- PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack [37.15301296824337]
We propose a pixel correlation-based attentional black-box adversarial attack, termed as PICA.
PICA is more efficient to generate high-resolution adversarial examples compared with the existing black-box attacks.
arXiv Detail & Related papers (2021-01-19T09:53:52Z)
- Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring [7.540176446791261]
We propose an adversarial attack named peak suppression (PS) that suppresses the values of peak elements in the features of the data.
Experiment results show that PS and well-designed Gaussian blurring can form adversarial attacks that completely change the classification results of a well-trained target network.
arXiv Detail & Related papers (2020-12-21T15:47:14Z)
- Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient-based white-box attack algorithms.
Our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a part of them to update the misleading gradients.
arXiv Detail & Related papers (2020-10-21T02:13:26Z)
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack towards mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)
- Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises [87.53808756910452]
A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers.
Our method has good transferability and is able to deceive other top-performance trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.
arXiv Detail & Related papers (2020-03-21T07:13:40Z)
- Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior [63.11478060678794]
We propose an effective motion-excited sampler to obtain motion-aware noise prior.
By using the sparked prior in gradient estimation, we can successfully attack a variety of video classification models with fewer queries.
arXiv Detail & Related papers (2020-03-17T10:54:12Z)