Attention-Guided Black-box Adversarial Attacks with Large-Scale
Multiobjective Evolutionary Optimization
- URL: http://arxiv.org/abs/2101.07512v1
- Date: Tue, 19 Jan 2021 08:48:44 GMT
- Title: Attention-Guided Black-box Adversarial Attacks with Large-Scale
Multiobjective Evolutionary Optimization
- Authors: Jie Wang, Zhaoxia Yin, Jing Jiang, and Yang Du
- Abstract summary: We propose an attention-guided black-box adversarial attack based on large-scale multiobjective evolutionary optimization.
By considering the spatial semantic information of images, we first use an attention map to determine which pixels to perturb.
Restricting perturbations to these pixels, rather than attacking the entire image, helps avoid the notorious curse of dimensionality.
- Score: 16.096277139911013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fooling deep neural networks (DNNs) with black-box optimization has
become a popular style of adversarial attack, since the structural prior knowledge
of DNNs is generally unavailable. Nevertheless, recent black-box adversarial attacks
may struggle to balance attack ability against the visual quality of the
generated adversarial examples (AEs) when tackling high-resolution images. In
this paper, we propose an attention-guided black-box adversarial attack based
on large-scale multiobjective evolutionary optimization, termed LMOA. First,
exploiting the spatial semantic information of images, we use an attention map
to determine which pixels to perturb. Restricting perturbations to these pixels,
rather than attacking the entire image, helps avoid the notorious curse of
dimensionality and thereby improves attack performance. Second, a large-scale
multiobjective evolutionary algorithm is employed to search over the reduced
pixel set in the salient region. Benefiting from this design, the generated AEs
can fool target DNNs while remaining imperceptible to human vision. Extensive
experiments on the ImageNet dataset verify the effectiveness of the proposed
LMOA. More importantly, LMOA is more competitive than existing black-box
adversarial attacks at generating high-resolution AEs with better visual quality.
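To make the first step concrete, here is a minimal sketch of attention-guided pixel selection. It assumes a precomputed 2-D attention map (e.g., from Grad-CAM or a similar saliency method) at the image's spatial resolution; the function name, the keep ratio, and the simple top-k rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def select_salient_pixels(attention_map: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Return flat indices of the most salient pixels.

    attention_map: 2-D array (H, W) of saliency scores, e.g., from Grad-CAM.
    keep_ratio: fraction of pixels to keep; restricting the search to this
                subset is what shrinks the dimensionality of the attack.
    """
    flat = attention_map.ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Indices of the top-k attention scores (the salient region).
    return np.argsort(flat)[-k:]
```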
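And a toy version of the second step: a small evolutionary loop that perturbs only the selected pixels, treating true-class confidence and perturbation norm as the two objectives. The population scheme, mutation scale, and greedy survivor selection below are simplifications for illustration; the paper uses a dedicated large-scale multiobjective evolutionary algorithm, and `predict` stands in for any black-box model that returns class probabilities.

```python
import numpy as np

def evolve_attack(image, pixel_idx, predict, true_label,
                  pop_size=20, generations=100, eps=16.0, seed=0):
    """Toy multiobjective evolutionary attack over a reduced pixel set.

    Objectives (both minimized): f1 = model confidence in the true class,
    f2 = L2 norm of the perturbation (a proxy for visual quality).
    """
    rng = np.random.default_rng(seed)
    flat = image.ravel().astype(np.float32)
    pop = rng.uniform(-eps, eps, size=(pop_size, pixel_idx.size)).astype(np.float32)

    def objectives(delta):
        x = flat.copy()
        x[pixel_idx] = np.clip(x[pixel_idx] + delta, 0.0, 255.0)
        probs = predict(x.reshape(image.shape))   # one black-box query
        return probs[true_label], float(np.linalg.norm(delta))

    for _ in range(generations):
        # Gaussian mutation; real large-scale MOEAs use grouped/decomposed variation.
        children = np.clip(pop + rng.normal(0.0, 2.0, pop.shape).astype(np.float32),
                           -eps, eps)
        merged = np.vstack([pop, children])
        scores = np.array([objectives(d) for d in merged])
        # Greedy survivor selection: sort by true-class confidence, break ties
        # by perturbation norm (a full non-dominated sort would replace this).
        order = np.lexsort((scores[:, 1], scores[:, 0]))
        pop = merged[order[:pop_size]]

    adv = flat.copy()
    adv[pixel_idx] = np.clip(adv[pixel_idx] + pop[0], 0.0, 255.0)
    return adv.reshape(image.shape)
```

Under these assumptions the two pieces compose as `evolve_attack(img, select_salient_pixels(att_map), model_predict, y)`, searching only the salient subset rather than all H x W pixels.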
Related papers
- Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack [51.16384207202798]
Vision-language pre-training models are vulnerable to multimodal adversarial examples (AEs).
Previous approaches augment image-text pairs to enhance diversity within the adversarial example generation process.
We propose sampling from adversarial evolution triangles composed of clean, historical, and current adversarial examples to enhance adversarial diversity.
arXiv Detail & Related papers (2024-11-04T23:07:51Z) - AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization [13.045125782574306]
This paper presents a novel adversarial attack strategy, AICAttack, designed to attack image captioning models through subtle perturbations on images.
Operating within a black-box attack scenario, our algorithm requires no access to the target model's architecture, parameters, or gradient information.
We demonstrate AICAttack's effectiveness through extensive experiments on benchmark datasets against multiple victim models.
arXiv Detail & Related papers (2024-02-19T08:27:23Z) - Dual Adversarial Resilience for Collaborating Robust Underwater Image
Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z) - Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face
Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z) - DI-AA: An Interpretable White-box Attack for Fooling Deep Neural
Networks [6.704751710867746]
White-box Adversarial Example (AE) attacks on Deep Neural Networks (DNNs) have a more powerful destructive capacity than black-box AE attacks.
We propose an interpretable white-box AE attack approach, DI-AA, which explores the application of the interpretable deep Taylor decomposition.
arXiv Detail & Related papers (2021-10-14T12:15:58Z) - Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
arXiv Detail & Related papers (2021-01-23T07:55:02Z) - PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack [37.15301296824337]
We propose a pixel correlation-based attentional black-box adversarial attack, termed as PICA.
PICA is more efficient at generating high-resolution adversarial examples than existing black-box attacks.
arXiv Detail & Related papers (2021-01-19T09:53:52Z) - SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image
Classifiers [82.19722134082645]
A stealthy projector-based adversarial attack is proposed in this paper.
We approximate the real project-and-capture operation using a deep neural network named PCNet.
Our experiments show that the proposed SPAA clearly outperforms other methods by achieving higher attack success rates.
arXiv Detail & Related papers (2020-12-10T18:14:03Z) - Perception Improvement for Free: Exploring Imperceptible Black-box
Adversarial Attacks on Image Classification [27.23874129994179]
White-box adversarial attacks can fool neural networks with small perturbations, especially on large images.
Keeping successful adversarial perturbations imperceptible is especially challenging for transfer-based black-box adversarial attacks.
We propose structure-aware adversarial attacks by generating adversarial images based on psychological perceptual models.
arXiv Detail & Related papers (2020-10-30T07:17:12Z) - Improving Query Efficiency of Black-box Adversarial Attack [75.71530208862319]
We propose a Neural Process based black-box adversarial attack (NP-Attack).
NP-Attack can greatly reduce query counts in the black-box setting.
arXiv Detail & Related papers (2020-09-24T06:22:56Z) - Watch out! Motion is Blurring the Vision of Your Deep Neural Networks [34.51270823371404]
State-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples with additive random-like noise perturbations.
We propose ABBA, a novel adversarial attack method that can generate visually natural motion-blurred adversarial examples.
A comprehensive evaluation on the NeurIPS'17 adversarial competition dataset demonstrates the effectiveness of ABBA.
arXiv Detail & Related papers (2020-02-10T02:33:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.