PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack
- URL: http://arxiv.org/abs/2101.07538v1
- Date: Tue, 19 Jan 2021 09:53:52 GMT
- Title: PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack
- Authors: Jie Wang, Zhaoxia Yin, Jin Tang, Jing Jiang, and Bin Luo
- Abstract summary: We propose a pixel correlation-based attentional black-box adversarial attack, termed PICA.
PICA generates high-resolution adversarial examples more efficiently than existing black-box attacks.
- Score: 37.15301296824337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The studies on black-box adversarial attacks have become increasingly
prevalent due to the intractable acquisition of the structural knowledge of
deep neural networks (DNNs). However, the performance of emerging attacks is
negatively impacted when fooling DNNs tailored for high-resolution images. One
of the explanations is that these methods usually focus on attacking the entire
image, regardless of its spatial semantic information, and thereby encounter
the notorious curse of dimensionality. To this end, we propose a pixel
correlation-based attentional black-box adversarial attack, termed PICA.
First, we take only one of every two neighboring pixels in the salient region
as the target, leveraging the attention mechanism and the pixel correlation of
images to reduce the dimensionality of the black-box attack. A general
multiobjective evolutionary algorithm is then employed to traverse the reduced
pixels and generate perturbations that are imperceptible to human vision.
Extensive experiments verify the effectiveness of the proposed PICA on the
ImageNet dataset. More importantly, PICA is computationally more efficient at
generating high-resolution adversarial examples than existing black-box
attacks.
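The two-stage procedure described above can be made concrete with a short sketch. The Python (numpy) code below is a minimal illustration and not the authors' implementation: `attention_map` and `query_model` are hypothetical stand-ins for the paper's attention mechanism and black-box victim model, a grayscale (H, W) image is assumed, and a simple single-objective evolutionary loop replaces the general multiobjective algorithm PICA actually uses.

```python
import numpy as np

def perturbable_pixels(attention_map, threshold=0.5):
    """Select one of every two neighboring pixels inside the salient region.

    attention_map: (H, W) array scaled to [0, 1] (e.g. from a CAM method).
    Returns a boolean (H, W) mask of pixels eligible for perturbation.
    """
    h, w = attention_map.shape
    salient = attention_map >= threshold                 # attention-guided region
    checkerboard = (np.indices((h, w)).sum(axis=0) % 2 == 0)
    return salient & checkerboard                        # roughly halves the dimension

def apply_perturbation(image, mask, flat_delta):
    """Add a flat perturbation vector to the masked pixels of a (H, W) image."""
    adv = image.astype(np.float64).copy()
    adv[mask] += flat_delta
    return np.clip(adv, 0.0, 255.0)

def evolve_perturbation(image, mask, query_model, pop=20, gens=50, eps=16.0):
    """Toy evolutionary search over the reduced pixel set.

    query_model(adv) -> float, where lower means the model is closer to
    being fooled (e.g. the probability of the true class). PICA itself
    uses a multiobjective EA that also minimizes perceptibility; this
    loop only shows that the search dimension is mask.sum(), not H * W.
    """
    dim = int(mask.sum())
    best = np.zeros(dim)
    best_score = query_model(apply_perturbation(image, mask, best))
    for _ in range(gens):
        for _ in range(pop):
            cand = np.clip(best + 2.0 * np.random.randn(dim), -eps, eps)
            score = query_model(apply_perturbation(image, mask, cand))
            if score < best_score:
                best, best_score = cand, score
    return apply_perturbation(image, mask, best)
```

Restricting the search to the checkerboard-subsampled salient region is what keeps the query budget manageable at high resolutions: the evolutionary algorithm optimizes over `mask.sum()` variables rather than every pixel.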
Related papers
- AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization [13.045125782574306]
This paper presents a novel adversarial attack strategy, AICAttack, designed to attack image captioning models through subtle perturbations on images.
Operating within a black-box attack scenario, our algorithm requires no access to the target model's architecture, parameters, or gradient information.
We demonstrate AICAttack's effectiveness through extensive experiments on benchmark datasets against multiple victim models.
arXiv Detail & Related papers (2024-02-19T08:27:23Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method outputs visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments [75.58342268895564]
We use Deep Generative Networks (DGNs) with a novel training mechanism to eliminate the distribution gap.
The trained DGNs align the distribution of adversarial samples with clean ones for the target DNNs by translating pixel values.
Our strategy demonstrates its unique effectiveness and generality against black-box attacks.
arXiv Detail & Related papers (2022-12-11T01:51:31Z)
- Attention-Guided Black-box Adversarial Attacks with Large-Scale Multiobjective Evolutionary Optimization [16.096277139911013]
We propose an attention-guided black-box adversarial attack based on large-scale multiobjective evolutionary optimization.
By considering the spatial semantic information of images, we first take advantage of the attention map to determine the perturbed pixels.
Rather than attacking the entire image, restricting the perturbed pixels with the attention mechanism helps avoid the notorious curse of dimensionality.
arXiv Detail & Related papers (2021-01-19T08:48:44Z)
- Perception Improvement for Free: Exploring Imperceptible Black-box Adversarial Attacks on Image Classification [27.23874129994179]
White-box adversarial attacks can fool neural networks with small perturbations, especially for large images.
Keeping successful adversarial perturbations imperceptible is especially challenging for transfer-based black-box adversarial attacks.
We propose structure-aware adversarial attacks by generating adversarial images based on psychological perceptual models.
arXiv Detail & Related papers (2020-10-30T07:17:12Z)
- Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient-based white-box attack algorithms.
Our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a subset of them to update the misleading gradients (see the sketch after this list).
arXiv Detail & Related papers (2020-10-21T02:13:26Z)
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defended models.
On average, we significantly improve the success rate, by 9.2% for defended models and 3.7% for normally trained models.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)
- A Black-box Adversarial Attack Strategy with Adjustable Sparsity and Generalizability for Deep Image Classifiers [16.951363298896638]
Black-box adversarial perturbations are more practical for real-world applications.
We propose the DEceit algorithm for constructing effective universal pixel-restricted perturbations.
We find that perturbing only about 10% of the pixels in an image using DEceit achieves a commendable and highly transferable fooling rate.
arXiv Detail & Related papers (2020-04-24T19:42:00Z)
- Watch out! Motion is Blurring the Vision of Your Deep Neural Networks [34.51270823371404]
State-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples with additive random-noise-like perturbations.
We propose ABBA, a novel adversarial attack method that generates visually natural motion-blurred adversarial examples.
A comprehensive evaluation on the NeurIPS'17 adversarial competition dataset demonstrates the effectiveness of ABBA.
arXiv Detail & Related papers (2020-02-10T02:33:08Z)
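As referenced in the ADV-ReLU entry above, here is a hedged PyTorch sketch of the gradient-selection idea that entry describes: compute the gradient of the loss with respect to the input, map its values to magnitude scores, and keep only a top fraction when forming the attack direction. This is an illustration under assumptions, not the authors' code; `model`, `loss_fn`, and `keep_ratio` are placeholders.

```python
import torch

def selected_gradient(model, loss_fn, x, y, keep_ratio=0.3):
    """Keep only the highest-scoring entries of the input gradient.

    x: input batch, y: labels; keep_ratio is the fraction of gradient
    entries retained (an assumed knob, not a value from the paper).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    (grad,) = torch.autograd.grad(loss, x)       # d(loss) / d(input)
    scores = grad.abs().flatten()                # map gradient values to scores
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = scores.topk(k).values.min()      # cutoff for the top-k scores
    mask = (grad.abs() >= threshold).to(grad.dtype)
    return grad * mask                           # sparsified attack direction
```

Such a selected gradient could then replace the raw input gradient inside a standard FGSM- or PGD-style update step.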