GreedyPixel: Fine-Grained Black-Box Adversarial Attack Via Greedy Algorithm
- URL: http://arxiv.org/abs/2501.14230v2
- Date: Wed, 08 Oct 2025 13:27:03 GMT
- Title: GreedyPixel: Fine-Grained Black-Box Adversarial Attack Via Greedy Algorithm
- Authors: Hanrui Wang, Ching-Chun Chang, Chun-Shien Lu, Christopher Leckie, Isao Echizen
- Abstract summary: GreedyPixel is a new adversarial attack framework for deep neural networks. It combines a surrogate-derived pixel priority map with greedy, per-pixel optimization refined by query feedback. Our results show that GreedyPixel bridges the precision gap between white-box and black-box attacks.
- Score: 21.84393608348216
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Deep neural networks are highly vulnerable to adversarial examples: inputs with small, carefully crafted perturbations that cause misclassification, making adversarial attacks an essential tool for robustness evaluation. Existing black-box attacks fall into three categories: query-only, transfer-only, and query-and-transfer, and vary in perturbation pattern and optimization strategy. However, no prior method jointly achieves query-and-transfer guidance, pixel-wise sparsity, and training-free direct optimization, leaving a gap between black-box flexibility and white-box precision. We present GreedyPixel, a new attack framework that fills this gap by combining a surrogate-derived pixel priority map with greedy, per-pixel optimization refined by query feedback. This design reduces the exponential brute-force search space to a tractable linear procedure, guarantees monotonic loss decrease and convergence to a coordinate-wise optimum, and concentrates perturbations on robust, semantically meaningful pixels to improve perceptual quality. Extensive experiments on CIFAR-10 and ImageNet under both white-box and black-box settings demonstrate that GreedyPixel achieves state-of-the-art attack success rates and produces visually imperceptible perturbations. Our results show that GreedyPixel bridges the precision gap between white-box and black-box attacks and provides a practical framework for fine-grained robustness evaluation. The implementation is available at https://github.com/azrealwang/greedypixel.
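The greedy per-pixel loop the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a black-box loss oracle `query_loss` (lower is better for the attacker) and a precomputed `priority` map, e.g. from surrogate-model gradient magnitudes; all names and signatures are hypothetical.

```python
import numpy as np

def greedy_pixel_attack(x, query_loss, priority, eps=8 / 255, max_queries=1000):
    """Perturb pixels one at a time in priority order, keeping only changes
    that strictly decrease the attack loss (monotonic improvement)."""
    adv = x.copy()
    best = query_loss(adv)
    queries = 1
    # Visit the most important pixels first, per the surrogate priority map.
    order = np.argsort(priority.ravel())[::-1]
    for idx in order:
        if queries >= max_queries:
            break
        i, j = np.unravel_index(idx, x.shape[:2])
        # Try +eps then -eps at this pixel (all channels for an HxWxC image).
        for delta in (eps, -eps):
            cand = adv.copy()
            cand[i, j] = np.clip(x[i, j] + delta, 0.0, 1.0)
            loss = query_loss(cand)
            queries += 1
            if loss < best:  # accept only if the loss strictly improves
                best, adv = loss, cand
                break
    return adv
```

Because each pixel is visited once and each visit costs at most two queries, the search is linear in the number of pixels rather than exponential in the joint perturbation space, matching the tractability claim in the abstract.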
Related papers
- CosPGD: an efficient white-box adversarial attack for pixel-wise prediction tasks [16.10247754923311]
Adversarial attacks such as the seminal projected gradient descent (PGD) offer an effective means to evaluate a model's robustness.
We propose CosPGD, an attack that encourages more balanced errors over the entire image domain while increasing the attack's overall efficiency.
arXiv Detail & Related papers (2023-02-04T17:59:30Z) - Scale-free Photo-realistic Adversarial Pattern Attack [20.818415741759512]
Generative Adversarial Networks (GAN) can partially address this problem by synthesizing a more semantically meaningful texture pattern.
In this paper, we propose a scale-free generation-based attack algorithm that synthesizes semantically meaningful adversarial patterns globally to images with arbitrary scales.
arXiv Detail & Related papers (2022-08-12T11:25:39Z) - Optimizing One-pixel Black-box Adversarial Attacks [0.0]
The output of Deep Neural Networks (DNN) can be altered by a small perturbation of the input in a black box setting.
This work seeks to improve the One-pixel (few-pixel) black-box adversarial attacks to reduce the number of calls to the network under attack.
arXiv Detail & Related papers (2022-04-30T12:42:14Z) - Adaptive Perturbation for Adversarial Attack [50.77612889697216]
We propose a new gradient-based attack method for adversarial examples.
We use the exact gradient direction with a scaling factor for generating adversarial perturbations.
Our method exhibits higher transferability and outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-11-27T07:57:41Z) - Adversarial examples by perturbing high-level features in intermediate decoder layers [0.0]
Instead of perturbing pixels, we use an encoder-decoder representation of the input image and perturb intermediate layers in the decoder.
Our perturbation possesses semantic meaning, such as a longer beak or green tints.
We show that our method modifies key features such as edges and that defence techniques based on adversarial training are vulnerable to our attacks.
arXiv Detail & Related papers (2021-10-14T07:08:15Z) - Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power of Geometric Transformations [49.06194223213629]
Black-box adversarial attacks against video classification models have been largely understudied.
In this work, we demonstrate that such effective gradients can be searched for by parameterizing the temporal structure of the search space.
Our algorithm inherently leads to successful perturbations with surprisingly few queries.
arXiv Detail & Related papers (2021-10-05T05:05:59Z) - Self-Supervised Iterative Contextual Smoothing for Efficient Adversarial Defense against Gray- and Black-Box Attack [24.66829920826166]
We propose a novel input transformation based adversarial defense method against gray- and black-box attack.
Our defense is free of computationally expensive adversarial training, yet, can approach its robust accuracy via input transformation.
arXiv Detail & Related papers (2021-06-22T09:51:51Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine sparsity with an additional l_infty constraint on perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity and perturbation constraints in one framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Transferable Sparse Adversarial Attack [62.134905824604104]
We introduce a generator architecture to alleviate the overfitting issue and thus efficiently craft transferable sparse adversarial examples.
Our method achieves superior inference speed, 700× faster than other optimization-based methods.
arXiv Detail & Related papers (2021-05-31T06:44:58Z) - PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack [37.15301296824337]
We propose a pixel correlation-based attentional black-box adversarial attack, termed as PICA.
PICA is more efficient to generate high-resolution adversarial examples compared with the existing black-box attacks.
arXiv Detail & Related papers (2021-01-19T09:53:52Z) - Random Transformation of Image Brightness for Adversarial Attack [5.405413975396116]
Deep neural networks are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to the original images.
We propose an adversarial example generation method based on this phenomenon, which can be integrated with Fast Gradient Sign Method.
Our method has a higher success rate for black-box attacks than other attack methods based on data augmentation.
arXiv Detail & Related papers (2021-01-12T07:00:04Z) - Patch-wise++ Perturbation for Adversarial Targeted Attacks [132.58673733817838]
We propose a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability.
Specifically, we introduce an amplification factor to the step size in each iteration, and one pixel's overall gradient overflowing the ε-constraint is properly assigned to its surrounding regions.
Compared with the current state-of-the-art attack methods, we significantly improve the success rate by 35.9% for defense models and 32.7% for normally trained models.
arXiv Detail & Related papers (2020-12-31T08:40:42Z) - Essential Features: Reducing the Attack Surface of Adversarial Perturbations with Robust Content-Aware Image Preprocessing [5.831840281853604]
Adversaries can fool machine learning models into making incorrect predictions by adding perturbations to an image.
One approach to defending against such perturbations is to apply image preprocessing functions to remove the effects of the perturbation.
We propose a novel image preprocessing technique called Essential Features that transforms the image into a robust feature space.
arXiv Detail & Related papers (2020-12-03T04:40:51Z) - Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation [169.82760468633236]
We propose to build the pixel-level cycle association between source and target pixel pairs.
Our method can be trained end-to-end in one stage and introduces no additional parameters.
arXiv Detail & Related papers (2020-10-31T00:11:36Z) - Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient based white-box attack algorithms.
Our approach calculates the gradient of the loss function versus network input, maps the values to scores, and selects a part of them to update the misleading gradients.
arXiv Detail & Related papers (2020-10-21T02:13:26Z) - Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack towards mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z) - Projection & Probability-Driven Black-Box Attack [205.9923346080908]
Existing black-box attacks suffer from the need for excessive queries in the high-dimensional space.
We propose Projection & Probability-driven Black-box Attack (PPBA) to tackle this problem.
Our method requires at most 24% fewer queries with a higher attack success rate compared with state-of-the-art approaches.
arXiv Detail & Related papers (2020-05-08T03:37:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.