Boosting the Transferability of Adversarial Examples via Local Mixup and Adaptive Step Size
- URL: http://arxiv.org/abs/2401.13205v1
- Date: Wed, 24 Jan 2024 03:26:34 GMT
- Title: Boosting the Transferability of Adversarial Examples via Local Mixup and Adaptive Step Size
- Authors: Junlin Liu and Xinchen Lyu
- Abstract summary: Adversarial examples are a critical security threat to various visual applications, where injected human-imperceptible perturbations can mislead the model's output.
Existing input-diversity-based methods adopt different image transformations, but may be inefficient due to insufficient input diversity and an identical perturbation step size.
This paper proposes a black-box adversarial generative framework by jointly designing enhanced input diversity and adaptive step sizes.
- Score: 5.04766995613269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples are a critical security threat to various visual
applications, where injected human-imperceptible perturbations can mislead the
model's output. Generating transferable adversarial examples in the black-box
setting is crucial but challenging in practice. Existing input-diversity-based
methods adopt different image transformations, but may be inefficient due to
insufficient input diversity and an identical perturbation step size. Motivated
by the fact that different image regions carry different weights in
classification, this paper proposes a black-box adversarial generative
framework that jointly designs enhanced input diversity and adaptive step
sizes. We design local mixup to randomly mix a group of transformed adversarial
images, strengthening the input diversity. For precise adversarial generation,
we project the perturbation into the $\tanh$ space to relax the boundary
constraint. Moreover, the step sizes of different regions can be dynamically
adjusted by integrating a second-order momentum. Extensive experiments on
ImageNet validate that our framework achieves superior transferability
compared to state-of-the-art baselines.
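The abstract names three mechanisms: local mixup over a group of transformed copies, a $\tanh$-space reparameterization of the perturbation, and per-region step sizes driven by a second-order momentum. The sketch below is a hypothetical PyTorch rendering of those ideas, not the authors' released code; `model`, all hyperparameters, and the helper `random_resize_pad` are placeholder assumptions, and inputs are taken to be ImageNet-style batches in [0, 1].

```python
# Hypothetical sketch of the three mechanisms above (not the authors' code).
# Assumptions: `model` is a differentiable surrogate classifier, `x` is a
# batch in [0, 1] of shape (N, 3, 224, 224), `y` holds the true labels.
import torch
import torch.nn.functional as F

def random_resize_pad(x, low=0.85):
    # DI-style input transformation: random downscale, then random zero-pad
    # back to the original spatial size.
    _, _, h, w = x.shape
    s = int(h * (low + (1 - low) * torch.rand(1).item()))
    small = F.interpolate(x, size=(s, s), mode="bilinear", align_corners=False)
    top = torch.randint(0, h - s + 1, (1,)).item()
    left = torch.randint(0, w - s + 1, (1,)).item()
    return F.pad(small, (left, w - s - left, top, h - s - top))

def local_mixup(views, lam=0.5, patch=56):
    # Mix a random square region of each view with the same region of a
    # randomly chosen partner view, leaving the rest of the image intact.
    n, (_, _, h, w) = len(views), views[0].shape
    mixed = []
    for v in views:
        partner = views[torch.randint(n, (1,)).item()]
        r0 = torch.randint(0, h - patch + 1, (1,)).item()
        c0 = torch.randint(0, w - patch + 1, (1,)).item()
        out = v.clone()
        out[..., r0:r0 + patch, c0:c0 + patch] = (
            lam * v[..., r0:r0 + patch, c0:c0 + patch]
            + (1.0 - lam) * partner[..., r0:r0 + patch, c0:c0 + patch])
        mixed.append(out)
    return mixed

def attack(model, x, y, eps=8 / 255, lr=0.1, steps=10, n_views=4):
    # Reparameterize: x_adv = clamp(x + eps * tanh(w)), so the L-inf budget
    # holds by construction instead of via hard clipping at every step.
    w = torch.zeros_like(x, requires_grad=True)
    m = torch.zeros_like(x)  # first-order momentum
    s = torch.zeros_like(x)  # second-order momentum -> per-region step size
    for _ in range(steps):
        x_adv = (x + eps * torch.tanh(w)).clamp(0.0, 1.0)
        views = local_mixup([random_resize_pad(x_adv) for _ in range(n_views)])
        loss = sum(F.cross_entropy(model(v), y) for v in views)
        grad, = torch.autograd.grad(loss, w)
        m = 0.9 * m + grad
        s = 0.99 * s + 0.01 * grad ** 2
        with torch.no_grad():
            # Pixels with a small accumulated second moment take larger steps.
            w += lr * m / (s.sqrt() + 1e-8)
    return (x + eps * torch.tanh(w)).clamp(0.0, 1.0).detach()
```

The $\tanh$ reparameterization keeps the perturbation inside the $L_\infty$ ball by construction, and the Adam-style second moment gives each pixel, and hence each region, its own effective step size.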
Related papers
- Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack [51.16384207202798]
Vision-language pre-training models are vulnerable to multimodal adversarial examples (AEs).
Previous approaches augment image-text pairs to enhance diversity within the adversarial example generation process.
We propose sampling from adversarial evolution triangles composed of clean, historical, and current adversarial examples to enhance adversarial diversity.
arXiv Detail & Related papers (2024-11-04T23:07:51Z)
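As a rough illustration of the sampling idea in the entry above (a toy sketch under an assumed uniform weighting, not the paper's exact scheme):

```python
# Toy sketch of evolution-triangle sampling; the weighting is an assumption.
import torch

def sample_evolution_triangle(x_clean, x_hist, x_curr):
    # Dirichlet(1, 1, 1) is uniform over the 2-simplex, so this draws a
    # uniformly random convex combination of the three examples.
    a, b, c = torch.distributions.Dirichlet(torch.ones(3)).sample()
    return a * x_clean + b * x_hist + c * x_curr
```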
- TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation [62.954089681629206]
We propose an effective two-stage adversarial attack strategy to improve the transferability of adversarial examples on semantic segmentation.
The proposed adversarial attack method can achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-12-03T00:48:33Z)
- Rethinking Mixup for Improving the Adversarial Transferability [6.2867306093287905]
We propose a new input transformation-based attack called Mixing the Image but Separating the gradienT (MIST).
MIST randomly mixes the input image with a randomly shifted image and separates the gradient of each loss item for each mixed image.
Experiments on the ImageNet dataset demonstrate that MIST outperforms existing SOTA input transformation-based attacks.
arXiv Detail & Related papers (2023-11-28T03:10:44Z)
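A minimal sketch of the MIST recipe summarized above; the copy count, mixing ratio, and shift range are invented placeholders, not the paper's settings:

```python
# Hypothetical reading of the MIST summary: mix the input with a randomly
# shifted copy, keep one loss item per mixed image, average the gradients.
import torch
import torch.nn.functional as F

def mist_gradient(model, x, y, n_copies=5, lam=0.6, max_shift=32):
    grads = torch.zeros_like(x)
    for _ in range(n_copies):
        dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        shifted = torch.roll(x, shifts=(dy, dx), dims=(2, 3))
        mixed = (lam * x + (1.0 - lam) * shifted).detach().requires_grad_(True)
        loss = F.cross_entropy(model(mixed), y)  # separate loss per mixed image
        grads += torch.autograd.grad(loss, mixed)[0]
    return grads / n_copies
```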
- Structure Invariant Transformation for better Adversarial Transferability [9.272426833639615]
We propose a novel input transformation based attack, called Structure Invariant Attack (SIA).
SIA applies a random image transformation onto each image block to craft a set of diverse images for gradient calculation.
Experiments on the standard ImageNet dataset demonstrate that SIA exhibits much better transferability than the existing SOTA input transformation based attacks.
arXiv Detail & Related papers (2023-09-26T06:31:32Z)
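The block-wise idea in the SIA entry above might look like the following toy sketch; the grid size and the transform pool are assumptions, not the official implementation:

```python
# Hypothetical sketch of a structure-preserving block-wise transform: the
# grid layout is kept, but each block gets an independently chosen transform.
import random
import torch

def blockwise_transform(x, grid=3):
    _, _, h, w = x.shape
    bh, bw = h // grid, w // grid
    pool = [
        lambda b: b,                               # identity
        lambda b: torch.flip(b, dims=(3,)),        # horizontal flip
        lambda b: torch.rot90(b, 2, dims=(2, 3)),  # 180-degree rotation
        lambda b: b + 0.05 * torch.randn_like(b),  # additive Gaussian noise
    ]
    out = x.clone()
    for i in range(grid):
        for j in range(grid):
            r, c = i * bh, j * bw
            out[..., r:r + bh, c:c + bw] = random.choice(pool)(
                out[..., r:r + bh, c:c + bw])
    return out
```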
- Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer [32.644062141738246]
A style transfer network can alter the distribution of low-level visual features in an image while preserving semantic content for humans.
We propose a novel attack method named Style Transfer Method (STM) that utilizes a proposed arbitrary style transfer network to transform the images into different domains.
Our proposed method can significantly improve the adversarial transferability on either normally trained models or adversarially trained models.
arXiv Detail & Related papers (2023-08-21T09:58:13Z)
- Auto-regressive Image Synthesis with Integrated Quantization [55.51231796778219]
This paper presents a versatile framework for conditional image generation.
It incorporates the inductive bias of CNNs and powerful sequence modeling of auto-regression.
Our method achieves superior diverse image generation performance as compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-21T22:19:17Z)
- Adaptive Image Transformations for Transfer-based Adversarial Attack [73.74904401540743]
We propose a novel architecture, called Adaptive Image Transformation Learner (AITL).
Our elaborately designed learner adaptively selects the most effective combination of image transformations specific to the input image.
Our method significantly improves the attack success rates on both normally trained models and defense models under various settings.
arXiv Detail & Related papers (2021-11-27T08:15:44Z)
- Exploring Transferable and Robust Adversarial Perturbation Generation from the Perspective of Network Hierarchy [52.153866313879924]
The transferability and robustness of adversarial examples are two practical yet important properties for black-box adversarial attacks.
We propose a transferable and robust adversarial generation (TRAP) method.
Our TRAP achieves impressive transferability and high robustness against certain interferences.
arXiv Detail & Related papers (2021-08-16T11:52:41Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
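The idea in the AdvBN entry above, perturbing feature statistics rather than pixels, can be sketched as follows; `d_mean` and `d_std` stand in for the learned worst-case shifts, and the normalization details are assumptions rather than the released layer:

```python
# Hypothetical sketch: normalize activations, then re-scale and re-shift
# with perturbed per-channel statistics instead of perturbing pixels.
import torch

def perturb_feature_stats(feat, d_mean, d_std):
    # feat: (N, C, H, W); d_mean, d_std: (C,) worst-case statistic shifts.
    mu = feat.mean(dim=(0, 2, 3), keepdim=True)
    sigma = feat.std(dim=(0, 2, 3), keepdim=True) + 1e-5
    normalized = (feat - mu) / sigma
    return (normalized * sigma * (1.0 + d_std.view(1, -1, 1, 1))
            + mu + d_mean.view(1, -1, 1, 1))
```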
- Interlayer and Intralayer Scale Aggregation for Scale-invariant Crowd Counting [19.42355176075503]
Single-column Scale-invariant Network (ScSiNet) is presented in this paper.
It extracts sophisticated scale-invariant features via the combination of interlayer multi-scale integration and a novel intralayer scale-invariant transformation (SiT).
Experiments on public datasets demonstrate that the proposed method consistently outperforms state-of-the-art approaches in counting accuracy and scale-invariant property.
arXiv Detail & Related papers (2020-05-25T06:59:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.