TranSegPGD: Improving Transferability of Adversarial Examples on
Semantic Segmentation
- URL: http://arxiv.org/abs/2312.02207v1
- Date: Sun, 3 Dec 2023 00:48:33 GMT
- Title: TranSegPGD: Improving Transferability of Adversarial Examples on
Semantic Segmentation
- Authors: Xiaojun Jia, Jindong Gu, Yihao Huang, Simeng Qin, Qing Guo, Yang Liu,
Xiaochun Cao
- Abstract summary: We propose an effective two-stage adversarial attack strategy to improve the transferability of adversarial examples on semantic segmentation.
The proposed adversarial attack method can achieve state-of-the-art performance.
- Score: 62.954089681629206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The transferability of adversarial examples on image classification has
been systematically explored, enabling adversarial examples to be generated in a
black-box setting. However, the transferability of adversarial examples on semantic
segmentation has been largely overlooked. In this paper, we propose an
effective two-stage adversarial attack strategy to improve the transferability
of adversarial examples on semantic segmentation, dubbed TranSegPGD.
Specifically, at the first stage, every pixel in an input image is divided into
different branches based on its adversarial property. Different branches are
assigned different weights for optimization to improve the adversarial
performance of all pixels. We assign high weights to the loss of the
hard-to-attack pixels so that every pixel is misclassified. At the second stage, the
pixels are divided into different branches based on their transferable property,
which is measured via the Kullback-Leibler divergence. Different branches are
assigned different weights for optimization to improve the transferability of
the adversarial examples. We assign high weights to the loss of the
high-transferability pixels to improve the transferability of adversarial
examples. Extensive experiments with various segmentation models are conducted
on PASCAL VOC 2012 and Cityscapes datasets to demonstrate the effectiveness of
the proposed method. The proposed adversarial attack method can achieve
state-of-the-art performance.
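Read literally, both stages amount to reweighting a per-pixel PGD loss. Below is a minimal, hedged sketch of that idea: the function names, the w_hi/w_lo weights, and the "low KL divergence between white-box and surrogate predictions means transferable" proxy are our illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def weighted_pgd(model, start, label, weight_fn, ref=None,
                 steps=5, alpha=2 / 255, eps=8 / 255):
    """One PGD phase in which the per-pixel CE loss is reweighted by weight_fn."""
    ref = start if ref is None else ref          # projection reference image
    adv = start.clone().detach()
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        logits = model(adv)                                     # (B, C, H, W)
        ce = F.cross_entropy(logits, label, reduction="none")   # (B, H, W)
        with torch.no_grad():
            w = weight_fn(adv, logits)                          # per-pixel weights
        loss = (w * ce).mean()
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()                # gradient ascent step
        adv = ref + (adv - ref).clamp(-eps, eps)                # L_inf projection
        adv = adv.clamp(0, 1)
    return adv.detach()

def transegpgd_sketch(white_box, surrogate, image, label, w_hi=0.7, w_lo=0.3):
    # Stage 1: up-weight hard-to-attack pixels (those the white-box model
    # still classifies correctly), pushing the attack to fool every pixel.
    def stage1_w(adv_x, logits):
        still_correct = (logits.argmax(1) == label).float()
        return still_correct * w_hi + (1 - still_correct) * w_lo

    adv = weighted_pgd(white_box, image, label, stage1_w)

    # Stage 2: up-weight "high-transferability" pixels. As a proxy (our
    # assumption), pixels where the white-box and surrogate predictive
    # distributions have low KL divergence are treated as transferable.
    def stage2_w(adv_x, logits):
        q = F.softmax(surrogate(adv_x), dim=1)
        kl = F.kl_div(F.log_softmax(logits, dim=1), q,
                      reduction="none").sum(dim=1)              # (B, H, W)
        transferable = (kl < kl.median()).float()
        return transferable * w_hi + (1 - transferable) * w_lo

    # Project stage 2 against the clean image so the total perturbation stays in eps.
    return weighted_pgd(white_box, adv, label, stage2_w, ref=image)
```

Any segmentation network with a `(B, C, H, W)` logit output can stand in for `white_box` and `surrogate`; the two-phase split simply changes which pixels dominate the gradient in each phase.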
Related papers
- Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack [51.16384207202798]
Vision-language pre-training models are vulnerable to multimodal adversarial examples (AEs).
Previous approaches augment image-text pairs to enhance diversity within the adversarial example generation process.
We propose sampling from adversarial evolution triangles composed of clean, historical, and current adversarial examples to enhance adversarial diversity.
arXiv Detail & Related papers (2024-11-04T23:07:51Z)
- Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models [17.958154849014576]
Adversarial attacks can be used to assess the robustness of large vision-language models (VLMs).
Previous transfer-based adversarial attacks incur high costs due to high iteration counts and complex method structure.
We propose AdvDiffVLM, which uses diffusion models to generate natural, unrestricted and targeted adversarial examples.
arXiv Detail & Related papers (2024-04-16T07:19:52Z)
- OT-Attack: Enhancing Adversarial Transferability of Vision-Language Models via Optimal Transport Optimization [65.57380193070574]
Vision-language pre-training models are vulnerable to multi-modal adversarial examples.
Recent works have indicated that leveraging data augmentation and image-text modal interactions can enhance the transferability of adversarial examples.
We propose an Optimal Transport-based Adversarial Attack, dubbed OT-Attack.
arXiv Detail & Related papers (2023-12-07T16:16:50Z)
- Structure Invariant Transformation for better Adversarial Transferability [9.272426833639615]
We propose a novel input transformation based attack, called Structure Invariant Attack (SIA).
SIA applies a random image transformation onto each image block to craft a set of diverse images for gradient calculation.
Experiments on the standard ImageNet dataset demonstrate that SIA exhibits much better transferability than the existing SOTA input transformation based attacks.
arXiv Detail & Related papers (2023-09-26T06:31:32Z)
- Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer [32.644062141738246]
A style transfer network can alter the distribution of low-level visual features in an image while preserving semantic content for humans.
We propose a novel attack method named Style Transfer Method (STM) that utilizes a proposed arbitrary style transfer network to transform the images into different domains.
Our proposed method can significantly improve the adversarial transferability on either normally trained models or adversarially trained models.
arXiv Detail & Related papers (2023-08-21T09:58:13Z)
- Adaptive Image Transformations for Transfer-based Adversarial Attack [73.74904401540743]
We propose a novel architecture, called Adaptive Image Transformation Learner (AITL).
Our elaborately designed learner adaptively selects the most effective combination of image transformations specific to the input image.
Our method significantly improves the attack success rates on both normally trained models and defense models under various settings.
arXiv Detail & Related papers (2021-11-27T08:15:44Z)
- Adversarial Semantic Data Augmentation for Human Pose Estimation [96.75411357541438]
We propose Semantic Data Augmentation (SDA), a method that augments images by pasting segmented body parts with various semantic granularity.
We also propose Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to dynamically predict tailored pasting configurations.
State-of-the-art results are achieved on challenging benchmarks.
arXiv Detail & Related papers (2020-08-03T07:56:04Z)
- Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations [83.60161052867534]
We analyze adversarial examples by disentangling the clean images and adversarial perturbations, and analyze their influence on each other.
Our results suggest a new perspective towards the relationship between images and universal perturbations.
We are the first to achieve the challenging task of a targeted universal attack without utilizing original training data.
arXiv Detail & Related papers (2020-07-13T05:00:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.