Improving the Transferability of Adversarial Samples by Path-Augmented
Method
- URL: http://arxiv.org/abs/2303.15735v1
- Date: Tue, 28 Mar 2023 05:14:04 GMT
- Title: Improving the Transferability of Adversarial Samples by Path-Augmented
Method
- Authors: Jianping Zhang, Jen-tse Huang, Wenxuan Wang, Yichen Li, Weibin Wu,
Xiaosen Wang, Yuxin Su, Michael R. Lyu
- Abstract summary: We propose the Path-Augmented Method (PAM) to overcome the pitfall of augmenting semantics-inconsistent images.
PAM can achieve an improvement of over 4.8% on average compared with the state-of-the-art baselines in terms of the attack success rates.
- Score: 38.41363108018462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have achieved unprecedented success on diverse vision
tasks. However, they are vulnerable to adversarial noise that is imperceptible
to humans. This phenomenon negatively affects their deployment in real-world
scenarios, especially security-related ones. To evaluate the robustness of a
target model in practice, transfer-based attacks craft adversarial samples with
a local model and have attracted increasing attention from researchers due to
their high efficiency. The state-of-the-art transfer-based attacks are
generally based on data augmentation, which typically augments multiple
training images from a linear path when learning adversarial samples. However,
such methods select the image augmentation path heuristically and may augment
images that are semantics-inconsistent with the target images, which harms the
transferability of the generated adversarial samples. To overcome the pitfall,
we propose the Path-Augmented Method (PAM). Specifically, PAM first constructs
a candidate augmentation path pool. It then settles the employed augmentation
paths during adversarial sample generation with greedy search. Furthermore, to
avoid augmenting semantics-inconsistent images, we train a Semantics Predictor
(SP) to constrain the length of the augmentation path. Extensive experiments
confirm that PAM can achieve an improvement of over 4.8% on average compared
with the state-of-the-art baselines in terms of the attack success rates.
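To make the mechanism concrete, here is a minimal, illustrative sketch of the core idea, written under stated assumptions rather than as the authors' released implementation: an MI-FGSM-style iterative attack whose gradient is averaged over images sampled along augmentation paths that interpolate from the adversarial image toward baseline images, with a truncated path length standing in for the constraint the Semantics Predictor (SP) would impose. The baseline images, the hyper-parameter names (path_len, num_points), and the momentum update are illustrative assumptions.

```python
import torch

def path_augmented_attack(model, loss_fn, x, y, baselines, path_len=0.8,
                          num_points=5, eps=16 / 255, steps=10, mu=1.0):
    """Iterative transfer attack whose gradient is averaged along augmentation paths.

    baselines: tensors shaped like x (e.g. an all-zero image, a noise image); each
               defines one candidate augmentation path from the image toward it.
    path_len:  fraction of each path that is actually used, so augmented images stay
               semantics-consistent with the original (the role PAM assigns to the SP).
    """
    alpha = eps / steps                                   # per-step size
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x)
        for b in baselines:                               # one selected augmentation path
            for i in range(num_points):                   # sample along the truncated path
                t = path_len * i / num_points
                x_aug = (1.0 - t) * x_adv + t * b
                loss = loss_fn(model(x_aug), y)
                grad = grad + torch.autograd.grad(loss, x_adv)[0]
        grad = grad / (len(baselines) * num_points)
        momentum = mu * momentum + grad / (grad.abs().mean() + 1e-12)  # MI-FGSM-style
        x_adv = x_adv.detach() + alpha * momentum.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)     # project into the L_inf ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
    return x_adv
```

In the full method, the employed paths would themselves be selected from the candidate pool by greedy search, and the usable length of each path would be predicted by the trained Semantics Predictor rather than fixed by a single path_len value.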
Related papers
- PEAS: A Strategy for Crafting Transferable Adversarial Examples [2.9815109163161204]
Black box attacks pose a significant threat to machine learning systems.
Adversarial examples generated with a substitute model often suffer from limited transferability to the target model.
We propose a novel strategy called PEAS that can boost the transferability of existing black box attacks.
arXiv Detail & Related papers (2024-10-20T14:55:08Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models [17.958154849014576]
Adversarial attacks can be used to assess the robustness of large vision-language models (VLMs).
Previous transfer-based adversarial attacks incur high costs due to large iteration counts and complex method structures.
We propose AdvDiffVLM, which uses diffusion models to generate natural, unrestricted and targeted adversarial examples.
arXiv Detail & Related papers (2024-04-16T07:19:52Z)
- LFAA: Crafting Transferable Targeted Adversarial Examples with
Low-Frequency Perturbations [25.929492841042666]
We present a novel approach to generate transferable targeted adversarial examples.
We exploit the vulnerability of deep neural networks to perturbations on high-frequency components of images.
Our proposed approach significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-10-31T04:54:55Z)
- Improving the Transferability of Adversarial Attacks on Face Recognition
with Beneficial Perturbation Feature Augmentation [26.032639566914114]
Face recognition (FR) models can be easily fooled by adversarial examples, which are crafted by adding imperceptible perturbations on benign face images.
In this paper, we improve the transferability of adversarial face examples to expose more blind spots of existing FR models.
We propose a novel attack method called Beneficial Perturbation Feature Augmentation Attack (BPFA)
arXiv Detail & Related papers (2022-10-28T13:25:59Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there exists compliance between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the aspect of prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Random Transformation of Image Brightness for Adversarial Attack [5.405413975396116]
Deep neural networks are vulnerable to adversarial examples, which are crafted by adding small, human-imperceptible perturbations to the original images.
We propose an adversarial example generation method based on this phenomenon, which can be integrated with Fast Gradient Sign Method.
Our method has a higher success rate for black-box attacks than other attack methods based on data augmentation (see the sketch after this list).
arXiv Detail & Related papers (2021-01-12T07:00:04Z)
- Adversarial Semantic Data Augmentation for Human Pose Estimation [96.75411357541438]
We propose Semantic Data Augmentation (SDA), a method that augments images by pasting segmented body parts at various semantic granularities.
We also propose Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to dynamically predict tailored pasting configurations.
State-of-the-art results are achieved on challenging benchmarks.
arXiv Detail & Related papers (2020-08-03T07:56:04Z)
- Towards Achieving Adversarial Robustness by Enforcing Feature
Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly.
arXiv Detail & Related papers (2020-04-01T09:31:10Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
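For the brightness-transformation entry above, a minimal sketch of the summarized idea, assuming a single-step FGSM whose gradient is averaged over randomly brightness-scaled copies of the input; the scaling range, copy count, and function name are illustrative assumptions, not the paper's settings:

```python
import torch

def brightness_augmented_fgsm(model, loss_fn, x, y, eps=8 / 255, num_copies=5,
                              brightness_range=(0.5, 1.5)):
    """Single-step FGSM whose gradient is averaged over brightness-transformed inputs."""
    x = x.clone().detach().requires_grad_(True)
    low, high = brightness_range
    total_loss = 0.0
    for _ in range(num_copies):
        scale = torch.empty(1, device=x.device).uniform_(low, high)
        x_bright = torch.clamp(x * scale, 0.0, 1.0)       # random brightness transform
        total_loss = total_loss + loss_fn(model(x_bright), y)
    grad = torch.autograd.grad(total_loss / num_copies, x)[0]
    x_adv = torch.clamp(x.detach() + eps * grad.sign(), 0.0, 1.0)
    return x_adv
```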