Boosting the Transferability of Video Adversarial Examples via Temporal Translation
- URL: http://arxiv.org/abs/2110.09075v1
- Date: Mon, 18 Oct 2021 07:52:17 GMT
- Title: Boosting the Transferability of Video Adversarial Examples via Temporal Translation
- Authors: Zhipeng Wei, Jingjing Chen, Zuxuan Wu, Yu-Gang Jiang
- Abstract summary: Adversarial examples are transferable, which makes black-box attacks feasible in real-world applications.
We introduce a temporal translation attack method, which optimizes the adversarial perturbations over a set of temporally translated video clips.
Experiments on the Kinetics-400 dataset and the UCF-101 dataset demonstrate that our method can significantly boost the transferability of video adversarial examples.
- Score: 82.0745476838865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although deep-learning based video recognition models have achieved
remarkable success, they are vulnerable to adversarial examples that are
generated by adding human-imperceptible perturbations on clean video samples.
As indicated in recent studies, adversarial examples are transferable, which
makes black-box attacks feasible in real-world applications.
Nevertheless, most existing adversarial attack methods have poor
transferability when attacking other video models, and transfer-based attacks
on video models remain largely unexplored. To this end, we propose to boost the
transferability of video adversarial examples for black-box attacks on video
recognition models. Through extensive analysis, we discover that different
video recognition models rely on different discriminative temporal patterns,
leading to the poor transferability of video adversarial examples. This
motivates us to introduce a temporal translation attack method, which optimizes
the adversarial perturbations over a set of temporally translated video clips.
By generating adversarial examples over translated videos, the resulting
adversarial examples are less sensitive to the temporal patterns of the
white-box model being attacked and thus transfer better. Extensive
experiments on the Kinetics-400 dataset and the UCF-101 dataset demonstrate
that our method can significantly boost the transferability of video
adversarial examples. For transfer-based attacks against video recognition
models, it achieves a 61.56% average attack success rate on Kinetics-400 and
48.60% on UCF-101.
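The core idea, averaging gradients over temporally shifted copies of the clip so the perturbation is less tied to one model's temporal pattern, can be sketched as follows. This is a minimal NumPy illustration using a toy linear "video model" with an analytic gradient as a stand-in for a real network; the model, loss, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def toy_logits(video, W):
    """Toy 'video model': average frames, then a linear layer.
    video: (T, D) array; W: (C, D) weights -> (C,) class logits."""
    return W @ video.mean(axis=0)

def toy_grad(video, W, label):
    """Analytic gradient of the true-class logit w.r.t. the video.
    (A real attack would backprop a cross-entropy loss instead.)"""
    T = video.shape[0]
    # d(logit_label)/d(video[t]) = W[label] / T for every frame t
    return np.tile(W[label] / T, (T, 1))

def temporal_translation_attack(video, W, label, eps=0.03, steps=10, max_shift=2):
    """BIM-style attack that averages gradients over temporally rolled
    copies of the clip, then takes a sign step to lower the true-class score."""
    alpha = eps / steps
    adv = video.copy()
    for _ in range(steps):
        grad = np.zeros_like(adv)
        for tau in range(-max_shift, max_shift + 1):
            shifted = np.roll(adv, tau, axis=0)       # temporally translated clip
            g = toy_grad(shifted, W, label)           # gradient on the shifted clip
            grad += np.roll(g, -tau, axis=0)          # align gradient back in time
        grad /= (2 * max_shift + 1)
        adv = adv - alpha * np.sign(grad)             # descend the true-class logit
        adv = np.clip(adv, video - eps, video + eps)  # stay inside the eps-ball
    return adv
```

In the paper's setting the averaged gradient comes from backpropagating a white-box video model on each shifted clip; the toy model here only demonstrates the shift-average-align loop.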
Related papers
- Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models [17.958154849014576]
Adversarial attacks can be used to assess the robustness of large vision-language models (VLMs).
Previous transfer-based adversarial attacks incur high costs due to high iteration counts and complex method structure.
We propose AdvDiffVLM, which uses diffusion models to generate natural, unrestricted and targeted adversarial examples.
arXiv Detail & Related papers (2024-04-16T07:19:52Z)
- SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation [56.622250514119294]
In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
arXiv Detail & Related papers (2023-12-08T09:08:50Z)
- Diffusion Models for Imperceptible and Transferable Adversarial Attack [23.991194050494396]
We propose a novel imperceptible and transferable attack by leveraging both the generative and discriminative power of diffusion models.
Our proposed method, DiffAttack, is the first that introduces diffusion models into the adversarial attack field.
arXiv Detail & Related papers (2023-05-14T16:02:36Z)
- Cross-Modal Transferable Adversarial Attacks from Images to Videos [82.0745476838865]
Recent studies have shown that adversarial examples hand-crafted on one white-box model can be used to attack other black-box models.
We propose a simple yet effective cross-modal attack method, named as Image To Video (I2V) attack.
I2V generates adversarial frames by minimizing the cosine similarity between features of pre-trained image models from adversarial and benign examples.
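The I2V objective above, minimizing the cosine similarity between adversarial and benign features, can be sketched as follows. The feature extractor here is a hypothetical linear map standing in for a pre-trained image model, and the random initialization, step size, and budget are assumptions (at the benign input the cosine gradient is zero, so some initial perturbation is needed).

```python
import numpy as np

def features(x, M):
    """Toy stand-in for a pre-trained image model's feature extractor."""
    return M @ x

def cos_sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def i2v_style_attack(x, M, eps=0.05, steps=20, seed=0):
    """Sketch of an I2V-style attack: push the adversarial input's
    features away from the benign features by minimizing cosine similarity."""
    alpha = eps / steps
    v = features(x, M)                      # benign features (fixed reference)
    rng = np.random.default_rng(seed)
    # random start inside the eps-ball: at adv == x the cosine gradient is zero
    adv = x + rng.uniform(-eps / 2, eps / 2, size=x.shape)
    for _ in range(steps):
        u = features(adv, M)
        nu, nv = np.linalg.norm(u), np.linalg.norm(v)
        # analytic gradient of cos(u, v) w.r.t. u, chained through M
        dcos_du = v / (nu * nv) - (u @ v) * u / (nu ** 3 * nv)
        grad = M.T @ dcos_du                # d cos / d adv
        adv = adv - alpha * np.sign(grad)   # descend the cosine similarity
        adv = np.clip(adv, x - eps, x + eps)
    return adv
```

The real I2V attack perturbs video frames and backpropagates through a pre-trained image network; the linear map only makes the gradient computable in closed form here.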
arXiv Detail & Related papers (2021-12-10T08:19:03Z)
- Direction-Aggregated Attack for Transferable Adversarial Examples [10.208465711975242]
A deep neural network is vulnerable to adversarial examples crafted by imposing imperceptible changes to the inputs.
Adversarial examples are most successful in white-box settings, where the model and its parameters are available.
We propose the Direction-Aggregated adversarial attacks that deliver transferable adversarial examples.
arXiv Detail & Related papers (2021-04-19T09:54:56Z)
- Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning [60.784641458579124]
We show that fine-tuning effectively enhances model robustness under white-box FGSM attacks.
We also propose a black-box attack method for transfer learning models which attacks the target model with the adversarial examples produced by its source model.
To systematically measure the effect of both white-box and black-box attacks, we propose a new metric to evaluate how transferable are the adversarial examples produced by a source model to a target model.
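The summary does not define that metric, but one common way to quantify transferability, sketched here as an assumption rather than the paper's exact definition, is the fraction of adversarial examples that fool the source model and also fool the target model:

```python
def transfer_rate(source_fooled, target_fooled):
    """Among examples that fool the source model, the fraction that also
    fool the target model. One common transferability measure; the
    paper's exact metric may differ.
    source_fooled, target_fooled: parallel lists of booleans."""
    hits = [t for s, t in zip(source_fooled, target_fooled) if s]
    if not hits:
        return 0.0  # no successful source-model attacks to transfer
    return sum(hits) / len(hits)
```

For example, if 3 of 4 examples fool the source model and 2 of those 3 also fool the target, the rate is 2/3.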
arXiv Detail & Related papers (2020-08-25T15:04:32Z)
- Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacking against real-world API demonstrate the superior attack performance of the proposed method.
arXiv Detail & Related papers (2020-06-15T16:45:27Z)
- Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks [54.82488484053263]
Deep neural networks for video classification may be subjected to adversarial manipulation.
We present a manipulation scheme for fooling video classifiers by introducing a flickering temporal perturbation.
The attack was implemented on several target models and the transferability of the attack was demonstrated.
arXiv Detail & Related papers (2020-02-12T17:58:12Z)
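A flickering perturbation is spatially uniform within each frame: a per-frame RGB offset that varies over time. A small NumPy sketch of applying such a perturbation is below; the offset values and the [0, 1] pixel range are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def apply_flicker(video, offsets):
    """Adds a spatially uniform RGB offset to each frame; varying the
    offset over time produces the 'flickering' temporal perturbation.
    video: (T, H, W, 3) array in [0, 1]; offsets: (T, 3) per-frame RGB shifts."""
    T = video.shape[0]
    assert offsets.shape == (T, 3), "one RGB offset per frame"
    # broadcast each frame's offset over the spatial dimensions H and W
    perturbed = video + offsets[:, None, None, :]
    return np.clip(perturbed, 0.0, 1.0)  # keep valid pixel values
```

Because the perturbation has only T x 3 degrees of freedom, it can plausibly be realized over the air (e.g. by modulating scene lighting) rather than per pixel.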
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.