Boosting Adversarial Transferability using Dynamic Cues
- URL: http://arxiv.org/abs/2302.12252v2
- Date: Tue, 4 Apr 2023 19:46:08 GMT
- Title: Boosting Adversarial Transferability using Dynamic Cues
- Authors: Muzammal Naseer, Ahmad Mahmood, Salman Khan, and Fahad Shahbaz Khan
- Abstract summary: We introduce spatial (image) and temporal (video) cues within the same source model through task-specific prompts.
Our attack results indicate that the attacker does not need specialized architectures.
Image models are effective surrogates to optimize an adversarial attack to fool black-box models in a changing environment.
- Score: 15.194437322391558
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The transferability of adversarial perturbations between image models has been extensively studied. In this setting, an attack is generated from a known surrogate, e.g., an ImageNet-trained model, and transferred to change the decision of an unknown (black-box) model trained on an image dataset. However, attacks generated from image models do not capture the dynamic nature of a moving object or a changing scene, since image models lack temporal cues. This reduces the transferability of adversarial attacks from representation-enriched image models, such as supervised Vision Transformers (ViTs), self-supervised ViTs (e.g., DINO), and vision-language models (e.g., CLIP), to black-box video models. In this work, we induce dynamic cues within image models without sacrificing their original performance on images. To this end, we optimize temporal prompts through frozen image models to capture motion dynamics. Our temporal prompts are the result of a learnable transformation that allows optimizing for temporal gradients during an adversarial attack, fooling the motion dynamics of video models. Specifically, we introduce spatial (image) and temporal (video) cues within the same source model through task-specific prompts. Attacking such prompts maximizes adversarial transferability from image-to-video and image-to-image models using attacks designed for image models. Our attack results indicate that the attacker does not need specialized architectures, e.g., divided space-time attention, 3D convolutions, or multi-view convolution networks, for different data modalities. Image models are effective surrogates for optimizing an adversarial attack that fools black-box models in an environment that changes over time. Code is available at https://bit.ly/3Xd9gRQ
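To make the idea concrete, here is a minimal, hedged PyTorch-style sketch of the attack described above. It is an illustration under assumptions, not the authors' released implementation: `frozen_vit` stands in for any frozen image classifier used as the surrogate, and `temporal_head` is a hypothetical callable wrapping the learnable temporal prompts that aggregate per-frame features into motion-aware logits.

```python
import torch
import torch.nn.functional as F

def attack_dynamic_cues(frozen_vit, temporal_head, clip, label,
                        eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style attack on a video clip (B, T, C, H, W) in [0, 1] that
    back-propagates through a frozen image model along two pathways:
    per-frame (spatial) logits and prompt-aggregated (temporal) logits."""
    adv = clip.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        b, t, c, h, w = adv.shape
        # Spatial cue: average the frozen image model's per-frame logits.
        frame_logits = frozen_vit(adv.view(b * t, c, h, w)).view(b, t, -1)
        spatial_logits = frame_logits.mean(dim=1)
        # Temporal cue: the same frames routed through the learnable
        # temporal prompts that capture motion across time (assumed module).
        temporal_logits = temporal_head(frozen_vit, adv)
        # Attack both cues jointly so the perturbation also fools motion
        # dynamics, not just per-frame appearance.
        loss = (F.cross_entropy(spatial_logits, label)
                + F.cross_entropy(temporal_logits, label))
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()        # ascend on the loss
        adv = clip + (adv - clip).clamp(-eps, eps)      # L_inf projection
        adv = adv.clamp(0, 1)
    return adv.detach()
```

Because both loss terms share one frozen backbone, the resulting perturbation carries spatial and temporal gradients at once, which is what lets a plain image surrogate transfer to black-box video models.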
Related papers
- Unsegment Anything by Simulating Deformation [67.10966838805132]
"Anything Unsegmentable" is a task to grant any image "the right to be unsegmented"
We aim to achieve transferable adversarial attacks against all prompt-based segmentation models.
Our approach focuses on disrupting image encoder features to achieve prompt-agnostic attacks.
arXiv Detail & Related papers (2024-04-03T09:09:42Z)
- Pix2Gif: Motion-Guided Diffusion for GIF Generation [70.64240654310754]
We present Pix2Gif, a motion-guided diffusion model for image-to-GIF (video) generation.
We propose a new motion-guided warping module to spatially transform the features of the source image conditioned on the two types of prompts.
In preparation for the model training, we meticulously curated data by extracting coherent image frames from the TGIF video-caption dataset.
arXiv Detail & Related papers (2024-03-07T16:18:28Z)
- VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models [58.21452697997078]
We propose a novel VQAttack model, which can generate both image and text perturbations with the designed modules.
Experimental results on two VQA datasets with five validated models demonstrate the effectiveness of the proposed VQAttack.
arXiv Detail & Related papers (2024-02-16T21:17:42Z)
- Learning When to Use Adaptive Adversarial Image Perturbations against Autonomous Vehicles [0.0]
Deep neural network (DNN) models for object detection are susceptible to adversarial image perturbations.
We propose a multi-level optimization framework that monitors an attacker's capability of generating the adversarial perturbations.
We show our method's capability to generate the image attack in real-time while monitoring when the attacker is proficient given state estimates.
arXiv Detail & Related papers (2022-12-28T02:36:58Z)
- Adversarial Pixel Restoration as a Pretext Task for Transferable Perturbations [54.1807206010136]
Transferable adversarial attacks optimize adversaries from a pretrained surrogate model and known label space to fool the unknown black-box models.
We propose Adversarial Pixel Restoration as a self-supervised alternative to train an effective surrogate model from scratch.
Our training approach is based on a min-max objective which reduces overfitting via an adversarial objective.
arXiv Detail & Related papers (2022-07-18T17:59:58Z)
- Frequency Domain Model Augmentation for Adversarial Attack [91.36850162147678]
For black-box attacks, the gap between the substitute model and the victim model is usually large.
We propose a novel spectrum simulation attack to craft more transferable adversarial examples against both normally trained and defense models.
arXiv Detail & Related papers (2022-07-12T08:26:21Z)
- Cross-Modal Transferable Adversarial Attacks from Images to Videos [82.0745476838865]
Recent studies have shown that adversarial examples hand-crafted on one white-box model can be used to attack other black-box models.
We propose a simple yet effective cross-modal attack method, named as Image To Video (I2V) attack.
I2V generates adversarial frames by minimizing the cosine similarity between the features a pre-trained image model extracts from adversarial and benign examples (a minimal sketch of this objective appears after this list).
arXiv Detail & Related papers (2021-12-10T08:19:03Z)
- Conditional Adversarial Camera Model Anonymization [11.98237992824422]
The camera model used to capture a particular photographic image (model attribution) is typically inferred from high-frequency, model-specific artifacts.
We propose a conditional adversarial approach for learning such transformations.
arXiv Detail & Related papers (2020-02-18T18:53:21Z)
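The I2V attack summarized in the Cross-Modal entry above reduces to a compact feature-divergence loop. The sketch below is a hedged illustration under assumptions, not the released code: `image_encoder` is a stand-in for any pre-trained image feature extractor (e.g., an ImageNet ViT), and the hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def i2v_attack(image_encoder, frames, eps=16 / 255, alpha=1.6 / 255, steps=60):
    """frames: (T, C, H, W) benign video frames in [0, 1].
    Minimizes cosine similarity between adversarial and benign features."""
    with torch.no_grad():
        benign_feats = image_encoder(frames)            # (T, D) reference
    adv = frames.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        adv_feats = image_encoder(adv)
        # I2V drives the adversarial features away from the benign ones,
        # so no class label is needed for the attack.
        loss = F.cosine_similarity(adv_feats, benign_feats, dim=-1).mean()
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() - alpha * grad.sign()        # descend on similarity
        adv = frames + (adv - frames).clamp(-eps, eps)  # L_inf budget
        adv = adv.clamp(0, 1)
    return adv.detach()
```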