Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power
of Geometric Transformations
- URL: http://arxiv.org/abs/2110.01823v1
- Date: Tue, 5 Oct 2021 05:05:59 GMT
- Authors: Shasha Li, Abhishek Aich, Shitong Zhu, M. Salman Asif, Chengyu Song,
Amit K. Roy-Chowdhury, Srikanth Krishnamurthy
- Abstract summary: Black-box adversarial attacks against video classification models have been largely understudied.
In this work, we demonstrate that such effective gradients can be searched for by parameterizing the temporal structure of the search space.
Our algorithm inherently leads to successful perturbations with surprisingly few queries.
- Score: 49.06194223213629
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When compared to the image classification models, black-box adversarial
attacks against video classification models have been largely understudied.
This is likely because the temporal dimension of video poses significant
additional challenges in gradient estimation. Query-efficient
black-box attacks rely on effectively estimated gradients towards maximizing
the probability of misclassifying the target video. In this work, we
demonstrate that such effective gradients can be searched for by parameterizing
the temporal structure of the search space with geometric transformations.
Specifically, we design a novel iterative algorithm, Geometric TRAnsformed
Perturbations (GEO-TRAP), for attacking video classification models. GEO-TRAP
employs standard geometric transformation operations to reduce the search space
for effective gradients into searching for a small group of parameters that
define these operations. This group of parameters describes the geometric
progression of gradients, resulting in a reduced and structured search space.
Our algorithm inherently leads to successful perturbations with surprisingly
few queries. For example, adversarial examples generated from GEO-TRAP have
better attack success rates with ~73.55% fewer queries compared to the
state-of-the-art method for video adversarial attacks on the widely used Jester
dataset. Overall, our algorithm exposes vulnerabilities of diverse video
classification models and achieves new state-of-the-art results under black-box
settings on two large datasets.
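The key idea above, searching a low-dimensional space of geometric-transformation parameters instead of the full T×H×W gradient, can be illustrated with a minimal sketch. Everything here is hypothetical and not the paper's actual algorithm: a toy linear oracle stands in for the black-box video classifier, and the only geometric operation is a per-frame translation of one base pattern, so the zeroth-order search runs over just two parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 16, 16                        # frames, height, width
CLEAN = rng.normal(size=(T, H, W))         # stand-in clean video
WEIGHTS = rng.normal(size=(T, H, W))       # hidden "decision" direction of the toy model
QUERIES = 0

def query_model(video):
    """Hypothetical black-box oracle: returns only the true-class probability."""
    global QUERIES
    QUERIES += 1
    return 1.0 / (1.0 + np.exp(-np.sum(video * WEIGHTS) / video.size))

def render_perturbation(base, params, eps=0.05):
    """Warp one base pattern across frames with a few geometric parameters
    (here: per-axis translation velocity), so the whole T*H*W perturbation
    is described by just len(params) numbers."""
    dy, dx = params
    frames = [np.roll(base,
                      (int(round(float(dy * t))), int(round(float(dx * t)))),
                      axis=(0, 1))
              for t in range(T)]
    return eps * np.stack(frames)

base = np.sign(rng.normal(size=(H, W)))    # fixed base noise pattern
params = np.zeros(2)                       # search space: 2 dims, not T*H*W

for _ in range(20):                        # zeroth-order search over params
    grad = np.zeros_like(params)
    for i in range(params.size):
        d = np.zeros_like(params); d[i] = 0.5
        lo = query_model(CLEAN + render_perturbation(base, params - d))
        hi = query_model(CLEAN + render_perturbation(base, params + d))
        grad[i] = lo - hi                  # descend the true-class confidence
    params += np.sign(grad)

adv = CLEAN + render_perturbation(base, params)
```

With two parameters, each zeroth-order step costs four queries; the same finite-difference loop over the raw perturbation would need two queries per pixel per frame, which is the query blow-up the structured search space avoids.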
Related papers
- GE-AdvGAN: Improving the transferability of adversarial samples by
gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- Rethinking PGD Attack: Is Sign Function Necessary? [131.6894310945647]
We present a theoretical analysis of how such sign-based update algorithm influences step-wise attack performance.
We propose a new raw gradient descent (RGD) algorithm that eliminates the use of sign.
The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments.
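The summary does not give RGD's exact update rule; as a generic illustration of why dropping the sign can help, here is a toy quadratic (all values hypothetical) where fixed-size sign steps oscillate around the minimum while raw gradient steps converge:

```python
import numpy as np

# Toy quadratic "loss" 0.5 * w * (x - 1)^2 with per-coordinate
# curvatures w; the minimum is at x = 1 in every coordinate.
w = np.array([0.8, 0.5, 0.3])

def grad(x):
    return w * (x - 1.0)                  # exact gradient of the toy loss

x_sign = np.full(3, 0.05)
x_raw = np.full(3, 0.05)
lr = 1.0
for _ in range(100):
    x_sign = x_sign - lr * np.sign(grad(x_sign))  # PGD-style sign update
    x_raw = x_raw - lr * grad(x_raw)              # raw gradient descent

# The sign update hops back and forth with amplitude lr and never
# settles; the raw update contracts toward the minimum each step.
```

This is only a caricature of the sign/no-sign trade-off the paper analyzes, not its theoretical result.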
arXiv Detail & Related papers (2023-12-03T02:26:58Z)
- Gradient Aligned Attacks via a Few Queries [18.880398046794138]
Black-box query attacks show low performance in a novel scenario where only a few queries are allowed.
We propose gradient aligned attacks (GAA) which use the gradient aligned losses to improve the attack performance on the victim model.
Our proposed gradient aligned attacks and losses show significant improvements in the attack performance and query efficiency of black-box query attacks.
arXiv Detail & Related papers (2022-05-19T12:32:20Z)
- Geometrically Adaptive Dictionary Attack on Face Recognition [23.712389625037442]
We propose a strategy for query-efficient black-box attacks on face recognition.
Our core idea is to create an adversarial perturbation in the UV texture map and project it onto the face in the image.
We show overwhelming performance improvement in the experiments on the LFW and CPLFW datasets.
arXiv Detail & Related papers (2021-11-08T10:26:28Z)
- Meta Gradient Adversarial Attack [64.5070788261061]
This paper proposes a novel architecture called Meta Gradient Adversarial Attack (MGAA), which is plug-and-play and can be integrated with any existing gradient-based attack method.
Specifically, we randomly sample multiple models from a model zoo to compose different tasks and iteratively simulate a white-box attack and a black-box attack in each task.
By narrowing the gap between the gradient directions in white-box and black-box attacks, the transferability of adversarial examples on the black-box setting can be improved.
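The summary omits MGAA's full meta-learning update; the task-sampling idea alone can be sketched as a simplified ensemble-style loop, with every model and parameter here hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "model zoo": each surrogate is a linear true-class logit w @ x.
zoo = [rng.normal(size=4) for _ in range(6)]

def logit_grad(w, x):
    return w                               # gradient of w @ x w.r.t. the input

x, eps, steps = np.zeros(4), 0.3, 10
for _ in range(steps):
    # sample a task: a few surrogates play the simulated white-box models
    task = rng.choice(len(zoo), size=3, replace=False)
    g = np.mean([logit_grad(zoo[i], x) for i in task], axis=0)
    x = np.clip(x - (eps / steps) * np.sign(g), -eps, eps)  # push the logit down
```

The actual MGAA additionally simulates a black-box attack per task and narrows the gap between the two gradient directions; this sketch only shows the random task composition from a zoo.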
arXiv Detail & Related papers (2021-08-09T17:44:19Z)
- Adversarial examples attack based on random warm restart mechanism and
improved Nesterov momentum [0.0]
Some studies have pointed out that deep learning models are vulnerable to adversarial example attacks and can be induced to make false decisions.
We propose RWR-NM-PGD attack algorithm based on random warm restart mechanism and improved Nesterov momentum.
Our method achieves an average attack success rate of 46.3077%, which is 27.19% higher than I-FGSM and 9.27% higher than PGD.
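The abstract does not spell out the RWR-NM-PGD update; a generic sketch of the two ingredients it names, sign-based PGD ascent with Nesterov momentum plus random warm restarts, might look as follows (toy loss, all hyperparameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 2.0                                   # perturbation budget around x0

def loss(x):
    return float(np.sum(3 * np.sin(x) + 0.5 * x))   # toy loss to maximize

def loss_grad(x):
    return 3 * np.cos(x) + 0.5

def pgd_nesterov(x0, start, lr=0.2, mu=0.9, steps=30):
    """Sign-based PGD ascent with Nesterov momentum from one start point."""
    x, v = start.copy(), np.zeros_like(start)
    best, best_val = x.copy(), loss(x)
    for _ in range(steps):
        g = loss_grad(x + mu * v)           # Nesterov look-ahead gradient
        v = mu * v + lr * np.sign(g)
        x = np.clip(x + v, x0 - EPS, x0 + EPS)      # project into the eps-ball
        if loss(x) > best_val:
            best, best_val = x.copy(), loss(x)
    return best, best_val

x0 = np.zeros(3)
best, best_val = None, -np.inf
for _ in range(4):                          # random warm restarts
    start = np.clip(x0 + rng.uniform(-1, 1, size=3), x0 - EPS, x0 + EPS)
    cand, val = pgd_nesterov(x0, start)
    if val > best_val:
        best, best_val = cand, val
```

Restarting from several random points inside the budget and keeping the best iterate is what lets the attack escape flat or deceptive regions a single PGD run can get stuck in.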
arXiv Detail & Related papers (2021-05-10T07:24:25Z)
- Boosting Gradient for White-Box Adversarial Attacks [60.422511092730026]
We propose a universal adversarial example generation method, called ADV-ReLU, to enhance the performance of gradient based white-box attack algorithms.
Our approach calculates the gradient of the loss function with respect to the network input, maps the values to scores, and selects a part of them to update the misleading gradients.
arXiv Detail & Related papers (2020-10-21T02:13:26Z)
- Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial
Attacks [86.88061841975482]
We study the problem of generating adversarial examples in a black-box setting, where we only have access to a zeroth order oracle.
We use this setting to find fast one-step adversarial attacks, akin to a black-box version of the Fast Gradient Sign Method (FGSM).
We show that the method uses fewer queries and achieves higher attack success rates than the current state of the art.
arXiv Detail & Related papers (2020-10-08T18:36:51Z)
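A black-box one-step attack of the kind described above can be sketched with naive coordinate-wise finite differences against a zeroth-order oracle. This is only the baseline idea, not the paper's Gaussian-MRF covariance modeling, which exists precisely to cut down the query cost of this kind of estimate; the oracle and its weights are hypothetical:

```python
import numpy as np

# Zeroth-order oracle: returns only a loss value, never a gradient
# (toy linear stand-in for a black-box classifier's true-class loss).
W = np.array([0.5, -1.2, 0.7, 0.3])

def oracle_loss(x):
    return float(np.dot(W, x))             # loss the attacker wants to raise

def zo_fgsm(x, eps=0.1, delta=1e-3):
    """One-step FGSM where the gradient sign is estimated from loss
    queries alone via symmetric finite differences (2 queries per dim)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = delta
        g[i] = (oracle_loss(x + d) - oracle_loss(x - d)) / (2 * delta)
    return x + eps * np.sign(g)

adv = zo_fgsm(np.zeros(4))
```

The coordinate-wise estimate needs 2n queries for an n-dimensional input; modeling correlations between pixel gradients, as the paper does, is what makes the one-step attack practical at image scale.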
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.