Sparse Black-box Video Attack with Reinforcement Learning
- URL: http://arxiv.org/abs/2001.03754v3
- Date: Fri, 11 Mar 2022 14:41:21 GMT
- Title: Sparse Black-box Video Attack with Reinforcement Learning
- Authors: Xingxing Wei, Huanqian Yan, and Bo Li
- Abstract summary: We formulate the black-box video attacks into a Reinforcement Learning framework.
The environment in RL is set as the recognition model, and the agent in RL acts as the frame selector.
We conduct a series of experiments with two mainstream video recognition models.
- Score: 14.624074868199287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks on video recognition models have been explored recently.
However, most existing works treat each video frame equally and ignore their
temporal interactions. To overcome this drawback, a few methods try to select
some key frames and then perform attacks based on them. Unfortunately, their
selection strategy is independent of the attacking step, so the resulting
performance is limited. Instead, we argue that the frame selection phase is
closely tied to the attacking phase. The key frames should be adjusted
according to the attacking results. For that, we formulate the black-box video
attacks into a Reinforcement Learning (RL) framework. Specifically, the
environment in RL is set as the recognition model, and the agent in RL acts as
the frame selector. By continuously querying the recognition model and
receiving attack feedback, the agent gradually adjusts its frame selection
strategy, and the adversarial perturbations become progressively smaller. We
conduct a series of experiments with two mainstream video recognition models:
C3D and LRCN on the public UCF-101 and HMDB-51 datasets. The results
demonstrate that the proposed method can significantly reduce the adversarial
perturbations with an efficient number of queries.
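The environment/agent split above maps directly onto a query loop. A minimal sketch in Python follows, assuming a toy stand-in model; the `recognize` function, the uniform-noise perturbation, and the REINFORCE-style update are illustrative placeholders, not the paper's actual agent architecture or perturbation-estimation method.
```python
import numpy as np

rng = np.random.default_rng(0)

def recognize(video):
    # Stand-in for the black-box recognition model (the RL environment).
    # In practice every call here is one query to the remote model.
    return int(video.mean() > 0.5)

T = 16                                       # frames per clip
video = rng.random((T, 32, 32, 3))           # toy clip with values in [0, 1]
true_label = recognize(video)
logits = np.zeros(T)                         # agent state: per-frame selection scores

for step in range(200):                      # query loop
    probs = 1.0 / (1.0 + np.exp(-logits))    # frame selection probabilities
    mask = (rng.random(T) < probs).astype(float)
    # Perturb only the selected key frames (uniform noise stands in for the
    # paper's actual black-box perturbation estimation).
    noise = rng.uniform(-0.1, 0.1, video.shape) * mask[:, None, None, None]
    adv = np.clip(video + noise, 0.0, 1.0)
    fooled = recognize(adv) != true_label    # attack feedback from the environment
    # Reward successful attacks that select few frames (small total perturbation).
    reward = (1.0 if fooled else -0.1) - 0.05 * mask.sum() / T
    # REINFORCE-style update: raise the probability of masks that earned reward.
    logits += 0.5 * reward * (mask - probs)
```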
Related papers
- Temporal-Distributed Backdoor Attack Against Video Based Action Recognition [21.916002204426853]
We introduce a simple yet effective backdoor attack against video data.
Our proposed attack, adding perturbations in a transformed domain, plants an imperceptible, temporally distributed trigger across the video frames.
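The abstract does not specify which transform is used, so as a loose illustration only, the sketch below plants a small trigger in the 2-D Fourier domain of each frame with a per-frame phase, giving a temporally distributed, low-amplitude pattern; `plant_trigger` and all of its parameters are our invention.
```python
import numpy as np

def plant_trigger(video, amp=0.02, freq=(3, 5), seed=0):
    # Embed a small complex bump at one spatial frequency of every frame,
    # with a per-frame random phase so the trigger is spread over time.
    rng = np.random.default_rng(seed)
    out = video.copy()
    phases = rng.uniform(0, 2 * np.pi, size=video.shape[0])
    for t in range(video.shape[0]):
        for c in range(video.shape[-1]):
            spec = np.fft.fft2(video[t, :, :, c])
            spec[freq] += amp * spec.size * np.exp(1j * phases[t])
            out[t, :, :, c] = np.real(np.fft.ifft2(spec))
    return np.clip(out, 0.0, 1.0)

poisoned = plant_trigger(np.random.rand(16, 32, 32, 3))
```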
arXiv Detail & Related papers (2023-08-21T22:31:54Z)
- Inter-frame Accelerate Attack against Video Interpolation Models [73.28751441626754]
We apply adversarial attacks to video frame interpolation (VIF) models and find that they are highly vulnerable to adversarial examples.
We propose a novel attack method named Inter-frame Accelerate Attack (IAA), which initializes the attack iterations on each frame with the perturbation of the previous adjacent frame.
It is shown that our method greatly improves attack efficiency while achieving attack performance comparable to traditional methods.
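Reading the summary, the acceleration appears to come from warm-starting each frame's attack with the previous frame's perturbation; here is a hypothetical sketch of that idea, with `grad_fn` as a stand-in surrogate gradient.
```python
import numpy as np

def iaa_attack(frames, grad_fn, eps=0.03, alpha=0.005, steps=5):
    """Warm-started per-frame PGD. `grad_fn(frame, delta)` is a hypothetical
    stand-in that returns the attack-loss gradient w.r.t. the perturbation."""
    deltas, delta = [], np.zeros_like(frames[0])
    for frame in frames:
        # Acceleration: start from the previous frame's perturbation instead
        # of from zero, so far fewer iterations are needed per frame.
        for _ in range(steps):
            delta = np.clip(delta + alpha * np.sign(grad_fn(frame, delta)),
                            -eps, eps)          # project onto the L_inf ball
        deltas.append(delta.copy())
    return np.stack(deltas)

# Toy usage with a random "gradient":
deltas = iaa_attack(np.random.rand(8, 32, 32, 3),
                    grad_fn=lambda f, d: np.random.randn(*f.shape))
```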
arXiv Detail & Related papers (2023-05-11T03:08:48Z)
- Efficient Decision-based Black-box Patch Attacks on Video Recognition [33.5640770588839]
This work first explores decision-based patch attacks on video models.
To achieve a query-efficient attack, we propose a spatial-temporal differential evolution (STDE) framework.
STDE has demonstrated state-of-the-art performance in terms of threat, efficiency and imperceptibility.
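As a rough illustration of a spatial-temporal differential-evolution search, the sketch below evolves (frame, y, x) patch placements against a binary decision oracle; the patch shape, population size, and `fooled` oracle are all our assumptions, not details from the paper.
```python
import numpy as np

rng = np.random.default_rng(1)

def fooled(video):
    # Stand-in decision-based oracle: only the top-1 decision is observable.
    return video[0, :8, :8].mean() > 0.9

def fitness(video, params):
    # Paste one white 8x8 patch per (frame, y, x) row and test the decision.
    # (Real STDE also shrinks the patch area; omitted here for brevity.)
    adv = video.copy()
    for t, y, x in params.astype(int):
        adv[t, y:y + 8, x:x + 8] = 1.0
    return 1.0 if fooled(adv) else 0.0

video = rng.random((16, 32, 32, 3))
lo, hi = np.array([0, 0, 0]), np.array([15, 24, 24])   # bounds for (frame, y, x)
pop = rng.uniform(lo, hi, size=(20, 3, 3))             # 20 candidates, 3 patches each

for gen in range(50):                                  # DE/rand/1 loop (no crossover)
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.clip(a + 0.5 * (b - c), lo, hi)     # differential mutation
        if fitness(video, trial) >= fitness(video, pop[i]):
            pop[i] = trial                             # greedy selection
```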
arXiv Detail & Related papers (2023-03-21T15:08:35Z)
- Attacking Video Recognition Models with Bullet-Screen Comments [79.53159486470858]
We introduce a novel adversarial attack on video recognition models based on bullet-screen comments (BSCs).
BSCs can be regarded as a kind of meaningful patch: adding them to a clean video neither affects people's understanding of the video content nor arouses suspicion.
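A minimal sketch of the overlay step, assuming PIL for text rendering: positions and opacity are fixed here, whereas the attack would optimize them against the target model; `overlay_bsc` and its scrolling rule are hypothetical.
```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def overlay_bsc(video, texts, positions, opacity=0.8):
    # Render semi-transparent bullet-screen comments onto each frame.
    font = ImageFont.load_default()
    out = []
    for t, frame in enumerate(video):
        img = Image.fromarray((frame * 255).astype(np.uint8))
        layer = Image.new("RGBA", img.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(layer)
        for text, (x, y) in zip(texts, positions):
            # Comments scroll horizontally over time, like real bullet screens.
            draw.text(((x - 4 * t) % img.size[0], y), text,
                      font=font, fill=(255, 255, 255, int(255 * opacity)))
        merged = Image.alpha_composite(img.convert("RGBA"), layer)
        out.append(np.asarray(merged.convert("RGB")) / 255.0)
    return np.stack(out)

adv = overlay_bsc(np.random.rand(8, 64, 64, 3), ["lol", "nice"], [(10, 5), (30, 40)])
```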
arXiv Detail & Related papers (2021-10-29T08:55:50Z)
- Boosting the Transferability of Video Adversarial Examples via Temporal Translation [82.0745476838865]
Adversarial examples are transferable, which makes them feasible for black-box attacks in real-world applications.
We introduce a temporal translation attack method, which optimizes the adversarial perturbations over a set of temporally translated video clips.
Experiments on the Kinetics-400 dataset and the UCF-101 dataset demonstrate that our method can significantly boost the transferability of video adversarial examples.
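The core idea can be sketched as averaging the surrogate gradient over temporally shifted copies of the clip; `grad_fn` and the shift range below are our placeholders.
```python
import numpy as np

def temporal_translation_grad(clip, delta, grad_fn, shifts=(-3, -2, -1, 0, 1, 2, 3)):
    """Average the attack gradient over temporally shifted copies of the clip
    so the perturbation does not overfit a single temporal alignment of the
    surrogate. `grad_fn` is a hypothetical white-box surrogate gradient."""
    total = np.zeros_like(delta)
    for s in shifts:
        shifted = np.roll(clip, s, axis=0)     # translate along the time axis
        total += grad_fn(shifted + delta)
    return total / len(shifts)

# Toy usage with a random surrogate "gradient":
g = temporal_translation_grad(np.random.rand(16, 32, 32, 3),
                              np.zeros((16, 32, 32, 3)),
                              grad_fn=lambda x: np.random.randn(*x.shape))
```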
arXiv Detail & Related papers (2021-10-18T07:52:17Z)
- Reinforcement Learning Based Sparse Black-box Adversarial Attack on Video Recognition Models [3.029434408969759]
Black-box adversarial attacks are performed only on selected key regions and key frames.
We propose a reinforcement learning based frame selection strategy to speed up the attack process.
A range of empirical results on real datasets demonstrate the effectiveness and efficiency of the proposed method.
arXiv Detail & Related papers (2021-08-29T12:22:40Z)
- Semi-Supervised Action Recognition with Temporal Contrastive Learning [50.08957096801457]
We learn a two-pathway temporal contrastive model using unlabeled videos at two different speeds.
We considerably outperform video extensions of sophisticated state-of-the-art semi-supervised image recognition methods.
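A minimal sketch of the two-speed contrastive objective, assuming an InfoNCE-style loss between 1x and 2x playback embeddings; the `encode` function and temperature are our placeholders, not the paper's model.
```python
import numpy as np

def two_speed_contrastive_loss(videos, encode, tau=0.1):
    """Embeddings of the same unlabeled video at 1x and 2x playback speed
    should agree, while differing across videos (InfoNCE)."""
    slow = np.stack([encode(v) for v in videos])        # 1x speed pathway
    fast = np.stack([encode(v[::2]) for v in videos])   # 2x speed pathway
    slow /= np.linalg.norm(slow, axis=1, keepdims=True)
    fast /= np.linalg.norm(fast, axis=1, keepdims=True)
    sims = slow @ fast.T / tau                          # pairwise similarities
    # Log-softmax over each row; positives sit on the diagonal.
    logprobs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logprobs))

# Toy usage: mean-pool each (T, H, W, C) clip into a C-dim embedding.
loss = two_speed_contrastive_loss(
    [np.random.rand(16, 32, 32, 3) for _ in range(4)],
    encode=lambda v: v.mean(axis=(0, 1, 2)))
```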
arXiv Detail & Related papers (2021-02-04T17:28:35Z)
- Defending Against Multiple and Unforeseen Adversarial Videos [71.94264837503135]
We propose one of the first defense strategies against multiple types of adversarial videos for video recognition.
The proposed method, referred to as MultiBN, performs adversarial training on multiple video types using multiple independent batch normalization layers.
With the multiple-BN structure, each BN branch is responsible for learning the distribution of a single perturbation type and thus provides more precise distribution estimates.
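A minimal sketch of that structure in PyTorch, with names of our own choosing: one BatchNorm3d branch per perturbation type, selected at call time.
```python
import torch
import torch.nn as nn

class MultiBN3d(nn.Module):
    """One BatchNorm3d branch per perturbation type, so each branch models
    the statistics of a single adversarial distribution."""
    def __init__(self, channels, num_types):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.BatchNorm3d(channels) for _ in range(num_types))

    def forward(self, x, type_idx):
        # type_idx identifies the perturbation type of the current batch.
        return self.branches[type_idx](x)

bn = MultiBN3d(channels=16, num_types=3)
clean = bn(torch.randn(2, 16, 8, 32, 32), type_idx=0)   # clean-data branch
adv = bn(torch.randn(2, 16, 8, 32, 32), type_idx=1)     # one attacked branch
```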
arXiv Detail & Related papers (2020-09-11T06:07:14Z)
- Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior [63.11478060678794]
We propose an effective motion-excited sampler to obtain a motion-aware noise prior.
By using the sparked prior in gradient estimation, we can successfully attack a variety of video classification models with fewer queries.
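As a rough sketch, motion can shape the noise used in NES-style gradient estimation; the frame-difference `motion_prior` below is our simplification of the paper's sampler, and `loss_fn` is a stand-in black-box query.
```python
import numpy as np

rng = np.random.default_rng(2)

def motion_prior(video):
    # Frame-difference magnitude as a rough motion map, normalized to [0, 1].
    diff = np.abs(np.diff(video, axis=0, prepend=video[:1]))
    return diff / (diff.max() + 1e-8)

def estimate_gradient(video, loss_fn, sigma=0.01, samples=16):
    """NES-style black-box gradient estimate with motion-weighted noise:
    queries are spent where frames actually change."""
    prior = motion_prior(video)
    grad = np.zeros_like(video)
    for _ in range(samples):
        u = rng.standard_normal(video.shape) * prior   # motion-aware noise
        grad += (loss_fn(video + sigma * u) - loss_fn(video - sigma * u)) * u
    return grad / (2 * sigma * samples)

# Toy usage with a scalar stand-in loss:
g = estimate_gradient(np.random.rand(16, 32, 32, 3), loss_fn=lambda v: float(v.mean()))
```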
arXiv Detail & Related papers (2020-03-17T10:54:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.