Over-the-Air Adversarial Flickering Attacks against Video Recognition
Networks
- URL: http://arxiv.org/abs/2002.05123v4
- Date: Fri, 4 Jun 2021 22:11:54 GMT
- Title: Over-the-Air Adversarial Flickering Attacks against Video Recognition
Networks
- Authors: Roi Pony, Itay Naeh, Shie Mannor
- Abstract summary: Deep neural networks for video classification may be subjected to adversarial manipulation.
We present a manipulation scheme for fooling video classifiers by introducing a flickering temporal perturbation.
The attack was implemented on several target models and the transferability of the attack was demonstrated.
- Score: 54.82488484053263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks for video classification, just like image classification
networks, may be subjected to adversarial manipulation. The main difference
between image classifiers and video classifiers is that the latter usually use
temporal information contained within the video. In this work we present a
manipulation scheme for fooling video classifiers by introducing a flickering
temporal perturbation that in some cases may be unnoticeable to human observers
and is implementable in the real world. After demonstrating the manipulation of
action classification of single videos, we generalize the procedure to produce a
universal adversarial perturbation, achieving a high fooling ratio. In addition,
we generalize the universal perturbation and produce a temporal-invariant
perturbation, which can be applied to the video without synchronizing the
perturbation to the input. The attack was implemented on several target models
and the transferability of the attack was demonstrated. These properties allow
us to bridge the gap between the simulated environment and real-world applications,
as will be demonstrated in this paper for the first time for an over-the-air
flickering attack.
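The flickering perturbation described in the abstract can be pictured as a single RGB offset per frame, shared by every pixel of that frame. Below is a minimal, hypothetical PyTorch sketch of optimizing such a perturbation against a generic video classifier; the model's input layout, the untargeted loss, and the plain L2 regularizer (standing in for the paper's perceptibility terms) are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def flickering_attack(model, video, label, steps=200, lr=1e-2, reg=1e-3):
    """Hypothetical sketch: optimize a per-frame uniform RGB offset (flicker).
    video: (T, C, H, W) in [0, 1]; label: true class index (untargeted attack)."""
    T, C, _, _ = video.shape
    delta = torch.zeros(T, C, 1, 1, requires_grad=True)   # one RGB offset per frame
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (video + delta).clamp(0, 1)                  # offset broadcast over H and W
        logits = model(adv.unsqueeze(0))                   # assumed input shape (1, T, C, H, W)
        loss = -F.cross_entropy(logits, torch.tensor([label]))  # push away from the true class
        loss = loss + reg * delta.pow(2).mean()            # crude stand-in for the paper's
                                                           # thickness/roughness regularizers
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()
```

In the same spirit, a universal variant would optimize one delta over a batch of videos, and temporal invariance could be encouraged by applying random cyclic shifts of delta along the time axis during optimization, so that the attack no longer needs to be synchronized with the input.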
Related papers
- Adversarial Attacks on Video Object Segmentation with Hard Region
Discovery [31.882369005280793]
Video object segmentation has been applied to various computer vision tasks, such as video editing, autonomous driving, and human-robot interaction.
Deep neural networks are vulnerable to adversarial examples, which are inputs corrupted by almost human-imperceptible perturbations.
This raises security issues in highly demanding tasks, because small perturbations to the input video can result in potential attack risks.
arXiv Detail & Related papers (2023-09-25T03:52:15Z)
- Latent Spatiotemporal Adaptation for Generalized Face Forgery Video Detection [22.536129731902783]
We propose a Latent Spatiotemporal (LAST) approach to facilitate generalized face forgery video detection.
We first model the spatiotemporal patterns of face videos by incorporating a lightweight CNN to extract local spatial features of each frame.
Then we learn long-term spatiotemporal representations of the videos in latent space, which should contain more clues than the pixel space.
arXiv Detail & Related papers (2023-09-09T13:40:44Z)
- Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification [24.9205771457704]
The paper proposes a new visible-infrared video person re-ID method from a novel perspective, i.e., adversarial self-attack defense and spatial-temporal relation mining.
The proposed method exhibits compelling performance on large-scale cross-modality video datasets.
arXiv Detail & Related papers (2023-07-08T05:03:10Z)
- Boosting the Transferability of Video Adversarial Examples via Temporal Translation [82.0745476838865]
Adversarial examples are transferable, which makes them feasible for black-box attacks in real-world applications.
We introduce a temporal translation attack method, which optimizes the adversarial perturbations over a set of temporally translated video clips (a minimal illustrative sketch appears after this list).
Experiments on the Kinetics-400 dataset and the UCF-101 dataset demonstrate that our method can significantly boost the transferability of video adversarial examples.
arXiv Detail & Related papers (2021-10-18T07:52:17Z)
- Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z)
- JOKR: Joint Keypoint Representation for Unsupervised Cross-Domain Motion Retargeting [53.28477676794658]
Unsupervised motion retargeting in videos has seen substantial advancements through the use of deep neural networks.
We introduce JOKR - a JOint Keypoint Representation that handles both the source and target videos, without requiring any object prior or data collection.
We evaluate our method both qualitatively and quantitatively, and demonstrate that our method handles various cross-domain scenarios, such as different animals, different flowers, and humans.
arXiv Detail & Related papers (2021-06-17T17:32:32Z)
- Double Targeted Universal Adversarial Perturbations [83.60161052867534]
We introduce double targeted universal adversarial perturbations (DT-UAPs) to bridge the gap between instance-discriminative, image-dependent perturbations and generic universal perturbations.
We show the effectiveness of the proposed DTA algorithm on a wide range of datasets and also demonstrate its potential as a physical attack.
arXiv Detail & Related papers (2020-10-07T09:08:51Z)
- Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior [63.11478060678794]
We propose an effective motion-excited sampler to obtain a motion-aware noise prior.
By using the sparked prior in gradient estimation, we can successfully attack a variety of video classification models with fewer queries.
arXiv Detail & Related papers (2020-03-17T10:54:12Z)
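As referenced in the temporal-translation entry above (Boosting the Transferability of Video Adversarial Examples via Temporal Translation), the following is a minimal, assumption-laden PyTorch sketch of one attack iteration in that spirit: the gradient of a shared perturbation is averaged over temporally rolled copies of the clip before a signed update. The shift range, step sizes, and loss are placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def temporal_translation_step(model, video, delta, label,
                              shifts=range(-3, 4), alpha=4 / 255, eps=8 / 255):
    """Hypothetical sketch: average the perturbation gradient over temporally
    translated (rolled) clips, then take one signed ascent step.
    video, delta: (T, C, H, W); label: true class index (untargeted)."""
    grad_sum = torch.zeros_like(delta)
    for s in shifts:
        d = delta.detach().clone().requires_grad_(True)
        adv = (video.roll(s, dims=0) + d).clamp(0, 1)      # temporally translated clip
        loss = F.cross_entropy(model(adv.unsqueeze(0)),    # assumed input (1, T, C, H, W)
                               torch.tensor([label]))
        loss.backward()
        grad_sum += d.grad
    grad_avg = grad_sum / len(list(shifts))
    return (delta + alpha * grad_avg.sign()).clamp(-eps, eps)   # L_inf-bounded update
```

Averaging over shifted clips is meant to keep the perturbation from overfitting the temporal alignment of the white-box model, which is what would help the resulting examples transfer to unseen models.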