Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior
- URL: http://arxiv.org/abs/2003.07637v2
- Date: Tue, 6 Oct 2020 01:37:47 GMT
- Title: Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior
- Authors: Hu Zhang, Linchao Zhu, Yi Zhu and Yi Yang
- Abstract summary: We propose an effective motion-excited sampler to obtain a motion-aware noise prior.
By using the sparked prior in gradient estimation, we can successfully attack a variety of video classification models with fewer queries.
- Score: 63.11478060678794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are known to be susceptible to adversarial noise, i.e.,
tiny and imperceptible perturbations. Most previous work on adversarial attacks
focuses on image models, while the vulnerability of video models is less
explored. In this paper, we aim to attack video models by utilizing the
intrinsic movement patterns and regional relative motion among video frames. We
propose an effective motion-excited sampler to obtain a motion-aware noise
prior, which we term the sparked prior. The sparked prior underlines frame
correlations and utilizes video dynamics via relative motion. By using the
sparked prior in gradient estimation, we can successfully attack a variety of
video classification models with fewer queries. Extensive experimental results
on four benchmark datasets validate the efficacy of the proposed method.
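The abstract describes the approach only at a high level. The following is a minimal, hypothetical sketch (not the authors' released code) of the general idea: a motion prior derived from inter-frame differences modulates the random directions used in a NES-style black-box gradient estimate, so queries concentrate on moving regions. The `model(video, label)` loss oracle, the frame-difference prior, and all hyperparameters are illustrative assumptions.
```python
import numpy as np

def motion_prior(video):
    """Rough motion-aware prior from inter-frame differences.
    video: (T, H, W, C) float array in [0, 1]. This is an illustrative
    stand-in for the paper's motion-excited sampler, not its actual code."""
    diff = np.abs(np.diff(video, axis=0))             # (T-1, H, W, C)
    diff = np.concatenate([diff, diff[-1:]], axis=0)  # pad back to T frames
    return diff / (diff.max() + 1e-8)                 # per-pixel weight in [0, 1]

def estimate_gradient(model, video, label, prior, sigma=1e-3, n_samples=20):
    """NES-style gradient estimate whose random directions are scaled by the
    motion prior, so perturbations are sampled mostly in moving regions."""
    grad = np.zeros_like(video)
    for _ in range(n_samples):
        u = np.random.randn(*video.shape) * prior     # motion-shaped direction
        loss_pos = model(np.clip(video + sigma * u, 0, 1), label)
        loss_neg = model(np.clip(video - sigma * u, 0, 1), label)
        grad += (loss_pos - loss_neg) / (2 * sigma) * u
    return grad / n_samples

def attack(model, video, label, eps=0.03, alpha=0.005, steps=50):
    """Iterative sign-gradient attack driven by the estimated gradient,
    projected into an L-infinity ball of radius eps around the clean video."""
    prior = motion_prior(video)
    adv = video.copy()
    for _ in range(steps):
        g = estimate_gradient(model, adv, label, prior)
        adv = np.clip(adv + alpha * np.sign(g), video - eps, video + eps)
        adv = np.clip(adv, 0, 1)
    return adv
```
In the paper the prior comes from relative motion between frames rather than raw frame differences; the sketch above only mirrors the query-efficient structure of prior-guided gradient estimation.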
Related papers
- MULDE: Multiscale Log-Density Estimation via Denoising Score Matching for Video Anomaly Detection [15.72443573134312]
We treat feature vectors extracted from videos as realizations of a random variable with a fixed distribution.
We train our video anomaly detector using a modification of denoising score matching.
Our experiments on five popular video anomaly detection benchmarks demonstrate state-of-the-art performance.
arXiv Detail & Related papers (2024-03-21T15:46:19Z) - MotionMix: Weakly-Supervised Diffusion for Controllable Motion Generation [19.999239668765885]
MotionMix is a weakly-supervised diffusion model that leverages both noisy and unannotated motion sequences.
Our framework consistently achieves state-of-the-art performance on text-to-motion, action-to-motion, and music-to-dance tasks.
arXiv Detail & Related papers (2024-01-20T04:58:06Z) - Human Kinematics-inspired Skeleton-based Video Anomaly Detection [3.261881784285304]
We introduce a new method for video anomaly detection called HKVAD (Human Kinematics-inspired Video Anomaly Detection).
Our method achieves good results with minimal computational resources, validating its effectiveness and potential.
arXiv Detail & Related papers (2023-09-27T13:52:53Z) - Inter-frame Accelerate Attack against Video Interpolation Models [73.28751441626754]
We apply adversarial attacks to VIF models and find that they are highly vulnerable to adversarial examples.
We propose a novel attack method named Inter-frame Accelerate Attack (IAA), which accelerates the iterations by reusing the perturbation of the previous adjacent frame as initialization.
It is shown that our method can improve attack efficiency greatly while achieving comparable attack performance with traditional methods.
arXiv Detail & Related papers (2023-05-11T03:08:48Z) - Temporal Shuffling for Defending Deep Action Recognition Models against Adversarial Attacks [67.58887471137436]
We develop a novel defense method using temporal shuffling of input videos against adversarial attacks for action recognition models.
To the best of our knowledge, this is the first attempt to design a defense method without additional training for 3D CNN-based video action recognition models.
arXiv Detail & Related papers (2021-12-15T06:57:01Z) - Boosting the Transferability of Video Adversarial Examples via Temporal Translation [82.0745476838865]
Adversarial examples are transferable, which makes black-box attacks feasible in real-world applications.
We introduce a temporal translation attack method, which optimizes the adversarial perturbations over a set of temporally translated video clips.
Experiments on the Kinetics-400 dataset and the UCF-101 dataset demonstrate that our method can significantly boost the transferability of video adversarial examples.
arXiv Detail & Related papers (2021-10-18T07:52:17Z) - CDN-MEDAL: Two-stage Density and Difference Approximation Framework for Motion Analysis [3.337126420148156]
We propose a novel, two-stage method of change detection with two convolutional neural networks.
Our two-stage framework contains approximately 3.5K parameters in total but still maintains rapid convergence to intricate motion patterns.
arXiv Detail & Related papers (2021-06-07T16:39:42Z) - Robust Unsupervised Video Anomaly Detection by Multi-Path Frame Prediction [61.17654438176999]
We propose a novel and robust unsupervised video anomaly detection method by frame prediction with proper design.
Our proposed method obtains the frame-level AUROC score of 88.3% on the CUHK Avenue dataset.
arXiv Detail & Related papers (2020-11-05T11:34:12Z) - Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks [54.82488484053263]
Deep neural networks for video classification may be subjected to adversarial manipulation.
We present a manipulation scheme for fooling video classifiers by introducing a flickering temporal perturbation.
The attack was implemented on several target models and the transferability of the attack was demonstrated.
arXiv Detail & Related papers (2020-02-12T17:58:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.