PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos
- URL: http://arxiv.org/abs/2109.05695v1
- Date: Mon, 13 Sep 2021 04:05:46 GMT
- Title: PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos
- Authors: Nupur Thakur, Baoxin Li
- Abstract summary: We propose a novel yet simple algorithm called Pseudo-Adversarial Training (PAT) to detect the adversarial frames in a video without requiring knowledge of the attack.
Experimental results on UCF-101 and 20BN-Jester datasets show that PAT can detect the adversarial video frames and videos with a high detection rate.
- Score: 20.949656274807904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extensive research has demonstrated that deep neural networks (DNNs) are
prone to adversarial attacks. Although various defense mechanisms have been
proposed for image classification networks, fewer approaches exist for
video-based models that are used in security-sensitive applications like
surveillance. In this paper, we propose a novel yet simple algorithm called
Pseudo-Adversarial Training (PAT), to detect the adversarial frames in a video
without requiring knowledge of the attack. Our approach generates 'transition
frames' that capture critical deviations from the original frames and eliminate
the components insignificant to the detection task. To avoid the necessity of
knowing the attack model, we produce 'pseudo perturbations' to train our
detection network. Adversarial video detection is then achieved through the use
of the detected frames. Experimental results on UCF-101 and 20BN-Jester datasets
show that PAT can detect the adversarial video frames and videos with a high
detection rate. We also unveil the potential reasons for the effectiveness of
the transition frames and pseudo perturbations through extensive experiments.
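As an illustration only (the abstract gives no implementation details), the sketch below approximates the two ingredients with simple stand-ins: transition frames as differences between consecutive frames, pseudo perturbations as bounded random noise, and the detection network as a tiny CNN trained to separate clean from perturbed transition frames. All names (transition_frames, pseudo_perturb, FrameDetector) and hyperparameters are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

def transition_frames(video: torch.Tensor) -> torch.Tensor:
    """Illustrative 'transition frames': per-pixel differences between
    consecutive frames. `video` has shape (T, C, H, W)."""
    return video[1:] - video[:-1]

def pseudo_perturb(frames: torch.Tensor, eps: float = 8.0 / 255) -> torch.Tensor:
    """Illustrative 'pseudo perturbations': bounded random noise standing in
    for an unknown attack (no attack model required). `eps` is a guess."""
    noise = torch.empty_like(frames).uniform_(-eps, eps)
    return (frames + noise).clamp(-1.0, 1.0)

class FrameDetector(nn.Module):
    """Tiny CNN classifying each transition frame as clean (0) or
    adversarial (1); a stand-in for the paper's detection network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.net(x)

# One training step: label clean transition frames 0, pseudo-perturbed ones 1,
# and train the detector with cross-entropy.
detector = FrameDetector()
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

video = torch.rand(16, 3, 112, 112)        # dummy 16-frame clip
clean = transition_frames(video)           # (15, 3, 112, 112)
perturbed = pseudo_perturb(clean)
x = torch.cat([clean, perturbed])
y = torch.cat([torch.zeros(len(clean)), torch.ones(len(perturbed))]).long()

loss = loss_fn(detector(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```

At inference time, a video would presumably be flagged as adversarial when a sufficient fraction of its transition frames is classified as perturbed; the aggregation rule is another assumption of this sketch.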
Related papers
- Temporal-Distributed Backdoor Attack Against Video Based Action
Recognition [21.916002204426853]
We introduce a simple yet effective backdoor attack against video data.
Our proposed attack, adding perturbations in a transformed domain, plants an imperceptible, temporally distributed trigger across the video frames.
arXiv Detail & Related papers (2023-08-21T22:31:54Z) - Self-Supervised Masked Convolutional Transformer Block for Anomaly
Detection [122.4894940892536]
We present a novel self-supervised masked convolutional transformer block (SSMCTB) that comprises the reconstruction-based functionality at a core architectural level.
In this work, we extend our previous self-supervised predictive convolutional attentive block (SSPCAB) with a 3D masked convolutional layer, a transformer for channel-wise attention, as well as a novel self-supervised objective based on Huber loss.
arXiv Detail & Related papers (2022-09-25T04:56:10Z) - NSNet: Non-saliency Suppression Sampler for Efficient Video Recognition [89.84188594758588]
A novel Non-saliency Suppression Network (NSNet) is proposed to suppress the responses of non-salient frames.
NSNet achieves the state-of-the-art accuracy-efficiency trade-off and presents a significantly faster (2.4~4.3x) practical inference speed than state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T09:41:22Z) - Temporal Shuffling for Defending Deep Action Recognition Models against
Adversarial Attacks [67.58887471137436]
We develop a novel defense method using temporal shuffling of input videos against adversarial attacks for action recognition models.
To the best of our knowledge, this is the first attempt to design a defense method without additional training for 3D CNN-based video action recognition models.
arXiv Detail & Related papers (2021-12-15T06:57:01Z) - Attacking Video Recognition Models with Bullet-Screen Comments [79.53159486470858]
We introduce a novel adversarial attack that targets video recognition models with bullet-screen comment (BSC) attacks.
BSCs can be regarded as a kind of meaningful patch; adding them to a clean video neither affects people's understanding of the video content nor arouses their suspicion.
arXiv Detail & Related papers (2021-10-29T08:55:50Z) - Frame-rate Up-conversion Detection Based on Convolutional Neural Network
for Learning Spatiotemporal Features [7.895528973776606]
This paper proposes a frame-rate conversion detection network (FCDNet) that learns forensic features caused by FRUC in an end-to-end fashion.
FCDNet takes a stack of consecutive frames as input and effectively learns FRUC artifacts using network blocks designed to learn spatiotemporal features.
arXiv Detail & Related papers (2021-03-25T08:47:46Z) - Towards Adversarial-Resilient Deep Neural Networks for False Data
Injection Attack Detection in Power Grids [7.351477761427584]
False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), for FDIA detection.
arXiv Detail & Related papers (2021-02-17T22:26:34Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance model robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Enhanced Few-shot Learning for Intrusion Detection in Railway Video
Surveillance [16.220077781635748]
An enhanced model-agnostic meta-learner is trained using both the original video frames and segmented masks of the track area extracted from the video.
Numerical results show that the enhanced meta-learner successfully adapts to unseen scenes with only a few newly collected video frame samples.
arXiv Detail & Related papers (2020-11-09T08:59:15Z) - Robust Unsupervised Video Anomaly Detection by Multi-Path Frame
Prediction [61.17654438176999]
We propose a novel and robust unsupervised video anomaly detection method by frame prediction with proper design.
Our proposed method obtains the frame-level AUROC score of 88.3% on the CUHK Avenue dataset.
arXiv Detail & Related papers (2020-11-05T11:34:12Z) - Detecting Forged Facial Videos using convolutional neural network [0.0]
We propose to use smaller (fewer parameters to learn) convolutional neural networks (CNNs) for a data-driven approach to forged video detection.
To validate our approach, we investigate the FaceForensics public dataset, detailing both frame-based and video-based results.
arXiv Detail & Related papers (2020-05-17T19:04:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.