Overcomplete Representations Against Adversarial Videos
- URL: http://arxiv.org/abs/2012.04262v1
- Date: Tue, 8 Dec 2020 08:00:17 GMT
- Title: Overcomplete Representations Against Adversarial Videos
- Authors: Shao-Yuan Lo, Jeya Maria Jose Valanarasu, Vishal M. Patel
- Abstract summary: We propose a novel Over-and-Under complete restoration network for Defending against adversarial videos (OUDefend).
OUDefend is designed to balance local and global features by learning both overcomplete and undercomplete representations.
Experimental results show that defenses designed for images may be ineffective against videos, while OUDefend enhances robustness against different types of adversarial videos.
- Score: 72.04912755926524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial robustness of deep neural networks is an extensively studied problem in the literature, and various methods have been proposed to defend against adversarial images. However, only a handful of defense methods have been developed for defending against adversarial videos. In this paper, we propose a novel Over-and-Under complete restoration network for Defending against adversarial videos (OUDefend). Most restoration networks adopt an encoder-decoder architecture that first shrinks the spatial dimension and then expands it back. This approach learns undercomplete representations, which have large receptive fields that collect global information but overlook local details. Overcomplete representations, on the other hand, have the opposite properties. Hence, OUDefend is designed to balance local and global features by learning both types of representations. We attach OUDefend to target video recognition models as a feature restoration block and train the entire network end-to-end. Experimental results show that defenses designed for images may be ineffective against videos, whereas OUDefend enhances robustness against different types of adversarial videos, ranging from additive and multiplicative attacks to physically realizable attacks.
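To make the over-and-under complete design concrete, here is a minimal sketch of such a restoration block, assuming PyTorch. The branch depths, the pooling and upsampling choices, and the residual fusion are illustrative assumptions rather than the authors' exact architecture; the undercomplete branch shrinks the feature map for a large receptive field (global context), while the overcomplete branch enlarges it to preserve local detail.

```python
# Minimal sketch (PyTorch assumed) of an over-and-under complete restoration
# block in the spirit of OUDefend. Layer sizes and the fusion scheme are
# illustrative assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 conv + BN + ReLU, the basic unit used in both branches."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class OUDefendBlock(nn.Module):
    """Restores a feature map with an undercomplete (shrink-then-expand)
    branch for global context and an overcomplete (expand-then-shrink)
    branch for local detail; the two outputs are fused residually."""

    def __init__(self, channels):
        super().__init__()
        # Undercomplete branch: downsample first, then upsample back.
        self.under_enc = conv_block(channels, channels)
        self.under_dec = conv_block(channels, channels)
        # Overcomplete branch: upsample first, then downsample back.
        self.over_enc = conv_block(channels, channels)
        self.over_dec = conv_block(channels, channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        # Undercomplete path: large receptive field, global information.
        u = F.max_pool2d(self.under_enc(x), kernel_size=2)
        u = F.interpolate(self.under_dec(u), size=(h, w),
                          mode="bilinear", align_corners=False)
        # Overcomplete path: enlarged feature maps, local detail.
        o = F.interpolate(x, scale_factor=2,
                          mode="bilinear", align_corners=False)
        o = self.over_dec(self.over_enc(o))
        o = F.adaptive_max_pool2d(o, output_size=(h, w))
        # Fuse both views and restore the input features residually.
        return x + self.fuse(torch.cat([u, o], dim=1))


if __name__ == "__main__":
    feats = torch.randn(2, 64, 28, 28)   # e.g. per-frame backbone features
    restored = OUDefendBlock(64)(feats)
    print(restored.shape)                # torch.Size([2, 64, 28, 28])
```

In line with the abstract, a block like this would be attached to an intermediate feature map of the target video recognition model (applied per frame) and the whole network trained end-to-end with the recognition loss.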
Related papers
- Query-Efficient Video Adversarial Attack with Stylized Logo [17.268709979991996]
Video classification systems based on Deep Neural Networks (DNNs) are highly vulnerable to adversarial examples.
We propose a novel black-box video attack framework called Stylized Logo Attack (SLA).
SLA is conducted in three steps. The first step builds a style reference set for logos, which not only makes the generated examples more natural but also carries more target-class features in targeted attacks.
arXiv Detail & Related papers (2024-08-22T03:19:09Z) - Improving Adversarial Robustness via Decoupled Visual Representation Masking [65.73203518658224]
In this paper, we highlight two novel properties of robust features from the feature distribution perspective.
We find that state-of-the-art defense methods aim to address both of these mentioned issues well.
Specifically, we propose a simple but effective defense based on decoupled visual representation masking.
arXiv Detail & Related papers (2024-06-16T13:29:41Z) - DreaMo: Articulated 3D Reconstruction From A Single Casual Video [59.87221439498147]
We study articulated 3D shape reconstruction from a single and casually captured internet video, where the subject's view coverage is incomplete.
DreaMo shows promising quality in novel-view rendering, detailed articulated shape reconstruction, and skeleton generation.
arXiv Detail & Related papers (2023-12-05T09:47:37Z) - Temporal-Distributed Backdoor Attack Against Video Based Action Recognition [21.916002204426853]
We introduce a simple yet effective backdoor attack against video data.
Our proposed attack, adding perturbations in a transformed domain, plants an imperceptible, temporally distributed trigger across the video frames.
arXiv Detail & Related papers (2023-08-21T22:31:54Z) - Defending Against Person Hiding Adversarial Patch Attack with a Universal White Frame [28.128458352103543]
High-performance object detection networks are vulnerable to adversarial patch attacks.
Person-hiding attacks are emerging as a serious problem in many safety-critical applications.
We propose a novel defense strategy that mitigates a person-hiding attack by optimizing defense patterns.
arXiv Detail & Related papers (2022-04-27T15:18:08Z) - Attacking Video Recognition Models with Bullet-Screen Comments [79.53159486470858]
We introduce a novel adversarial attack that fools video recognition models with bullet-screen comment (BSC) attacks.
BSCs can be regarded as a kind of meaningful patch; adding one to a clean video neither affects people's understanding of the video content nor arouses suspicion.
arXiv Detail & Related papers (2021-10-29T08:55:50Z) - Sparse Coding Frontend for Robust Neural Networks [11.36192454455449]
Deep Neural Networks are known to be vulnerable to small, adversarially crafted, perturbations.
Current defense methods against these adversarial attacks are variants of adversarial training.
In this paper, we introduce a radically different defense based on a sparse coding frontend trained on clean images (a generic sketch of such a frontend appears after the related papers list).
arXiv Detail & Related papers (2021-04-12T11:14:32Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks [54.82488484053263]
Deep neural networks for video classification may be subjected to adversarial manipulation.
We present a manipulation scheme for fooling video classifiers by introducing a flickering temporal perturbation (a minimal sketch of this kind of perturbation follows the list).
The attack was implemented on several target models, and its transferability was demonstrated.
arXiv Detail & Related papers (2020-02-12T17:58:12Z)
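For intuition about the flickering attack summarized in the last entry above, below is a minimal sketch assuming a NumPy video array of shape (T, H, W, 3) in [0, 1]. The perturbation is one small constant RGB offset per frame, so it shows up as a temporal flicker rather than a spatial pattern; the random offsets here are placeholders for the optimized, classifier-specific perturbation, which is not shown.

```python
# Minimal sketch of a flickering-style video perturbation, assuming a NumPy
# video tensor of shape (T, H, W, 3) in [0, 1]. The per-frame RGB offsets
# (one constant color shift per frame) are illustrative; the actual attack
# optimizes them against a target classifier, which is not shown here.
import numpy as np


def apply_flicker(video: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Add one constant RGB offset per frame and clip back to [0, 1].

    video:   (T, H, W, 3) float array in [0, 1]
    offsets: (T, 3) float array, one small RGB shift per frame
    """
    perturbed = video + offsets[:, None, None, :]
    return np.clip(perturbed, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.random((16, 112, 112, 3), dtype=np.float32)
    # Small random flicker as a placeholder for an optimized perturbation.
    offsets = 0.05 * (rng.random((16, 3), dtype=np.float32) - 0.5)
    adv_video = apply_flicker(video, offsets)
    print(adv_video.shape, float(np.abs(adv_video - video).max()))
```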
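Similarly, the sparse coding frontend entry above can be illustrated with a generic patch-based sparse coding defense, assuming scikit-learn and grayscale images. The dictionary size, patch size, and sparsity level are arbitrary choices for illustration, and the paper's actual frontend design may differ; the idea shown is simply that reconstructing an input through a dictionary learned only on clean data suppresses perturbation components the dictionary cannot represent.

```python
# Minimal sketch of a generic sparse-coding frontend, assuming scikit-learn
# and grayscale images. Patch size, dictionary size, and sparsity level are
# illustrative; the paper's actual frontend may differ substantially.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (
    extract_patches_2d, reconstruct_from_patches_2d)


def fit_dictionary(clean_images, patch_size=(8, 8), n_atoms=128):
    """Learn a patch dictionary from clean images only."""
    patches = np.concatenate(
        [extract_patches_2d(img, patch_size, max_patches=500, random_state=0)
         for img in clean_images])
    patches = patches.reshape(len(patches), -1)
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms, transform_algorithm="omp",
        transform_n_nonzero_coefs=5, random_state=0)
    dico.fit(patches)
    return dico


def sparse_code_frontend(image, dico, patch_size=(8, 8)):
    """Sparse-code each patch against the clean-image dictionary and
    reconstruct, suppressing components the dictionary cannot represent."""
    patches = extract_patches_2d(image, patch_size)
    codes = dico.transform(patches.reshape(len(patches), -1))
    recon = (codes @ dico.components_).reshape(-1, *patch_size)
    return reconstruct_from_patches_2d(recon, image.shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = [rng.random((64, 64)) for _ in range(5)]  # placeholder clean images
    dico = fit_dictionary(clean)
    test = rng.random((64, 64))                       # possibly perturbed input
    print(sparse_code_frontend(test, dico).shape)     # (64, 64)
```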