Adversarial Attacks on Deep Learning-based Video Compression and
Classification Systems
- URL: http://arxiv.org/abs/2203.10183v1
- Date: Fri, 18 Mar 2022 22:42:20 GMT
- Authors: Jung-Woo Chang, Mojan Javaheripi, Seira Hidano, Farinaz Koushanfar
- Abstract summary: We conduct the first systematic study of adversarial attacks on deep learning-based video compression and downstream classification systems.
We propose an adaptive adversarial attack that can manipulate the Rate-Distortion relationship of a video compression model to achieve two adversarial goals.
We also devise novel objectives for targeted and untargeted attacks on a downstream video classification service.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video compression plays a crucial role in enabling video streaming and
classification systems and maximizing the end-user quality of experience (QoE)
at a given bandwidth budget. In this paper, we conduct the first systematic
study of adversarial attacks on deep learning-based video compression and
downstream classification systems. We propose an adaptive adversarial attack
that can manipulate the Rate-Distortion (R-D) relationship of a video
compression model to achieve two adversarial goals: (1) increasing the network
bandwidth or (2) degrading the video quality for end-users. We further devise
novel objectives for targeted and untargeted attacks on a downstream video
classification service. Finally, we design an input-invariant perturbation that
universally disrupts video compression and classification systems in real time.
Unlike previously proposed attacks on video classification, our adversarial
perturbations are the first to withstand compression. We empirically show the
resilience of our attacks against various defenses, i.e., adversarial training,
video denoising, and JPEG compression. Our extensive experimental results on
various video datasets demonstrate the effectiveness of our attacks. Our video
quality and bandwidth attacks degrade peak signal-to-noise ratio (PSNR) by up to
5.4 dB and inflate the bit-rate by up to 2.4x on standard video compression
datasets, while achieving over a 90% attack success rate on a downstream
classifier.
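The bandwidth attack described above can be illustrated with a minimal sketch: perturb the input so that a differentiable bit-rate proxy of a learned codec increases, under an imperceptibility budget. The "analysis transform" and L1 rate proxy below are hypothetical stand-ins, not the paper's actual model; a real attack would differentiate through the trained compression network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned codec's analysis transform.
# A real attack would backpropagate through the actual R-D model.
W = rng.standard_normal((64, 256))

def rate_proxy(x):
    """Differentiable proxy for bit-rate: L1 norm of the latent code."""
    return np.abs(W @ x).sum()

def rate_grad(x):
    """Gradient of the rate proxy with respect to the input frame."""
    return W.T @ np.sign(W @ x)

def bandwidth_attack(x, eps=0.03):
    """One-step sign-gradient perturbation that pushes the bit-rate up,
    under an L-infinity budget eps (keeps the change imperceptible)."""
    return np.clip(x + eps * np.sign(rate_grad(x)), 0.0, 1.0)

x = rng.random(256)          # a flattened "frame" with pixels in [0, 1]
x_adv = bandwidth_attack(x)  # perturbed frame costs more bits to encode
```

The distortion variant of the attack follows the same recipe with a reconstruction-error objective in place of the rate proxy.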
Related papers
- Perceptual Quality Improvement in Videoconferencing using Keyframes-based GAN
We propose a novel GAN-based method for compression-artifact reduction in videoconferencing.
First, we extract multi-scale features from the compressed and reference frames.
Then, our architecture combines these features progressively, guided by facial landmarks.
arXiv Detail & Related papers (2023-11-07T16:38:23Z)
- NetFlick: Adversarial Flickering Attacks on Deep Learning Based Video Compression
Deep learning-based video compression methods are replacing traditional algorithms and providing state-of-the-art results on edge devices.
We present a real-world LED attack crafted to target video compression frameworks.
Our physically realizable attack, dubbed NetFlick, degrades the temporal correlation between successive frames by injecting flickering temporal perturbations.
In addition, we propose universal perturbations that can degrade the performance of incoming video without prior knowledge of its contents.
arXiv Detail & Related papers (2023-04-04T01:29:51Z)
- StyleFool: Fooling Video Classification Systems via Style Transfer
StyleFool is a black-box video adversarial attack that uses style transfer to fool video classification systems.
StyleFool outperforms state-of-the-art adversarial attacks in the number of queries and in robustness against existing defenses.
arXiv Detail & Related papers (2022-03-30T02:18:16Z)
- Leveraging Bitstream Metadata for Fast, Accurate, Generalized Compressed Video Quality Enhancement
We develop a deep learning architecture capable of restoring detail to compressed videos.
We condition our model on quantization data that is readily available in the bitstream.
We show that this improves restoration accuracy compared to prior compression-correction methods.
arXiv Detail & Related papers (2022-01-31T18:56:04Z)
- Attacking Video Recognition Models with Bullet-Screen Comments
We introduce a novel adversarial attack that targets video recognition models with bullet-screen comment (BSC) attacks.
BSCs can be regarded as a kind of meaningful patch: adding one to a clean video neither affects people's understanding of the video content nor arouses their suspicion.
arXiv Detail & Related papers (2021-10-29T08:55:50Z)
- Boosting the Transferability of Video Adversarial Examples via Temporal Translation
Adversarial examples are transferable, which makes them feasible for black-box attacks in real-world applications.
We introduce a temporal translation attack method that optimizes the adversarial perturbations over a set of temporally translated video clips.
Experiments on the Kinetics-400 and UCF-101 datasets demonstrate that our method significantly boosts the transferability of video adversarial examples.
arXiv Detail & Related papers (2021-10-18T07:52:17Z)
- Perceptual Learned Video Compression with Recurrent Conditional GAN
We propose a Perceptual Learned Video Compression (PLVC) approach with a recurrent conditional generative adversarial network.
PLVC learns to compress video toward good perceptual quality at low bit-rates.
A user study further validates the outstanding perceptual performance of PLVC in comparison with the latest learned video compression approaches.
arXiv Detail & Related papers (2021-09-07T13:36:57Z)
- Content Adaptive and Error Propagation Aware Deep Video Compression
We propose a content-adaptive and error-propagation-aware video compression system.
Our method employs a joint training strategy that considers the compression performance of multiple consecutive frames instead of a single frame.
Instead of using the hand-crafted coding modes of traditional compression systems, we design an online encoder-updating scheme.
arXiv Detail & Related papers (2020-03-25T09:04:24Z)
- Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks
Deep neural networks for video classification may be subjected to adversarial manipulation.
We present a manipulation scheme for fooling video classifiers by introducing a flickering temporal perturbation.
The attack was implemented on several target models, and its transferability was demonstrated.
arXiv Detail & Related papers (2020-02-12T17:58:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.