Deep Contextual Video Compression
- URL: http://arxiv.org/abs/2109.15047v1
- Date: Thu, 30 Sep 2021 12:14:24 GMT
- Title: Deep Contextual Video Compression
- Authors: Jiahao Li, Bin Li, Yan Lu
- Abstract summary: We propose a deep contextual video compression framework to enable a paradigm shift from predictive coding to conditional coding.
Our method can significantly outperform the previous state-of-the-art (SOTA) deep video compression methods.
- Score: 20.301569390401102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most of the existing neural video compression methods adopt the predictive
coding framework, which first generates the predicted frame and then encodes
its residue with the current frame. However, in terms of compression ratio,
predictive coding is only a sub-optimal solution, as it uses a simple
subtraction operation to remove the redundancy across frames. In this paper, we propose a
deep contextual video compression framework to enable a paradigm shift from
predictive coding to conditional coding. In particular, we try to answer the
following questions: how to define, use, and learn the condition under a deep
video compression framework. To tap the potential of conditional coding, we
propose using the feature-domain context as the condition. This enables us to
leverage the high-dimensional context to carry rich information to both the
encoder and the decoder, which helps reconstruct the high-frequency content for higher video
quality. Our framework is also extensible, in which the condition can be
flexibly designed. Experiments show that our method can significantly
outperform the previous state-of-the-art (SOTA) deep video compression methods.
When compared with x265 using veryslow preset, we can achieve 26.0% bitrate
saving for 1080P standard test videos.
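The contrast drawn above between predictive (residual) coding and conditional coding can be illustrated with a minimal numerical sketch. The linear `encode`/`decode` pair and the `context` variable below are hypothetical stand-ins for the learned, context-conditioned transforms and the feature-domain context described in the abstract; they are not the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frames: the current frame is a correlated version of the reference.
ref = rng.standard_normal((4, 4))
cur = 0.9 * ref + 0.1

# Predictive (residual) coding: redundancy removed by plain subtraction.
pred = ref                        # stand-in for a motion-compensated prediction
residual = cur - pred             # only this residual would be entropy coded
recon_pred = pred + residual      # decoder adds the residual back

# Conditional coding (sketch): the context is an *input* to the transforms,
# not a fixed subtrahend. These linear maps are hypothetical stand-ins for
# learned encoder/decoder networks conditioned on feature-domain context.
context = ref                     # stand-in for the feature-domain context
encode = lambda x, ctx: x - 0.9 * ctx
decode = lambda code, ctx: code + 0.9 * ctx
recon_cond = decode(encode(cur, context), context)

# Both pipelines reconstruct the frame losslessly in this toy setting.
assert np.allclose(recon_pred, cur)
assert np.allclose(recon_cond, cur)

# But the conditional code is the constant 0.1, while the plain residual
# still carries 0.1 * (1 - ref): a transform matched to the context can
# squeeze out correlation that fixed subtraction leaves behind.
print(float(np.abs(encode(cur, context)).mean()))  # ~0.1 for this toy
```

The point of the sketch is only the structural difference: subtraction fixes the way the reference is used, whereas conditioning lets the codec decide how to exploit the context.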
Related papers
- Accelerating Learned Video Compression via Low-Resolution Representation Learning [18.399027308582596]
We introduce an efficiency-optimized framework for learned video compression that focuses on low-resolution representation learning.
Our method achieves performance levels on par with the low-delay P configuration of the H.266 reference software VTM.
arXiv Detail & Related papers (2024-07-23T12:02:57Z)
- Predictive Coding For Animation-Based Video Compression [13.161311799049978]
We propose a predictive coding scheme which uses image animation as a predictor, and codes the residual with respect to the actual target frame.
Our experiments indicate a significant gain, in excess of 70% compared to the HEVC video standard and over 30% compared to VVC.
arXiv Detail & Related papers (2023-07-09T14:40:54Z)
- Advancing Learned Video Compression with In-loop Frame Prediction [177.67218448278143]
In this paper, we propose an Advanced Learned Video Compression (ALVC) approach with the in-loop frame prediction module.
The predicted frame can serve as a better reference than the previously compressed frame, and therefore it benefits the compression performance.
The experiments show the state-of-the-art performance of our ALVC approach in learned video compression.
arXiv Detail & Related papers (2022-11-13T19:53:14Z)
- Leveraging Bitstream Metadata for Fast, Accurate, Generalized Compressed Video Quality Enhancement [74.1052624663082]
We develop a deep learning architecture capable of restoring detail to compressed videos.
We show that this improves restoration accuracy compared to prior compression correction methods.
We condition our model on quantization data which is readily available in the bitstream.
arXiv Detail & Related papers (2022-01-31T18:56:04Z)
- Microdosing: Knowledge Distillation for GAN based Compression [18.140328230701233]
We show how to leverage knowledge distillation to obtain equally capable image decoders at a fraction of the original number of parameters.
This allows us to reduce the model size by a factor of 20 and to achieve 50% reduction in decoding time.
arXiv Detail & Related papers (2022-01-07T14:27:16Z)
- COMISR: Compression-Informed Video Super-Resolution [76.94152284740858]
Most videos on the web or mobile devices are compressed, and the compression can be severe when the bandwidth is limited.
We propose a new compression-informed video super-resolution model to restore high-resolution content without introducing artifacts caused by compression.
arXiv Detail & Related papers (2021-05-04T01:24:44Z)
- Conditional Entropy Coding for Efficient Video Compression [82.35389813794372]
We propose a very simple and efficient video compression framework that only focuses on modeling the conditional entropy between frames.
We first show that a simple architecture modeling the entropy between the image latent codes is as competitive as other neural video compression works and video codecs.
We then propose a novel internal learning extension on top of this architecture that brings an additional 10% savings without trading off decoding speed.
arXiv Detail & Related papers (2020-08-20T20:01:59Z)
- Content Adaptive and Error Propagation Aware Deep Video Compression [110.31693187153084]
We propose a content adaptive and error propagation aware video compression system.
Our method employs a joint training strategy by considering the compression performance of multiple consecutive frames instead of a single frame.
Instead of using the hand-crafted coding modes in the traditional compression systems, we design an online encoder updating scheme in our system.
arXiv Detail & Related papers (2020-03-25T09:04:24Z)
- Learning for Video Compression with Hierarchical Quality and Recurrent Enhancement [164.7489982837475]
We propose a Hierarchical Learned Video Compression (HLVC) method with three hierarchical quality layers and a recurrent enhancement network.
In our HLVC approach, the hierarchical quality benefits the coding efficiency, since the high quality information facilitates the compression and enhancement of low quality frames at encoder and decoder sides.
arXiv Detail & Related papers (2020-03-04T09:31:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.