Hybrid Spatial-Temporal Entropy Modelling for Neural Video Compression
- URL: http://arxiv.org/abs/2207.05894v1
- Date: Wed, 13 Jul 2022 00:03:54 GMT
- Title: Hybrid Spatial-Temporal Entropy Modelling for Neural Video Compression
- Authors: Jiahao Li, Bin Li, Yan Lu
- Abstract summary: This paper proposes a powerful entropy model which efficiently captures both spatial and temporal dependencies.
Our entropy model achieves an 18.2% bitrate saving on the UVG dataset compared with H.266 (VTM) under its highest-compression configuration.
- Score: 25.96187914295921
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For neural video codec, it is critical, yet challenging, to design an
efficient entropy model which can accurately predict the probability
distribution of the quantized latent representation. However, most existing video codecs directly reuse ready-made entropy models from image codecs to encode the residual or motion, and do not fully exploit the spatial-temporal characteristics of video. To this end, this paper proposes a powerful entropy
model which efficiently captures both spatial and temporal dependencies. In
particular, we introduce the latent prior, which exploits the correlation among latent representations across frames to squeeze out temporal redundancy. Meanwhile, the dual spatial prior is proposed to reduce spatial redundancy in a
parallel-friendly manner. In addition, our entropy model is versatile: besides estimating the probability distribution, it also generates the quantization step in a spatial-channel-wise manner. This content-adaptive quantization mechanism not only lets our codec achieve smooth rate adjustment within a single model but also improves the final rate-distortion
performance by dynamic bit allocation. Experimental results show that, powered
by the proposed entropy model, our neural codec can achieve 18.2% bitrate
saving on UVG dataset when compared with H.266 (VTM) using the highest
compression ratio configuration. This marks a new milestone in the development of
neural video codec. The codes are at https://github.com/microsoft/DCVC.
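The three ingredients above can be pictured concretely: the latent prior conditions on the previous frame's latent, the dual spatial prior codes part of each latent conditioned on the rest in two parallel passes, and a separate head predicts a positive quantization step for every channel and spatial position. Below is a minimal PyTorch sketch of that structure; the module names, the checkerboard split, and all shapes are illustrative assumptions, not the released DCVC code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridEntropyModel(nn.Module):
    """Toy sketch: temporal latent prior + dual (two-pass) spatial prior
    + content-adaptive quantization step. Not the paper's implementation."""

    def __init__(self, ch=64):
        super().__init__()
        # Latent prior: reuse the previous frame's latent as temporal context.
        self.temporal_prior = nn.Conv2d(ch, ch * 2, 3, padding=1)
        # Dual spatial prior: non-anchor positions are predicted from the
        # already-decoded anchor positions, so each pass is fully parallel.
        self.spatial_prior = nn.Conv2d(ch, ch * 2, 3, padding=1)
        # Spatial-channel-wise quantization step (kept positive via softplus).
        self.q_head = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.Softplus())

    def forward(self, y, y_prev):
        q = self.q_head(y_prev) + 1e-6            # content-adaptive step size
        y_hat = torch.round(y / q) * q            # quantize with learned step
        mean, scale = self.temporal_prior(y_prev).chunk(2, dim=1)
        # Pass 1 codes the checkerboard "anchors" with the temporal prior;
        # pass 2 refines the rest using decoded anchors as spatial context.
        mask = torch.zeros_like(y_hat)
        mask[..., ::2, ::2] = 1
        mask[..., 1::2, 1::2] = 1
        m2, s2 = self.spatial_prior(y_hat * mask).chunk(2, dim=1)
        mean = mean * mask + m2 * (1 - mask)
        scale = F.softplus(scale * mask + s2 * (1 - mask))
        return y_hat, mean, scale
```

Because the same head can scale q up or down, a single model covers a range of rates, and spending smaller steps (more bits) on complex regions is the dynamic bit allocation the abstract refers to.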
Related papers
- SIGMA: Sinkhorn-Guided Masked Video Modeling [69.31715194419091]
Sinkhorn-guided Masked Video Modelling (SIGMA) is a novel video pretraining method.
We distribute features of space-time tubes evenly across a limited number of learnable clusters.
Experimental results on ten datasets validate the effectiveness of SIGMA in learning more performant, temporally-aware, and robust video representations.
arXiv Detail & Related papers (2024-07-22T08:04:09Z)
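SIGMA's even distribution of tube features over learnable clusters is the kind of balanced assignment typically computed with a few Sinkhorn-Knopp normalization steps (as in SwAV-style pretraining). A small PyTorch sketch of that step; the iteration count and temperature are illustrative rather than taken from the paper.

```python
import torch

def sinkhorn(scores: torch.Tensor, n_iters: int = 3, eps: float = 0.05):
    """Balanced soft assignment of N features to K clusters.

    scores: (N, K) similarity logits. Returned rows sum to 1 and columns
    sum to ~N/K, i.e. features are spread evenly across clusters."""
    Q = torch.exp(scores / eps)
    Q = Q / Q.sum()                              # joint distribution over (N, K)
    N, K = Q.shape
    for _ in range(n_iters):
        Q = Q / Q.sum(dim=0, keepdim=True) / K   # equalize cluster marginals
        Q = Q / Q.sum(dim=1, keepdim=True) / N   # restore per-feature marginals
    return Q * N

# Toy usage: 8 space-time tube features, 4 clusters.
assignments = sinkhorn(torch.randn(8, 4))
print(assignments.sum(dim=0))  # each cluster receives ~2 features' worth of mass
```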
- Frequency Disentangled Features in Neural Image Compression [13.016298207860974]
A neural image compression network is governed by how well the entropy model matches the true distribution of the latent code.
In this paper, we propose a feature-level frequency disentanglement to help the relaxed scalar quantization achieve lower bit rates.
The proposed network not only outperforms hand-engineered codecs, but also neural network-based codecs built on heavy, spatially autoregressive entropy models.
arXiv Detail & Related papers (2023-08-04T14:55:44Z)
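The "relaxed scalar quantization" mentioned here commonly means replacing hard rounding with a differentiable surrogate during training, either additive uniform noise (the Ballé-style relaxation) or a straight-through round. A hedged sketch of both variants, independent of this paper's exact formulation:

```python
import torch

def quantize(y: torch.Tensor, training: bool, mode: str = "noise") -> torch.Tensor:
    """Relaxed scalar quantization: differentiable at train time, hard at test time."""
    if not training:
        return torch.round(y)                    # real quantization for coding
    if mode == "noise":
        # U(-0.5, 0.5) noise models the rounding error smoothly.
        return y + torch.empty_like(y).uniform_(-0.5, 0.5)
    # Straight-through estimator: round in the forward pass, identity gradient.
    return y + (torch.round(y) - y).detach()
```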
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Video Probabilistic Diffusion Models in Projected Latent Space [75.4253202574722]
We propose a novel generative model for videos, coined projected latent video diffusion models (PVDM).
PVDM learns a video distribution in a low-dimensional latent space and thus can be efficiently trained with high-resolution videos under limited resources.
arXiv Detail & Related papers (2023-02-15T14:22:34Z)
- Entroformer: A Transformer-based Entropy Model for Learned Image Compression [17.51693464943102]
We propose a novel transformer-based entropy model, termed Entroformer, to capture long-range dependencies in probability distribution estimation.
The experiments show that the Entroformer achieves state-of-the-art performance on image compression while being time-efficient.
arXiv Detail & Related papers (2022-02-11T08:03:31Z)
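A transformer-based entropy model flattens the latent grid into tokens and uses masked self-attention so each symbol's (mean, scale) is predicted from long-range context. The sketch below assumes a mean-scale Gaussian head and stock PyTorch transformer layers; the sizes and masking scheme are illustrative, not Entroformer's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformerEntropyModel(nn.Module):
    def __init__(self, ch=192, dim=256, heads=8, layers=4):
        super().__init__()
        self.embed = nn.Linear(ch, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.head = nn.Linear(dim, ch * 2)        # per-channel mean and scale

    def forward(self, tokens):
        # tokens: (B, HW, ch) flattened latent grid. In a real codec the
        # sequence is shifted so position i only attends to symbols < i.
        L = tokens.size(1)
        mask = torch.triu(torch.full((L, L), float("-inf"),
                                     device=tokens.device), diagonal=1)
        h = self.encoder(self.embed(tokens), mask=mask)
        mean, scale = self.head(h).chunk(2, dim=-1)
        return mean, F.softplus(scale)
```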
- Instance-Adaptive Video Compression: Improving Neural Codecs by Training on the Test Set [14.89208053104896]
We introduce a video compression algorithm based on instance-adaptive learning.
On each video sequence to be transmitted, we finetune a pretrained compression model.
We show that this enables competitive performance even after reducing the network size by 70%.
arXiv Detail & Related papers (2021-11-19T16:25:34Z)
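Instance-adaptive compression spends extra encode time overfitting the codec to the one sequence being transmitted. A toy version of the finetuning loop, assuming a hypothetical `model(x) -> (x_hat, bits)` codec interface and a standard rate-distortion Lagrangian:

```python
import torch

def finetune_on_sequence(model, frames, lam=0.01, steps=50, lr=1e-5):
    """Overfit a pretrained codec to a single video before encoding it."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        for x in frames:                       # x: (1, 3, H, W) frame tensor
            x_hat, bits = model(x)             # hypothetical codec call
            distortion = torch.mean((x - x_hat) ** 2)
            loss = bits + lam * distortion     # rate-distortion trade-off
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```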
- Overfitting for Fun and Profit: Instance-Adaptive Data Compression [20.764189960709164]
Neural data compression has been shown to outperform classical methods in terms of rate-distortion (RD) performance.
In this paper we take this concept to the extreme, adapting the full model to a single video, and sending model updates along with the latent representation.
We demonstrate that full-model adaptation improves RD performance by 1 dB with respect to encoder-only finetuning.
arXiv Detail & Related papers (2021-01-21T15:58:58Z)
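What distinguishes full-model adaptation is that the decoder must receive the adapted weights, so the per-parameter differences from the shared pretrained model are quantized and entropy-coded along with the latents. A sketch of that delta path; the uniform step size and function names are assumptions, not the paper's scheme.

```python
import torch

def encode_weight_delta(pretrained, adapted, step=1e-3):
    """Quantize per-parameter weight differences into integer symbols."""
    deltas = {}
    for (name, p0), p1 in zip(pretrained.named_parameters(),
                              adapted.parameters()):
        deltas[name] = torch.round((p1 - p0) / step)  # symbols to entropy-code
    return deltas

def apply_weight_delta(pretrained, deltas, step=1e-3):
    """Decoder side: rebuild the adapted model from pretrained + delta."""
    with torch.no_grad():
        for name, p in pretrained.named_parameters():
            p.add_(deltas[name] * step)
```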
- Causal Contextual Prediction for Learned Image Compression [36.08393281509613]
We propose the concept of separate entropy coding to leverage a serial decoding process for causal contextual entropy prediction in the latent space.
A causal context model is proposed that separates the latents across channels and makes use of cross-channel relationships to generate highly informative contexts.
We also propose a causal global prediction model, which is able to find global reference points for accurate predictions of unknown points.
arXiv Detail & Related papers (2020-11-19T08:15:10Z)
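Separating latents across channels means decoding a few channel groups in sequence, with group k's distribution predicted from groups 0..k-1; the context stays causal without per-pixel serial decoding. A compact sketch (the group count, conv sizes, and fixed prior for the first group are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelContextModel(nn.Module):
    """Predict each channel group's (mean, scale) from earlier groups."""

    def __init__(self, ch=192, groups=4):
        super().__init__()
        self.g, self.groups = ch // groups, groups
        self.ctx = nn.ModuleList(
            nn.Conv2d(self.g * k, self.g * 2, 3, padding=1)
            for k in range(1, groups))

    def forward(self, y_hat):
        b, _, h, w = y_hat.shape
        mean0 = torch.zeros(b, self.g, h, w, device=y_hat.device)
        params = [(mean0, torch.ones_like(mean0))]     # group 0: fixed prior
        for k in range(1, self.groups):                # groups decoded in order
            m, s = self.ctx[k - 1](y_hat[:, : self.g * k]).chunk(2, dim=1)
            params.append((m, F.softplus(s)))
        return params
```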
- Conditional Entropy Coding for Efficient Video Compression [82.35389813794372]
We propose a very simple and efficient video compression framework that only focuses on modeling the conditional entropy between frames.
We first show that a simple architecture modeling the entropy between the image latent codes is as competitive as other neural video compression works and video codecs.
We then propose a novel internal learning extension on top of this architecture that brings an additional 10% savings without trading off decoding speed.
arXiv Detail & Related papers (2020-08-20T20:01:59Z)
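Modeling the conditional entropy between frames amounts to coding frame t's latent under a distribution predicted from frame t-1's latent; the expected bitrate is the cross entropy under that conditional model. A minimal sketch with a conditional mean-scale Gaussian (the two-layer conv conditioner is an assumption):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalEntropyModel(nn.Module):
    """Approximate p(y_t | y_{t-1}) with a conditional mean-scale Gaussian."""

    def __init__(self, ch=128):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, ch * 2, 3, padding=1))

    def bits(self, y_t, y_prev):
        mean, scale = self.net(y_prev).chunk(2, dim=1)
        gauss = torch.distributions.Normal(mean, F.softplus(scale) + 1e-6)
        # Probability mass of each rounded symbol over a unit-width bin.
        p = gauss.cdf(y_t + 0.5) - gauss.cdf(y_t - 0.5)
        return -torch.log2(p.clamp_min(1e-9)).sum()   # estimated bits for frame t
```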
- Learning for Video Compression with Recurrent Auto-Encoder and Recurrent Probability Model [164.7489982837475]
This paper proposes a Recurrent Learned Video Compression (RLVC) approach with a Recurrent Auto-Encoder (RAE) and a Recurrent Probability Model (RPM).
The RAE employs recurrent cells in both the encoder and decoder to exploit the temporal correlation among video frames.
Our approach achieves the state-of-the-art learned video compression performance in terms of both PSNR and MS-SSIM.
arXiv Detail & Related papers (2020-06-24T08:46:33Z)
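The recurrent auto-encoder idea is to thread a hidden state through time so each frame's encoding reuses context from earlier frames. A toy ConvGRU-style cell illustrating that state propagation; the cell design is a generic stand-in, not RLVC's exact RAE.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Generic recurrent cell for a recurrent auto-encoder sketch."""

    def __init__(self, ch):
        super().__init__()
        self.gates = nn.Conv2d(2 * ch, 2 * ch, 3, padding=1)
        self.cand = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
        h_new = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_new

# The hidden state carries temporal correlation across the frame loop.
cell = ConvGRUCell(ch=64)
h = torch.zeros(1, 64, 16, 16)
for x in torch.randn(5, 1, 64, 16, 16):   # five frames of feature maps
    h = cell(x, h)
```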
- Denoising Diffusion Probabilistic Models [91.94962645056896]
We present high quality image synthesis results using diffusion probabilistic models.
Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics.
arXiv Detail & Related papers (2020-06-19T17:24:44Z)
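With the paper's simplified weighting, the weighted variational bound reduces to an MSE between the injected noise and the network's noise prediction at a random timestep. A self-contained sketch of that objective; `model(x_t, t)` is a placeholder for any noise-prediction network.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(model, x0):
    """L_simple: predict the noise added at a uniformly sampled step t."""
    t = torch.randint(0, T, (x0.size(0),))
    noise = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise   # closed-form forward process
    return torch.mean((noise - model(x_t, t)) ** 2)
```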