Learning to Compress Videos without Computing Motion
- URL: http://arxiv.org/abs/2009.14110v3
- Date: Sun, 27 Mar 2022 03:52:39 GMT
- Title: Learning to Compress Videos without Computing Motion
- Authors: Meixu Chen, Todd Goodall, Anjul Patney, and Alan C. Bovik
- Abstract summary: We propose a new deep learning video compression architecture that does not require motion estimation.
Our framework exploits the regularities inherent to video motion, which we capture by using displaced frame differences as video representations.
Our experiments show that our compression model, which we call the MOtionless VIdeo Codec (MOVI-Codec), learns how to efficiently compress videos without computing motion.
- Score: 39.46212197928986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of higher-resolution content and displays, the
resulting data volumes pose significant challenges to the goals of acquiring,
transmitting, compressing, and displaying high-quality video content. In this
paper, we propose a new deep learning video compression architecture that does
not require motion estimation, which is the most expensive element of modern
hybrid video compression codecs like H.264 and HEVC. Our framework exploits the
regularities inherent to video motion, which we capture by using displaced
frame differences as video representations to train the neural network. In
addition, we propose a new space-time reconstruction network based on both an
LSTM model and a UNet model, which we call LSTM-UNet. The new video compression
framework has three components: a Displacement Calculation Unit (DCU), a
Displacement Compression Network (DCN), and a Frame Reconstruction Network
(FRN). The DCU removes the need for motion estimation found in hybrid codecs
and is less expensive. In the DCN, an RNN-based network is utilized to compress
displaced frame differences as well as retain temporal information between
frames. The LSTM-UNet is used in the FRN to learn space-time differential
representations of videos. Our experimental results show that our compression
model, which we call the MOtionless VIdeo Codec (MOVI-Codec), learns how to
efficiently compress videos without computing motion. Our experiments show that
MOVI-Codec outperforms the Low-Delay P veryfast setting of the video coding
standard H.264 and exceeds the performance of the modern global standard HEVC
codec, using the same setting, as measured by MS-SSIM, especially on higher
resolution videos. In addition, our network outperforms the latest H.266 (VVC)
codec at higher bitrates, when assessed using MS-SSIM, on high-resolution
videos.
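As a rough illustration of the displaced frame differences that the DCU computes, the sketch below shifts the previous frame over a small window of spatial displacements and subtracts each shifted copy from the current frame. The displacement range, the use of `np.roll` for shifting, and the function name are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def displaced_frame_differences(prev, cur, max_disp=1):
    """Differences between the current frame and spatially shifted
    copies of the previous frame (illustrative stand-in for the
    paper's Displacement Calculation Unit; the displacement set
    and wrap-around shifting are assumptions)."""
    diffs = []
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            # Shift the previous frame by (dy, dx) and subtract.
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            diffs.append(cur - shifted)
    # One difference channel per displacement: (num_displacements, H, W)
    return np.stack(diffs)

prev = np.zeros((4, 4))
cur = np.ones((4, 4))
stack = displaced_frame_differences(prev, cur, max_disp=1)
print(stack.shape)  # (9, 4, 4)
```

Because no search over candidate displacements is performed per block, this representation avoids the expensive motion-estimation step of hybrid codecs; the network downstream learns which displacement channels are informative.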
Related papers
- When Video Coding Meets Multimodal Large Language Models: A Unified Paradigm for Video Coding [112.44822009714461]
Cross-Modality Video Coding (CMVC) is a pioneering approach to explore multimodality representation and video generative models in video coding.
During decoding, previously encoded components and video generation models are leveraged to create multiple encoding-decoding modes.
Experiments indicate that TT2V achieves effective semantic reconstruction, while IT2V exhibits competitive perceptual consistency.
arXiv Detail & Related papers (2024-08-15T11:36:18Z)
- NU-Class Net: A Novel Approach for Video Quality Enhancement [1.7763979745248648]
This paper introduces NU-Class Net, an innovative deep-learning model designed to mitigate compression artifacts stemming from lossy compression codecs.
By employing the NU-Class Net, the video encoder within the video-capturing node can reduce output quality, thereby generating low-bit-rate videos.
Experimental results affirm the efficacy of the proposed model in enhancing the perceptible quality of videos, especially those streamed at low bit rates.
arXiv Detail & Related papers (2024-01-02T11:46:42Z)
- Video Compression with Arbitrary Rescaling Network [8.489428003916622]
We propose a rate-guided arbitrary rescaling network (RARN) for video resizing before encoding.
The lightweight RARN structure can process FHD (1080p) content at real-time speed (91 FPS) and obtain a considerable rate reduction.
arXiv Detail & Related papers (2023-06-07T07:15:18Z)
- Sandwiched Video Compression: Efficiently Extending the Reach of Standard Codecs with Neural Wrappers [11.968545394054816]
We propose a video compression system that wraps neural networks around a standard video codec.
Networks are trained jointly to optimize a rate-distortion loss function.
We observe 30% improvements in rate at the same quality over HEVC.
arXiv Detail & Related papers (2023-03-20T22:03:44Z)
- A Codec Information Assisted Framework for Efficient Compressed Video Super-Resolution [15.690562510147766]
Video Super-Resolution (VSR) using recurrent neural network architecture is a promising solution due to its efficient modeling of long-range temporal dependencies.
We propose a Codec Information Assisted Framework (CIAF) to boost and accelerate recurrent VSR models for compressed videos.
arXiv Detail & Related papers (2022-10-15T08:48:29Z)
- A Coding Framework and Benchmark towards Low-Bitrate Video Understanding [63.05385140193666]
We propose a traditional-neural mixed coding framework that takes advantage of both traditional codecs and neural networks (NNs).
The framework is optimized by ensuring that a transportation-efficient semantic representation of the video is preserved.
We build a low-bitrate video understanding benchmark with three downstream tasks on eight datasets, demonstrating the notable superiority of our approach.
arXiv Detail & Related papers (2022-02-06T16:29:15Z)
- Conditional Entropy Coding for Efficient Video Compression [82.35389813794372]
We propose a very simple and efficient video compression framework that only focuses on modeling the conditional entropy between frames.
We first show that a simple architecture modeling the entropy between the image latent codes is as competitive as other neural video compression works and video codecs.
We then propose a novel internal learning extension on top of this architecture that brings an additional 10% savings without trading off decoding speed.
arXiv Detail & Related papers (2020-08-20T20:01:59Z)
- Learning for Video Compression with Recurrent Auto-Encoder and Recurrent Probability Model [164.7489982837475]
This paper proposes a Recurrent Learned Video Compression (RLVC) approach with the Recurrent Auto-Encoder (RAE) and Recurrent Probability Model (RPM).
The RAE employs recurrent cells in both the encoder and decoder to exploit the temporal correlation among video frames.
Our approach achieves the state-of-the-art learned video compression performance in terms of both PSNR and MS-SSIM.
arXiv Detail & Related papers (2020-06-24T08:46:33Z)
- Variable Rate Video Compression using a Hybrid Recurrent Convolutional Learning Framework [1.9290392443571382]
This paper presents PredEncoder, a hybrid video compression framework based on the concept of predictive auto-encoding.
A variable-rate block encoding scheme has been proposed in the paper that leads to remarkably high quality to bit-rate ratios.
arXiv Detail & Related papers (2020-04-08T20:49:25Z)
- Learning for Video Compression with Hierarchical Quality and Recurrent Enhancement [164.7489982837475]
We propose a Hierarchical Learned Video Compression (HLVC) method with three hierarchical quality layers and a recurrent enhancement network.
In our HLVC approach, the hierarchical quality benefits the coding efficiency, since the high quality information facilitates the compression and enhancement of low quality frames at encoder and decoder sides.
arXiv Detail & Related papers (2020-03-04T09:31:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.