Learning to Compress Videos without Computing Motion
- URL: http://arxiv.org/abs/2009.14110v3
- Date: Sun, 27 Mar 2022 03:52:39 GMT
- Title: Learning to Compress Videos without Computing Motion
- Authors: Meixu Chen, Todd Goodall, Anjul Patney, and Alan C. Bovik
- Abstract summary: We propose a new deep learning video compression architecture that does not require motion estimation.
Our framework exploits the regularities inherent to video motion, which we capture by using displaced frame differences as video representations.
Our experiments show that our compression model, which we call the MOtionless VIdeo Codec (MOVI-Codec), learns how to efficiently compress videos without computing motion.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As higher-resolution content and displays continue to develop, the
sheer volume of video data poses significant challenges to acquiring,
transmitting, compressing, and displaying high-quality video content. In this
paper, we propose a new deep learning video compression architecture that does
not require motion estimation, which is the most expensive element of modern
hybrid video compression codecs like H.264 and HEVC. Our framework exploits the
regularities inherent to video motion, which we capture by using displaced
frame differences as video representations to train the neural network. In
addition, we propose a new space-time reconstruction network based on both an
LSTM model and a UNet model, which we call LSTM-UNet. The new video compression
framework has three components: a Displacement Calculation Unit (DCU), a
Displacement Compression Network (DCN), and a Frame Reconstruction Network
(FRN). The DCU removes the need for motion estimation found in hybrid codecs
and is less expensive. In the DCN, an RNN-based network is utilized to compress
displaced frame differences as well as retain temporal information between
frames. The LSTM-UNet is used in the FRN to learn space-time differential
representations of videos. Our experimental results show that our compression
model, which we call the MOtionless VIdeo Codec (MOVI-Codec), learns how to
efficiently compress videos without computing motion. Our experiments show that
MOVI-Codec outperforms the Low-Delay P veryfast setting of the video coding
standard H.264 and exceeds the performance of the modern global standard HEVC
codec under the same setting, as measured by MS-SSIM, especially on
higher-resolution videos. In addition, our network outperforms the latest H.266 (VVC)
codec at higher bitrates, when assessed using MS-SSIM, on high-resolution
videos.
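
The displaced frame differences produced by the Displacement Calculation Unit can be illustrated with a short sketch: the current frame is differenced against spatially shifted copies of the previous frame, one per displacement. The NumPy code below is a minimal illustration only; the displacement set, border handling, and any normalization used by MOVI-Codec are assumptions, not the paper's actual configuration.

```python
import numpy as np

def displaced_frame_differences(prev_frame, cur_frame, displacements):
    """Difference the current frame against spatially displaced copies
    of the previous frame.

    prev_frame, cur_frame: (H, W, C) float arrays.
    displacements: iterable of (dy, dx) integer offsets (assumed set).
    Returns an array of shape (len(displacements), H, W, C).
    """
    diffs = []
    for dy, dx in displacements:
        # Shift the previous frame by (dy, dx); np.roll wraps around,
        # which stands in for whatever border handling the codec uses.
        shifted = np.roll(prev_frame, shift=(dy, dx), axis=(0, 1))
        diffs.append(cur_frame - shifted)
    return np.stack(diffs, axis=0)

# Example: a 3x3 neighborhood of displacements around zero (an assumption).
displacements = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
prev_frame = np.random.rand(64, 64, 3).astype(np.float32)
cur_frame = np.random.rand(64, 64, 3).astype(np.float32)
stack = displaced_frame_differences(prev_frame, cur_frame, displacements)
print(stack.shape)  # (9, 64, 64, 3)
```

Because no motion vectors are searched for, this step is a fixed, cheap set of shifts and subtractions; the learned networks downstream are what exploit the motion regularities captured in the resulting stack.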
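The Frame Reconstruction Network is described as an LSTM-UNet, i.e., a UNet-style encoder-decoder that carries temporal state with a convolutional LSTM. The PyTorch sketch below illustrates only that general idea; the class name LSTMUNetSketch, the channel widths, the single skip connection, and the placement of the ConvLSTM at the bottleneck are assumptions and do not reproduce the actual MOVI-Codec architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A standard convolutional LSTM cell: all four gates are computed by one convolution."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g
        h = o * c.tanh()
        return h, (h, c)

class LSTMUNetSketch(nn.Module):
    """Toy UNet whose bottleneck carries temporal state with a ConvLSTM cell.
    Depth and channel counts are illustrative assumptions, not the paper's."""
    def __init__(self, in_ch=3, base=16, out_ch=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(base, base * 2, 3, stride=2, padding=1)
        self.lstm = ConvLSTMCell(base * 2, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1)
        self.dec1 = nn.Conv2d(base * 2, out_ch, 3, padding=1)

    def forward(self, frames):
        # frames: (T, N, C, H, W) with even H and W; the recurrent state persists across time.
        _, n, _, h, w = frames.shape
        hid = self.lstm.hid_ch
        state = (frames.new_zeros(n, hid, h // 2, w // 2),
                 frames.new_zeros(n, hid, h // 2, w // 2))
        outputs = []
        for x in frames:
            s1 = self.enc1(x)               # encoder features kept as a skip connection
            b = torch.relu(self.down(s1))   # downsample to the bottleneck
            b, state = self.lstm(b, state)  # temporal recurrence across frames
            u = torch.relu(self.up(b))      # upsample back to input resolution
            outputs.append(self.dec1(torch.cat([u, s1], dim=1)))
        return torch.stack(outputs)

# Example: a 5-frame, 64x64 RGB clip with batch size 2 (sizes are illustrative).
clip = torch.randn(5, 2, 3, 64, 64)
recon = LSTMUNetSketch()(clip)
print(recon.shape)  # torch.Size([5, 2, 3, 64, 64])
```

The recurrence over frames is what lets such a network learn space-time differential representations: spatial detail flows through the UNet skip connection while temporal context is accumulated in the ConvLSTM state.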