End-to-End Learning for Video Frame Compression with Self-Attention
- URL: http://arxiv.org/abs/2004.09226v1
- Date: Mon, 20 Apr 2020 12:11:08 GMT
- Title: End-to-End Learning for Video Frame Compression with Self-Attention
- Authors: Nannan Zou, Honglei Zhang, Francesco Cricri, Hamed R. Tavakoli, Jani
Lainema, Emre Aksu, Miska Hannuksela, Esa Rahtu
- Abstract summary: We propose an end-to-end learned system for compressing video frames.
Our system learns deep embeddings of frames and encodes their difference in latent space.
In our experiments, we show that the proposed system achieves high compression rates and high objective visual quality.
- Score: 25.23586503813838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the core components of conventional (i.e., non-learned) video codecs
consists of predicting a frame from a previously-decoded frame, by leveraging
temporal correlations. In this paper, we propose an end-to-end learned system
for compressing video frames. Instead of relying on pixel-space motion (as with
optical flow), our system learns deep embeddings of frames and encodes their
difference in latent space. At decoder-side, an attention mechanism is designed
to attend to the latent space of frames to decide how different parts of the
previous and current frame are combined to form the final predicted current
frame. Spatially-varying channel allocation is achieved by using importance
masks acting on the feature-channels. The model is trained to reduce the
bitrate by minimizing a loss on importance maps and a loss on the probability
output by a context model for arithmetic coding. In our experiments, we show
that the proposed system achieves high compression rates and high objective
visual quality as measured by MS-SSIM and PSNR. Furthermore, we provide
ablation studies where we highlight the contribution of different components.
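To make the pipeline concrete, here is a minimal NumPy sketch of the three ideas in the abstract: encoding the frame *difference* in latent space rather than pixel space, a decoder-side attention that mixes the previous and current latents, and an importance mask that allocates bits to a subset of feature channels. This is an illustrative toy, not the authors' architecture: the linear `embed`, the two-source softmax attention, and the top-k mask are hypothetical stand-ins for the learned networks described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(frame, W):
    """Toy frame encoder: a linear projection standing in for a deep embedding."""
    return frame @ W

# Hypothetical shapes: frames of 16 "pixels", embedded into 8 latent channels.
W_enc = rng.standard_normal((16, 8))
prev_frame = rng.standard_normal(16)
curr_frame = rng.standard_normal(16)

z_prev = embed(prev_frame, W_enc)
z_curr = embed(curr_frame, W_enc)

# Encoder side: only the latent-space difference is quantized and entropy-coded.
residual = z_curr - z_prev

# Decoder side: recover the current latent from the previous latent + residual.
z_rec = z_prev + residual

# Attention over {previous, reconstructed} latents yields per-channel mixing
# weights that decide how the two sources combine into the predicted frame latent.
sources = np.stack([z_prev, z_rec])                   # shape (2, 8)
attn = np.exp(sources) / np.exp(sources).sum(axis=0)  # softmax over the 2 sources
z_pred = (attn * sources).sum(axis=0)

# Importance mask: spatially/channel-varying bit allocation, here approximated
# by keeping only the 4 highest-magnitude residual channels out of 8.
importance = np.abs(residual)
mask = importance >= np.sort(importance)[-4]
coded_residual = residual * mask
```

In the actual system the attention weights and importance maps are produced by learned networks and trained jointly with a rate loss (via a context model for arithmetic coding); the top-k rule above only mimics the effect of masking low-importance channels.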
Related papers
- IBVC: Interpolation-driven B-frame Video Compression [68.18440522300536]
B-frame video compression aims to adopt bi-directional motion estimation and motion compensation (MEMC) coding for middle frame reconstruction.
Previous learned approaches often directly extend neural P-frame codecs to B-frame coding, relying on bi-directional optical-flow estimation.
We propose a simple yet effective structure called Interpolation-B-frame Video Compression (IBVC) to address these issues.
arXiv Detail & Related papers (2023-09-25T02:45:51Z)
- Predictive Coding For Animation-Based Video Compression [13.161311799049978]
We propose a predictive coding scheme which uses image animation as a predictor, and codes the residual with respect to the actual target frame.
Our experiments indicate a significant gain, in excess of 70% compared to the HEVC video standard and over 30% compared to VVC.
arXiv Detail & Related papers (2023-07-09T14:40:54Z)
- VNVC: A Versatile Neural Video Coding Framework for Efficient Human-Machine Vision [59.632286735304156]
It is more efficient to enhance/analyze the coded representations directly without decoding them into pixels.
We propose a versatile neural video coding (VNVC) framework, which targets learning compact representations to support both reconstruction and direct enhancement/analysis.
arXiv Detail & Related papers (2023-06-19T03:04:57Z)
- You Can Ground Earlier than See: An Effective and Efficient Pipeline for Temporal Sentence Grounding in Compressed Videos [56.676761067861236]
Given an untrimmed video, temporal sentence grounding aims to locate a target moment semantically according to a sentence query.
Prior works have achieved decent success, but they focus only on high-level visual features extracted from decoded frames.
We propose a new setting, compressed-domain TSG, which directly utilizes compressed videos rather than fully-decompressed frames as the visual input.
arXiv Detail & Related papers (2023-03-14T12:53:27Z)
- FFNeRV: Flow-Guided Frame-Wise Neural Representations for Videos [5.958701846880935]
We propose FFNeRV, a novel method for incorporating flow information into frame-wise representations to exploit the temporal redundancy across the frames in videos.
With model compression techniques, FFNeRV outperforms widely-used standard video codecs (H.264 and HEVC) and performs on par with state-of-the-art video compression algorithms.
arXiv Detail & Related papers (2022-12-23T12:51:42Z)
- Inter-Frame Compression for Dynamic Point Cloud Geometry Coding [9.15965133212928]
We propose a lossy compression scheme that predicts the latent representation of the current frame using the previous frame.
Our method achieves more than 91% BD-Rate (Bjontegaard Delta Rate) reduction and more than 62% BD-Rate reduction against the V-PCC intra-frame encoding mode.
arXiv Detail & Related papers (2022-07-25T22:17:19Z)
- Learned Video Compression via Heterogeneous Deformable Compensation Network [78.72508633457392]
We propose a learned video compression framework via heterogeneous deformable compensation strategy (HDCVC) to tackle the problems of unstable compression performance.
More specifically, the proposed algorithm extracts features from two adjacent frames to estimate content-neighborhood heterogeneous deformable (HetDeform) kernel offsets.
Experimental results indicate that HDCVC achieves superior performance compared to recent state-of-the-art learned video compression approaches.
arXiv Detail & Related papers (2022-07-11T02:31:31Z)
- Conditional Entropy Coding for Efficient Video Compression [82.35389813794372]
We propose a very simple and efficient video compression framework that only focuses on modeling the conditional entropy between frames.
We first show that a simple architecture modeling the entropy between the image latent codes is as competitive as other neural video compression works and video codecs.
We then propose a novel internal learning extension on top of this architecture that brings an additional 10% savings without trading off decoding speed.
arXiv Detail & Related papers (2020-08-20T20:01:59Z)
- Content Adaptive and Error Propagation Aware Deep Video Compression [110.31693187153084]
We propose a content adaptive and error propagation aware video compression system.
Our method employs a joint training strategy by considering the compression performance of multiple consecutive frames instead of a single frame.
Instead of using the hand-crafted coding modes in the traditional compression systems, we design an online encoder updating scheme in our system.
arXiv Detail & Related papers (2020-03-25T09:04:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.