Content-Adaptive Motion Rate Adaption for Learned Video Compression
- URL: http://arxiv.org/abs/2302.06293v1
- Date: Mon, 13 Feb 2023 11:51:23 GMT
- Title: Content-Adaptive Motion Rate Adaption for Learned Video Compression
- Authors: Chih-Hsuan Lin, Yi-Hsin Chen, Wen-Hsiao Peng
- Abstract summary: This paper introduces an online motion rate adaptation scheme for learned video compression.
It aims to achieve content-adaptive coding on individual test sequences to mitigate the domain gap between training and test data.
It features a patch-level bit allocation map, termed the $\alpha$-map, to trade off between the bit rates for motion and inter-frame coding.
- Score: 11.574465203875342
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces an online motion rate adaptation scheme for learned
video compression, with the aim of achieving content-adaptive coding on
individual test sequences to mitigate the domain gap between training and test
data. It features a patch-level bit allocation map, termed the $\alpha$-map, to
trade off between the bit rates for motion and inter-frame coding in a
spatially-adaptive manner. We optimize the $\alpha$-map through an online
back-propagation scheme at inference time. Moreover, we incorporate a
look-ahead mechanism to consider its impact on future frames. Extensive
experimental results confirm that the proposed scheme, when integrated into a
conditional learned video codec, is able to adapt motion bit rate effectively,
showing much improved rate-distortion performance particularly on test
sequences with complicated motion characteristics.
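The abstract describes optimizing a patch-level $\alpha$-map by back-propagation at inference time. A minimal sketch of that idea is below, using a toy differentiable rate-distortion proxy in place of the actual learned codec; all function names, cost models, and constants are illustrative assumptions, and the paper's look-ahead over future frames is omitted.

```python
# Hypothetical sketch of per-patch alpha-map optimization at inference time.
# A toy rate-distortion proxy stands in for the real learned codec; the
# gradient is taken by central finite differences instead of autograd.

def rd_cost(alpha, motion_complexity, lam=0.1):
    # Toy proxy: spending more motion bits (higher alpha) lowers the
    # inter-frame residual distortion, especially for complex-motion patches.
    motion_bits = alpha * 8.0                       # motion rate grows with alpha
    inter_bits = (1.0 - alpha) * 4.0                # inter-frame coding rate
    distortion = motion_complexity / (0.1 + alpha)  # better motion -> less distortion
    return motion_bits + inter_bits + lam * distortion

def optimize_alpha_map(complexities, steps=200, lr=0.01, eps=1e-4):
    """Gradient descent on each patch's alpha, clamped to [0, 1]."""
    alphas = [0.5] * len(complexities)
    for _ in range(steps):
        for i, c in enumerate(complexities):
            grad = (rd_cost(alphas[i] + eps, c)
                    - rd_cost(alphas[i] - eps, c)) / (2 * eps)
            alphas[i] = min(1.0, max(0.0, alphas[i] - lr * grad))
    return alphas

# Patches with more complicated motion should receive larger alpha values.
alpha_map = optimize_alpha_map([0.2, 2.0, 8.0])
print([round(a, 2) for a in alpha_map])
```

Under this proxy the optimized map allocates more motion bits to high-complexity patches and nearly none to static ones, mirroring the spatially-adaptive trade-off the abstract describes.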
Related papers
- Differentiable Resolution Compression and Alignment for Efficient Video Classification and Retrieval [16.497758750494537]
We propose an efficient video representation network with Differentiable Resolution Compression and Alignment mechanism.
We leverage a Differentiable Context-aware Compression Module to encode the saliency and non-saliency frame features.
We introduce a new Resolution-Align Transformer Layer to capture global temporal correlations among frame features with different resolutions.
arXiv Detail & Related papers (2023-09-15T05:31:53Z)
- Multi-Scale Deformable Alignment and Content-Adaptive Inference for Flexible-Rate Bi-Directional Video Compression [8.80688035831646]
This paper proposes an adaptive motion-compensation model for end-to-end rate-distortion optimized hierarchical bi-directional video compression.
We employ a gain unit, which enables a single model to operate at multiple rate-distortion operating points.
Experimental results demonstrate state-of-the-art rate-distortion performance exceeding those of all prior art in learned video coding.
arXiv Detail & Related papers (2023-06-28T20:32:16Z)
- Boost Video Frame Interpolation via Motion Adaptation [73.42573856943923]
Video frame interpolation (VFI) is a challenging task that aims to generate intermediate frames between two consecutive frames in a video.
Existing learning-based VFI methods have achieved great success, but they still suffer from limited generalization ability.
We propose a novel optimization-based VFI method that can adapt to unseen motions at test time.
arXiv Detail & Related papers (2023-06-24T10:44:02Z)
- You Can Ground Earlier than See: An Effective and Efficient Pipeline for Temporal Sentence Grounding in Compressed Videos [56.676761067861236]
Given an untrimmed video, temporal sentence grounding aims to locate a target moment semantically according to a sentence query.
Previous works have achieved decent success, but they focus only on high-level visual features extracted from decoded frames.
We propose a new setting, compressed-domain TSG, which directly utilizes compressed videos rather than fully-decompressed frames as the visual input.
arXiv Detail & Related papers (2023-03-14T12:53:27Z)
- Learned Video Compression via Heterogeneous Deformable Compensation Network [78.72508633457392]
We propose a learned video compression framework via heterogeneous deformable compensation strategy (HDCVC) to tackle the problems of unstable compression performance.
More specifically, the proposed algorithm extracts features from two adjacent frames to estimate neighborhood heterogeneous deformable (HetDeform) kernel offsets.
Experimental results indicate that HDCVC outperforms recent state-of-the-art learned video compression approaches.
arXiv Detail & Related papers (2022-07-11T02:31:31Z)
- Flexible-Rate Learned Hierarchical Bi-Directional Video Compression With Motion Refinement and Frame-Level Bit Allocation [8.80688035831646]
We combine motion estimation and prediction modules and compress refined residual motion vectors for improved rate-distortion performance.
We exploit the gain unit to control bit allocation among intra-coded vs. bi-directionally coded frames.
arXiv Detail & Related papers (2022-06-27T20:18:52Z)
- End-to-End Rate-Distortion Optimized Learned Hierarchical Bi-Directional Video Compression [10.885590093103344]
Learned video compression (VC) allows end-to-end rate-distortion (R-D) optimized training of the nonlinear transform, motion, and entropy models simultaneously.
This paper proposes a learned hierarchical bi-directional video codec (LHBDC) that combines the benefits of hierarchical motion-sampling and end-to-end optimization.
arXiv Detail & Related papers (2021-12-17T14:30:22Z)
- Self-Supervised Learning of Perceptually Optimized Block Motion Estimates for Video Compression [50.48504867843605]
We propose a search-free block motion estimation framework using a multi-stage convolutional neural network.
We deploy the multi-scale structural similarity (MS-SSIM) loss function to optimize the perceptual quality of the motion compensated predicted frames.
arXiv Detail & Related papers (2021-10-05T03:38:43Z)
- End-to-end Neural Video Coding Using a Compound Spatiotemporal Representation [33.54844063875569]
We propose a hybrid motion compensation (HMC) method that adaptively combines the predictions generated by two approaches.
Specifically, we generate a compound spatiotemporal representation (CSTR) through a recurrent information aggregation (RIA) module.
We further design a one-to-many decoder pipeline to generate multiple predictions from the CSTR, including vector-based resampling, adaptive kernel-based resampling, compensation mode selection maps and texture enhancements.
arXiv Detail & Related papers (2021-08-05T19:43:32Z)
- EAN: Event Adaptive Network for Enhanced Action Recognition [66.81780707955852]
We propose a unified action recognition framework to investigate the dynamic nature of video content.
First, when extracting local cues, we generate dynamic-scale spatio-temporal kernels to adaptively fit the diverse events.
Second, to accurately aggregate these cues into a global video representation, we propose to mine the interactions only among a few selected foreground objects by a Transformer.
arXiv Detail & Related papers (2021-07-22T15:57:18Z)
- Content Adaptive and Error Propagation Aware Deep Video Compression [110.31693187153084]
We propose a content adaptive and error propagation aware video compression system.
Our method employs a joint training strategy by considering the compression performance of multiple consecutive frames instead of a single frame.
Instead of using the hand-crafted coding modes in the traditional compression systems, we design an online encoder updating scheme in our system.
arXiv Detail & Related papers (2020-03-25T09:04:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.