Blurry Video Compression: A Trade-off between Visual Enhancement and
Data Compression
- URL: http://arxiv.org/abs/2311.04430v1
- Date: Wed, 8 Nov 2023 02:17:54 GMT
- Title: Blurry Video Compression: A Trade-off between Visual Enhancement and
Data Compression
- Authors: Dawit Mureja Argaw, Junsik Kim, In So Kweon
- Abstract summary: Existing video compression (VC) methods primarily aim to reduce the spatial and temporal redundancies between consecutive frames in a video.
Previous works have achieved remarkable results on videos acquired under specific settings such as instant (known) exposure time and shutter speed.
In this work, we tackle the VC problem in a general scenario where a given video can be blurry due to predefined camera settings or dynamics in the scene.
- Score: 65.8148169700705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing video compression (VC) methods primarily aim to reduce the spatial
and temporal redundancies between consecutive frames in a video while
preserving its quality. In this regard, previous works have achieved remarkable
results on videos acquired under specific settings such as instant (known)
exposure time and shutter speed, which often result in sharp videos. However,
when these methods are evaluated on videos captured under different temporal
priors, which lead to degradations like motion blur and low frame rate, they
fail to maintain the quality of the contents. In this work, we tackle the VC
problem in a general scenario where a given video can be blurry due to
predefined camera settings or dynamics in the scene. By exploiting the natural
trade-off between visual enhancement and data compression, we formulate VC as a
min-max optimization problem and propose an effective framework and training
strategy to tackle the problem. Extensive experimental results on several
benchmark datasets confirm the effectiveness of our method compared to several
state-of-the-art VC approaches.
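One illustrative way to write the stated trade-off as a min-max problem (the symbols below are hypothetical and not taken from the paper): an enhancement network competes with the codec's rate term, so that visual quality is recovered while the bit budget is constrained.

```latex
% Illustrative only: notation is assumed, not the paper's.
% g_theta: enhancement (deblurring) network, c_phi: codec,
% D: distortion w.r.t. the sharp reference, R: bit-rate, lambda: trade-off weight.
\min_{\theta} \; \max_{\phi} \;
  D\bigl(x_{\text{sharp}},\, g_{\theta}\bigl(c_{\phi}(x_{\text{blur}})\bigr)\bigr)
  \;-\; \lambda\, R\bigl(c_{\phi}(x_{\text{blur}})\bigr)
```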
Related papers
- Uncertainty-Aware Deep Video Compression with Ensembles [24.245365441718654]
We propose an uncertainty-aware video compression model that can effectively capture predictive uncertainty with deep ensembles.
Our model effectively saves more than 20% of bits on 1080p sequences.
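The deep-ensemble idea above can be sketched as follows; the members here are stand-in toy predictors (in the paper's setting each would be a trained neural codec), so all names are illustrative.

```python
import numpy as np

# Minimal sketch of deep-ensemble predictive uncertainty: average the
# member predictions, and use their spread as an uncertainty estimate
# (higher variance -> less reliable prediction for that pixel).
rng = np.random.default_rng(0)

def ensemble_predict(models, x):
    """Return (mean prediction, per-pixel variance) over ensemble members."""
    preds = np.stack([m(x) for m in models])      # (K, H, W) member outputs
    return preds.mean(axis=0), preds.var(axis=0)  # mean, uncertainty

# Toy ensemble: each member perturbs the identity mapping differently
# (the default argument freezes a distinct weight per member).
models = [lambda x, w=rng.normal(1.0, 0.05): w * x for _ in range(5)]
frame = np.ones((4, 4))
mean, var = ensemble_predict(models, frame)
```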
arXiv Detail & Related papers (2024-03-28T05:44:48Z)
- Perceptual Quality Improvement in Videoconferencing using Keyframes-based GAN [28.773037051085318]
We propose a novel GAN-based method for compression artifacts reduction in videoconferencing.
First, we extract multi-scale features from the compressed and reference frames.
Then, our architecture combines these features in a progressive manner according to facial landmarks.
arXiv Detail & Related papers (2023-11-07T16:38:23Z)
- High Visual-Fidelity Learned Video Compression [6.609832462227998]
We propose a novel High Visual-Fidelity Learned Video Compression framework (HVFVC).
Specifically, we design a novel confidence-based feature reconstruction method to address the issue of poor reconstruction in newly-emerged regions.
Extensive experiments have shown that the proposed HVFVC achieves excellent perceptual quality, outperforming the latest VVC standard with only 50% of the required bitrate.
arXiv Detail & Related papers (2023-10-07T03:27:45Z)
- Predictive Coding For Animation-Based Video Compression [13.161311799049978]
We propose a predictive coding scheme which uses image animation as a predictor, and codes the residual with respect to the actual target frame.
Our experiments indicate a significant gain, in excess of 70% compared to the HEVC video standard and over 30% compared to VVC.
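The residual-coding step described above can be sketched as follows; the prediction here is a placeholder (the paper's predictor is an image-animation model), and all names are illustrative.

```python
import numpy as np

# Hedged sketch of predictive coding: instead of transmitting the target
# frame, the encoder codes only the residual between the target and a
# prediction; the decoder adds the residual back onto its own prediction.
def encode(target, prediction):
    return target - prediction   # residual: typically cheaper to code than the frame

def decode(prediction, residual):
    return prediction + residual  # exact reconstruction (before any quantization)

target = np.arange(16.0).reshape(4, 4)
# Stand-in prediction: the target plus small noise, mimicking an imperfect predictor.
prediction = target + np.random.default_rng(1).normal(0, 0.1, target.shape)
residual = encode(target, prediction)
recon = decode(prediction, residual)
```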
arXiv Detail & Related papers (2023-07-09T14:40:54Z)
- Exploring Long- and Short-Range Temporal Information for Learned Video Compression [54.91301930491466]
We focus on exploiting the unique characteristics of video content and exploring temporal information to enhance compression performance.
For long-range temporal information exploitation, we propose a temporal prior that updates continuously within the group of pictures (GOP) during inference.
In that case, the temporal prior contains valuable temporal information from all decoded images within the current GOP.
In detail, we design a hierarchical structure to achieve multi-scale compensation.
arXiv Detail & Related papers (2022-08-07T15:57:18Z)
- Leveraging Bitstream Metadata for Fast, Accurate, Generalized Compressed Video Quality Enhancement [74.1052624663082]
We develop a deep learning architecture capable of restoring detail to compressed videos.
We show that this improves restoration accuracy compared to prior compression correction methods.
We condition our model on quantization data which is readily available in the bitstream.
arXiv Detail & Related papers (2022-01-31T18:56:04Z)
- Video Frame Interpolation without Temporal Priors [91.04877640089053]
Video frame interpolation aims to synthesize non-existent intermediate frames in a video sequence.
The temporal priors of videos, i.e. frames per second (FPS) and frame exposure time, may vary across different camera sensors.
We devise a novel optical flow refinement strategy for better synthesizing results.
arXiv Detail & Related papers (2021-12-02T12:13:56Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Blind Video Temporal Consistency via Deep Video Prior [61.062900556483164]
We present a novel and general approach for blind video temporal consistency.
Our method is only trained on a pair of original and processed videos directly.
We show that temporal consistency can be achieved by training a convolutional network on a video with the Deep Video Prior.
arXiv Detail & Related papers (2020-10-22T16:19:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.