Encoding in the Dark Grand Challenge: An Overview
- URL: http://arxiv.org/abs/2005.03315v1
- Date: Thu, 7 May 2020 08:22:56 GMT
- Title: Encoding in the Dark Grand Challenge: An Overview
- Authors: Nantheera Anantrasirichai, Fan Zhang, Alexandra Malyugina, Paul Hill,
and Angeliki Katsenou
- Abstract summary: We propose a Grand Challenge on encoding low-light video sequences.
VVC already achieves high performance compared to simply denoising the video source prior to encoding.
The quality of the video streams can be further improved by employing a post-processing image enhancement method.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A big part of the video content we consume from video providers consists of
genres featuring low-light aesthetics. Low-light sequences have special
characteristics, such as spatio-temporally varying acquisition noise and light
flickering, that make the encoding process challenging. To deal with this
spatio-temporally incoherent noise, higher bitrates are used to achieve high
objective quality. Additionally, the quality assessment metrics and methods
have not been designed, trained or tested for this type of content. This has
inspired us to trigger research in that area and propose a Grand Challenge on
encoding low-light video sequences. In this paper, we present an overview of
the proposed challenge, and test state-of-the-art methods that will be part of
the benchmark methods at the stage of the participants' deliverable assessment.
From this exploration, our results show that VVC already achieves high
performance compared to simply denoising the video source prior to encoding.
Moreover, the quality of the video streams can be further improved by employing
a post-processing image enhancement method.
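To make the comparison concrete, the sketch below shows the two pipelines the overview contrasts, using off-the-shelf tools as stand-ins: ffmpeg's hqdn3d filter in place of a dedicated video denoiser and libx265 in place of a VVC encoder such as VVenC (whose CLI flags differ). File names and the CRF value are placeholders, not the challenge's actual configuration.

```python
# Two pipelines compared in the challenge overview:
# (a) denoise the source, then encode; (b) encode, then enhance after decoding.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

SRC = "lowlight_source.y4m"  # placeholder input sequence

# (a) Denoise prior to encoding.
run(["ffmpeg", "-y", "-i", SRC, "-vf", "hqdn3d", "-c:v", "libx265",
     "-crf", "28", "denoise_then_encode.mp4"])

# (b) Encode the noisy source directly; enhancement happens after decoding.
run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx265", "-crf", "28",
     "encode_only.mp4"])
run(["ffmpeg", "-y", "-i", "encode_only.mp4", "decoded.y4m"])
# A post-processing enhancement model would be applied to decoded.y4m here.
```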
Related papers
- Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models (arXiv 2024-07-14)
We propose a novel formulation that considers both visual quality and consistency of content.
Consistency of content is ensured by a proposed loss function that maintains the structure of the input, while visual quality is improved by utilizing the denoising process of pretrained diffusion models.
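The summary above does not spell out the loss, so the following is only a generic sketch of a structure-preserving term of the kind it describes: penalizing discrepancies between the spatial gradients of the input and the enhanced output, so edges and layout survive enhancement. Shapes and data are placeholders.

```python
import torch

def gradient(x):
    # Horizontal and vertical finite differences of a (B, C, H, W) tensor.
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return dx, dy

def structure_loss(enhanced, original):
    ex, ey = gradient(enhanced)
    ox, oy = gradient(original)
    return (ex - ox).abs().mean() + (ey - oy).abs().mean()

frame = torch.rand(1, 3, 64, 64)          # placeholder input frame
enhanced = frame.clamp(0.1, 1.0) ** 0.8   # placeholder "enhanced" output
print(structure_loss(enhanced, frame).item())
```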
- VJT: A Video Transformer on Joint Tasks of Deblurring, Low-light Enhancement and Denoising (arXiv 2024-01-26)
Video restoration aims to recover high-quality videos from low-quality observations.
Videos often suffer from several types of degradation at once, such as blur, low light, and noise.
We propose an efficient end-to-end video transformer approach for the joint task of video deblurring, low-light enhancement, and denoising.
- VaQuitA: Enhancing Alignment in LLM-Assisted Video Understanding (arXiv 2023-12-04)
We introduce a cutting-edge framework, VaQuitA, designed to refine the synergy between video and textual information.
At the data level, instead of sampling frames uniformly, we implement a sampling method guided by CLIP-score rankings.
At the feature level, we integrate a trainable Video Perceiver alongside a Visual-Query Transformer.
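A minimal sketch of CLIP-score-ranked frame selection as opposed to uniform sampling, assuming frame and query embeddings have already been computed with a CLIP image/text encoder; random vectors stand in for real embeddings here.

```python
import numpy as np

rng = np.random.default_rng(0)
frame_emb = rng.normal(size=(120, 512))   # one embedding per video frame
query_emb = rng.normal(size=512)          # embedding of the text query

# Cosine similarity between each frame and the query.
frame_emb /= np.linalg.norm(frame_emb, axis=1, keepdims=True)
query_emb /= np.linalg.norm(query_emb)
scores = frame_emb @ query_emb

# Keep the k frames most relevant to the query, restored to temporal order.
k = 16
selected = np.sort(np.argsort(scores)[-k:])
print(selected)
```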
- Lightweight High-Speed Photography Built on Coded Exposure and Implicit Neural Representation of Videos (arXiv 2023-11-22)
The demand for compact cameras capable of recording high-speed scenes with high resolution is steadily increasing.
However, achieving such capabilities often entails high bandwidth requirements, resulting in bulky, heavy systems unsuitable for low-capacity platforms.
We propose a novel approach to address these challenges by combining the classical coded exposure imaging technique with the emerging implicit neural representation for videos.
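A toy sketch of the two ingredients combined in this paper, under simplified assumptions: a binary temporal mask collapses several frames into one coded capture, and a small coordinate MLP stands in for the implicit neural representation that is fitted to explain that capture. The architecture, mask, and sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

T, H, W = 8, 32, 32
video = torch.rand(T, H, W)                     # placeholder high-speed scene
mask = (torch.rand(T) > 0.5).float()            # per-frame exposure code
capture = (video * mask[:, None, None]).sum(0)  # single coded measurement

class VideoINR(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, coords):                  # coords: (N, 3) = (x, y, t)
        return self.net(coords).squeeze(-1)

inr = VideoINR()
t, y, x = torch.meshgrid(
    torch.linspace(0, 1, T), torch.linspace(0, 1, H), torch.linspace(0, 1, W),
    indexing="ij",
)
coords = torch.stack([x, y, t], dim=-1).reshape(-1, 3)

# One step of fitting the INR so its mask-weighted sum matches the capture.
pred = inr(coords).reshape(T, H, W)
loss = ((pred * mask[:, None, None]).sum(0) - capture).pow(2).mean()
loss.backward()
print(loss.item())
```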
- Low-light Image and Video Enhancement via Selective Manipulation of Chromaticity (arXiv 2022-03-09)
We present a simple yet effective approach for low-light image and video enhancement.
The adaptivity of this manipulation allows us to avoid the costly step of decomposing a low-light image into illumination and reflectance.
Our results on standard low-light image datasets show the efficacy of our algorithm and its qualitative and quantitative superiority over several state-of-the-art techniques.
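The summary leaves the adaptive scheme unspecified, so this is only a generic illustration of the underlying idea: brighten luminance while holding per-pixel chromaticity (channel ratios) fixed, with no illumination/reflectance decomposition. The gamma-style lift and toy image are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.2, size=(4, 4, 3))        # placeholder dark RGB image

luminance = img.sum(axis=2, keepdims=True)          # crude intensity proxy
chromaticity = img / np.maximum(luminance, 1e-6)    # r+g+b ratios, sum to 1

boosted = np.power(luminance / 3.0, 0.4) * 3.0      # gamma-style lift
enhanced = np.clip(chromaticity * boosted, 0.0, 1.0)  # same colors, brighter
print(img.mean(), enhanced.mean())
```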
- Low-Fidelity End-to-End Video Encoder Pre-training for Temporal Action Localization (arXiv 2021-03-28)
Temporal action localization (TAL) is a fundamental yet challenging task in video understanding.
Existing TAL methods rely on pre-training a video encoder through action classification supervision.
We introduce a novel low-fidelity end-to-end (LoFi) video encoder pre-training method.
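A rough sketch of the pre-training setup described above, assuming that "low fidelity" means reduced spatial and temporal input resolution, which is what lets the video encoder and a task head be optimized jointly within memory limits; the encoder, head, and shapes are toy stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

clip = torch.rand(2, 3, 32, 112, 112)   # (batch, C, T, H, W) full-fidelity clip

# Reduce temporal and spatial fidelity before the clip enters the encoder.
lofi = F.interpolate(clip, size=(8, 56, 56), mode="trilinear",
                     align_corners=False)

encoder = nn.Sequential(                 # toy 3D-conv video encoder
    nn.Conv3d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
)
head = nn.Linear(16, 10)                 # toy task head (e.g., action classes)

logits = head(encoder(lofi))
loss = F.cross_entropy(logits, torch.tensor([1, 3]))
loss.backward()                          # gradients reach the encoder itself
print(loss.item())
```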
- Coherent Loss: A Generic Framework for Stable Video Segmentation (arXiv 2020-10-25)
We investigate how a jittering artifact degrades the visual quality of video segmentation results.
We propose a Coherent Loss with a generic framework to enhance the performance of a neural network against jittering artifacts.
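As a stand-in for the paper's Coherent Loss (whose exact form the summary does not give), the sketch below penalizes frame-to-frame changes in segmentation probabilities; practical variants usually align frames with optical flow before comparing, which is omitted here.

```python
import torch

def coherence_loss(probs):
    # probs: (T, C, H, W) per-frame class probabilities for one video.
    # Penalize flicker between consecutive frames.
    return (probs[1:] - probs[:-1]).abs().mean()

probs = torch.softmax(torch.rand(5, 2, 16, 16), dim=1)  # placeholder outputs
print(coherence_loss(probs).item())
```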
- Blind Video Temporal Consistency via Deep Video Prior (arXiv 2020-10-22)
We present a novel and general approach for blind video temporal consistency.
Our method is trained directly on a single pair of original and processed videos.
We show that temporal consistency can be achieved by training a convolutional network on a video with the Deep Video Prior.
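A condensed sketch of the Deep Video Prior idea as summarized above: a small convolutional network is fitted on one video only, mapping each original frame toward its (flickery) processed counterpart, and because the network tends to learn the consistent mapping first, its outputs are temporally stable. Network size, step count, and data are placeholders; in practice, stopping early matters.

```python
import torch
import torch.nn as nn

original = torch.rand(10, 3, 32, 32)     # placeholder original frames
processed = torch.rand(10, 3, 32, 32)    # placeholder processed frames

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(100):                  # stop before flicker is memorized
    opt.zero_grad()
    loss = (net(original) - processed).abs().mean()
    loss.backward()
    opt.step()

stable = net(original)                   # temporally consistent output frames
```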
This list was automatically generated from the titles and abstracts of the papers indexed on this site.