VJT: A Video Transformer on Joint Tasks of Deblurring, Low-light
Enhancement and Denoising
- URL: http://arxiv.org/abs/2401.14754v1
- Date: Fri, 26 Jan 2024 10:27:56 GMT
- Title: VJT: A Video Transformer on Joint Tasks of Deblurring, Low-light
Enhancement and Denoising
- Authors: Yuxiang Hui, Yang Liu, Yaofang Liu, Fan Jia, Jinshan Pan, Raymond
Chan, Tieyong Zeng
- Abstract summary: The video restoration task aims to recover high-quality videos from low-quality observations.
Videos often suffer from multiple types of degradation, such as blur, low light, and noise.
We propose an efficient end-to-end video transformer approach for the joint task of video deblurring, low-light enhancement, and denoising.
- Score: 45.349350685858276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The video restoration task aims to recover high-quality videos from
low-quality observations. It comprises several important sub-tasks, such as
video denoising, deblurring, and low-light enhancement, since videos often
suffer from multiple types of degradation, such as blur, low light, and noise.
Even worse, these degradations can occur simultaneously when videos are
captured in extreme environments, which poses significant challenges for
removing all of these artifacts at once. In this paper, to the best of our
knowledge, we are the first to propose an efficient end-to-end video
transformer approach for the joint task of video deblurring, low-light
enhancement, and denoising. This work builds a novel multi-tier transformer in
which each tier uses a different level of degraded video as its target to
learn video features effectively. Moreover, we carefully design a new
tier-to-tier feature fusion scheme to learn video features incrementally and
accelerate training with a suitable adaptive weighting scheme. We also provide
a new Multiscene-Lowlight-Blur-Noise (MLBN) dataset, generated according to
the characteristics of the joint task from the RealBlur dataset and YouTube
videos to simulate realistic scenes as faithfully as possible. We have
conducted extensive experiments against many previous state-of-the-art methods
to clearly demonstrate the effectiveness of our approach.
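The abstract describes the multi-tier architecture only at a high level. As a rough illustration, here is a minimal PyTorch sketch of a multi-tier video transformer with tier-to-tier feature fusion and adaptively weighted per-tier losses; the tier count, module shapes, fusion rule, and softmax loss weighting are all assumptions made for this sketch, not the authors' implementation.

```python
# Minimal sketch of a multi-tier video transformer with tier-to-tier feature
# fusion and adaptively weighted per-tier losses, in the spirit of the VJT
# abstract. Tier count, module shapes, the fusion rule, and the softmax
# weighting are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TierBlock(nn.Module):
    """One tier: a stand-in transformer encoder over flattened video tokens."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.embed = nn.Conv3d(3, dim, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.fuse = nn.Conv3d(2 * dim, dim, kernel_size=1)  # tier-to-tier fusion
        self.to_rgb = nn.Conv3d(dim, 3, kernel_size=3, padding=1)

    def forward(self, video, prev_feat=None):
        # video: (B, 3, T, H, W). Full spatio-temporal attention is used here
        # only for brevity; a real model would need windowed/shifted attention.
        feat = self.embed(video)
        if prev_feat is not None:  # fuse features handed up by the prior tier
            feat = self.fuse(torch.cat([feat, prev_feat], dim=1))
        b, c, t, h, w = feat.shape
        tokens = self.encoder(feat.flatten(2).transpose(1, 2))  # (B, THW, C)
        feat = tokens.transpose(1, 2).view(b, c, t, h, w)
        return self.to_rgb(feat), feat


class MultiTierNet(nn.Module):
    """Stack of tiers; tier i is supervised with a less-degraded target."""

    def __init__(self, num_tiers: int = 3):
        super().__init__()
        self.tiers = nn.ModuleList(TierBlock() for _ in range(num_tiers))
        # Learnable log-weights stand in for an adaptive loss-weighting scheme.
        self.log_w = nn.Parameter(torch.zeros(num_tiers))

    def forward(self, degraded):
        outputs, feat, x = [], None, degraded
        for tier in self.tiers:
            x, feat = tier(x, feat)  # each tier refines the previous output
            outputs.append(x)
        return outputs

    def loss(self, outputs, targets):
        # targets[i] is a progressively less-degraded version of the clip;
        # the last target is the clean ground truth.
        w = torch.softmax(self.log_w, dim=0)
        return sum(w[i] * F.l1_loss(o, t)
                   for i, (o, t) in enumerate(zip(outputs, targets)))
```

A training step would call `outputs = model(degraded_clip)` and then `model.loss(outputs, intermediate_targets)`, mirroring the abstract's idea that each tier is supervised with a different degradation level.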
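The abstract likewise gives no details on how MLBN's joint degradations are synthesized. The snippet below is a hedged sketch of one plausible pipeline (temporal-window averaging for blur, a gain and gamma curve for low light, and additive Gaussian noise); the function name and all parameter values are chosen purely for illustration, not taken from the dataset's actual generation procedure.

```python
# Hedged sketch of synthesizing a joint blur + low-light + noise degradation
# from a clean clip, loosely matching how MLBN is described. The window size,
# gain, gamma, and noise level below are assumptions for illustration only.
import numpy as np


def degrade_clip(frames: np.ndarray, blur_window: int = 5, gain: float = 0.25,
                 gamma: float = 2.2, noise_sigma: float = 0.02) -> np.ndarray:
    """frames: (T, H, W, 3) floats in [0, 1]; returns the degraded clip."""
    t = frames.shape[0]
    out = np.empty_like(frames)
    for i in range(t):
        # Motion blur: average a temporal window of neighboring frames.
        lo, hi = max(0, i - blur_window // 2), min(t, i + blur_window // 2 + 1)
        blurred = frames[lo:hi].mean(axis=0)
        # Low light: reduce exposure with a gain followed by a gamma curve.
        dark = (gain * blurred) ** gamma
        # Sensor noise: additive Gaussian, clipped back to [0, 1].
        out[i] = np.clip(dark + np.random.normal(0.0, noise_sigma, dark.shape),
                         0.0, 1.0)
    return out
```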
Related papers
- Efficient Video Face Enhancement with Enhanced Spatial-Temporal Consistency [36.939731355462264]
This study proposes a novel and efficient blind video face enhancement method.
It restores high-quality videos from their compressed low-quality versions with an effective de-flickering mechanism.
Experiments conducted on the VFHQ-Test dataset demonstrate that our method surpasses the current state-of-the-art blind face video restoration and de-flickering methods in both efficiency and effectiveness.
arXiv Detail & Related papers (2024-11-25T15:14:36Z)
- Learning Truncated Causal History Model for Video Restoration [14.381907888022615]
TURTLE learns the truncated causal history model for efficient and high-performing video restoration.
We report new state-of-the-art results on a multitude of video restoration benchmark tasks.
arXiv Detail & Related papers (2024-10-04T21:31:02Z)
- BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels.
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE), and a comprehensive evaluation shows that models trained on our dataset outperform those trained on existing datasets.
arXiv Detail & Related papers (2024-07-03T22:41:49Z)
- SF-V: Single Forward Video Generation Model [57.292575082410785]
We propose a novel approach to obtain single-step video generation models by leveraging adversarial training to fine-tune pre-trained models.
Experiments demonstrate that our method achieves competitive generation quality for synthesized videos with significantly reduced computational overhead.
arXiv Detail & Related papers (2024-06-06T17:58:27Z)
- Style-A-Video: Agile Diffusion for Arbitrary Text-based Video Style Transfer [13.098901971644656]
This paper proposes a zero-shot video stylization method named Style-A-Video.
It uses a generative pre-trained transformer with an image latent diffusion model to achieve concise, text-controlled video stylization.
Tests show that we can attain superior content preservation and stylistic performance while consuming fewer resources than previous solutions.
arXiv Detail & Related papers (2023-05-09T14:03:27Z)
- Deep Video Prior for Video Consistency and Propagation [58.250209011891904]
We present a novel and general approach for blind video temporal consistency.
Our method is trained directly on a single pair of original and processed videos rather than on a large dataset.
We show that temporal consistency can be achieved by training a convolutional neural network on a video with Deep Video Prior.
arXiv Detail & Related papers (2022-01-27T16:38:52Z)
- Encoding in the Dark Grand Challenge: An Overview [60.9261003831389]
We propose a Grand Challenge on encoding low-light video sequences.
VVC achieves high performance compared to simply denoising the video source prior to encoding.
The quality of the video streams can be further improved by employing a post-processing image enhancement method.
arXiv Detail & Related papers (2020-05-07T08:22:56Z)
- Non-Adversarial Video Synthesis with Learned Priors [53.26777815740381]
We focus on the problem of generating videos from latent noise vectors, without any reference input frames.
We develop a novel approach that jointly optimizes the input latent space, the weights of a recurrent neural network, and a generator through non-adversarial learning.
Our approach generates videos of superior quality compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2020-03-21T02:57:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.