Deep Parametric 3D Filters for Joint Video Denoising and Illumination
Enhancement in Video Super Resolution
- URL: http://arxiv.org/abs/2207.01797v1
- Date: Tue, 5 Jul 2022 03:57:25 GMT
- Title: Deep Parametric 3D Filters for Joint Video Denoising and Illumination
Enhancement in Video Super Resolution
- Authors: Xiaogang Xu, Ruixing Wang, Chi-Wing Fu, Jiaya Jia
- Abstract summary: This paper presents a new parametric representation called Deep Parametric 3D Filters (DP3DF).
DP3DF incorporates local information to enable simultaneous denoising, illumination enhancement, and SR efficiently in a single encoder-and-decoder network.
Also, a dynamic residual frame is jointly learned with the DP3DF via a shared backbone to further boost the SR quality.
- Score: 96.89588203312451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the quality improvement brought by the recent methods, video
super-resolution (SR) is still very challenging, especially for videos that are
low-light and noisy. The current best practice is to sequentially apply the best individual models for video SR, denoising, and illumination enhancement, but doing so often lowers the image quality due to the inconsistency between the models. This
paper presents a new parametric representation called the Deep Parametric 3D
Filters (DP3DF), which incorporates local spatiotemporal information to enable
simultaneous denoising, illumination enhancement, and SR efficiently in a
single encoder-and-decoder network. Also, a dynamic residual frame is jointly
learned with the DP3DF via a shared backbone to further boost the SR quality.
We performed extensive experiments, including a large-scale user study, to show
our method's effectiveness. Our method consistently surpasses the best
state-of-the-art methods on all the challenging real datasets with top PSNR and
user ratings, yet having a very fast run time.
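The core filtering idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name, the kernel layout, and the use of single-channel frames are simplifying assumptions; in the paper the per-pixel 3D kernels and the residual frame are predicted by a shared encoder-decoder backbone.

```python
import numpy as np

def apply_parametric_3d_filter(frames, kernels, residual):
    """Apply one predicted 3D filter per output pixel over a
    spatiotemporal window, then add a residual frame.

    frames:   (T, H, W) grayscale window centered on the target frame
    kernels:  (H, W, T, k, k) per-pixel 3D kernels (assumed to be
              predicted by a network; here supplied directly)
    residual: (H, W) jointly learned residual frame
    """
    T, H, W = frames.shape
    k = kernels.shape[-1]
    pad = k // 2
    # Pad spatially so every pixel has a full k x k neighborhood.
    padded = np.pad(frames, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            patch = padded[:, y:y + k, x:x + k]   # (T, k, k) neighborhood
            out[y, x] = np.sum(patch * kernels[y, x])
    return out + residual
```

Because each output pixel gets its own kernel, a single pass can denoise (weighted averaging), brighten (kernel weights summing above one), and sharpen for SR, which is the appeal of folding the three tasks into one predicted representation.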
Related papers
- MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion [3.7270979204213446]
We present four key contributions to address the challenges of video processing.
First, we introduce the 3D Inverted Vector-Quantization Variational Autoencoder.
Second, we present MotionAura, a text-to-video generation framework.
Third, we propose a spectral transformer-based denoising network.
Fourth, we introduce a downstream task of Sketch Guided Videopainting.
arXiv Detail & Related papers (2024-10-10T07:07:56Z)
- Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution [151.1255837803585]
We propose a novel approach, pursuing Spatial Adaptation and Temporal Coherence (SATeCo) for video super-resolution.
SATeCo pivots on learning spatial-temporal guidance from low-resolution videos to calibrate both latent-space high-resolution video denoising and pixel-space video reconstruction.
Experiments conducted on the REDS4 and Vid4 datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-03-25T17:59:26Z)
- FastLLVE: Real-Time Low-Light Video Enhancement with Intensity-Aware Lookup Table [21.77469059123589]
We propose an efficient pipeline named FastLLVE to maintain inter-frame brightness consistency effectively.
FastLLVE can process 1080p videos at 50+ Frames Per Second (FPS), which is 2x faster than CNN-based methods in inference time.
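The lookup-table idea behind this speed can be sketched briefly. This is a stand-in, not FastLLVE itself: FastLLVE learns its intensity-aware LUT from the video, whereas the sketch below uses a fixed gamma curve; the function names are illustrative.

```python
import numpy as np

def build_gamma_lut(gamma=0.6, size=256):
    # A fixed gamma curve that brightens dark values; FastLLVE would
    # instead predict an intensity-aware LUT from the input video.
    x = np.linspace(0.0, 1.0, size)
    return np.clip(x ** gamma, 0.0, 1.0)

def enhance_video(frames, lut):
    """Apply the same LUT to every frame (uint8 in, uint8 out).

    Sharing one table across frames is what keeps inter-frame
    brightness consistent: identical pixel values always map to
    identical outputs, with no per-frame flicker.
    """
    idx = frames.astype(np.int32)          # pixel value -> table index
    return (lut[idx] * 255.0 + 0.5).astype(np.uint8)
```

Since enhancement reduces to a table lookup per pixel, the per-frame cost is trivially parallel and independent of network depth, which is why LUT-based pipelines reach real-time rates.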
arXiv Detail & Related papers (2023-08-13T11:54:14Z)
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
- Towards Interpretable Video Super-Resolution via Alternating Optimization [115.85296325037565]
We study a practical space-time video super-resolution (STVSR) problem which aims at generating a high-framerate high-resolution sharp video from a low-framerate blurry video.
We propose an interpretable STVSR framework by leveraging both model-based and learning-based methods.
arXiv Detail & Related papers (2022-07-21T21:34:05Z)
- Image Super-Resolution via Iterative Refinement [53.57766722279425]
SR3 is an approach to image Super-Resolution via Repeated Refinement.
It adapts probabilistic denoising diffusion models to conditional image generation.
It exhibits strong performance on super-resolution tasks at different magnification factors.
arXiv Detail & Related papers (2021-04-15T17:50:42Z)
- A Novel Unified Model for Multi-exposure Stereo Coding Based on Low Rank Tucker-ALS and 3D-HEVC [0.6091702876917279]
We propose an efficient scheme for coding multi-exposure stereo images based on a tensor low-rank approximation scheme.
Multi-exposure fusion can be performed at the decoder to generate HDR stereo output with increased realism and binocular 3D depth cues.
Encoding with 3D-HEVC enhances the scheme's efficiency by exploiting intra-frame, inter-view, and inter-component redundancies in the low-rank approximated representation.
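The low-rank step can be illustrated with a truncated HOSVD, a one-shot stand-in for the Tucker-ALS factorization the paper uses (ALS would iteratively refine these factors); function names and the mode-unfolding convention below are assumptions for the sketch.

```python
import numpy as np

def unfold(t, mode):
    # Mode-n unfolding: bring the chosen axis to the front, flatten the rest.
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def tucker_hosvd(t, ranks):
    """Truncated HOSVD: leading left singular vectors of each unfolding
    give the factor matrices; projecting onto them gives the core."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for mode, u in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    # Multiply the core by each factor matrix along its mode.
    t = core
    for mode, u in enumerate(factors):
        t = np.moveaxis(
            np.tensordot(u, np.moveaxis(t, mode, 0), axes=1), 0, mode)
    return t
```

Choosing ranks below the tensor dimensions gives the compressed representation that 3D-HEVC then encodes; with full ranks the reconstruction is exact.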
arXiv Detail & Related papers (2021-04-10T10:10:14Z)
- DynaVSR: Dynamic Adaptive Blind Video Super-Resolution [60.154204107453914]
DynaVSR is a novel meta-learning-based framework for real-world video SR.
We train a multi-frame downscaling module with various types of synthetic blur kernels, which is seamlessly combined with a video SR network for input-aware adaptation.
Experimental results show that DynaVSR consistently improves the performance of the state-of-the-art video SR models by a large margin.
arXiv Detail & Related papers (2020-11-09T15:07:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.