DynaVSR: Dynamic Adaptive Blind Video Super-Resolution
- URL: http://arxiv.org/abs/2011.04482v1
- Date: Mon, 9 Nov 2020 15:07:32 GMT
- Title: DynaVSR: Dynamic Adaptive Blind Video Super-Resolution
- Authors: Suyoung Lee, Myungsub Choi, Kyoung Mu Lee
- Abstract summary: DynaVSR is a novel meta-learning-based framework for real-world video SR.
We train a multi-frame downscaling module with various types of synthetic blur kernels, which is seamlessly combined with a video SR network for input-aware adaptation.
Experimental results show that DynaVSR consistently improves the performance of the state-of-the-art video SR models by a large margin.
- Score: 60.154204107453914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most conventional supervised super-resolution (SR) algorithms assume that
low-resolution (LR) data is obtained by downscaling high-resolution (HR) data
with a fixed known kernel, but such an assumption often does not hold in real
scenarios. Some recent blind SR algorithms have been proposed to estimate
different downscaling kernels for each input LR image. However, they suffer
from heavy computational overhead, making them infeasible for direct
application to videos. In this work, we present DynaVSR, a novel
meta-learning-based framework for real-world video SR that enables efficient
downscaling model estimation and adaptation to the current input. Specifically,
we train a multi-frame downscaling module with various types of synthetic blur
kernels, which is seamlessly combined with a video SR network for input-aware
adaptation. Experimental results show that DynaVSR consistently improves the
performance of the state-of-the-art video SR models by a large margin, with an
order of magnitude faster inference time compared to the existing blind SR
approaches.
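The input-aware adaptation described above can be pictured as a short inner adaptation loop: a learned multi-frame downscaler turns the current LR clip into pseudo training pairs, the SR network is fine-tuned on those pairs for a few steps, and the adapted network then super-resolves the clip. The following is a minimal sketch of that idea, not the released DynaVSR code; the module and network classes (MultiFrameDownscaler, VSRNet) and all hyperparameters are assumptions.
```python
# Minimal sketch of input-aware adaptation for blind video SR
# (illustrative only; the downscaler/VSR modules and all
# hyperparameters are assumptions, not the DynaVSR release).
import copy
import torch
import torch.nn.functional as F

def adapt_and_super_resolve(lr_clip, downscaler, vsr_net, steps=5, step_size=1e-4):
    """lr_clip: (1, T, C, H, W) low-resolution input frames."""
    # Copy the modules so adaptation stays specific to this clip.
    ds = copy.deepcopy(downscaler)
    sr = copy.deepcopy(vsr_net)
    opt = torch.optim.Adam(list(ds.parameters()) + list(sr.parameters()), lr=step_size)

    for _ in range(steps):
        # Pseudo pairs from the input itself: the LR clip plays the
        # role of "HR", and the downscaler predicts an even smaller clip.
        slr_clip = ds(lr_clip)          # (1, T, C, H/s, W/s)
        recon = sr(slr_clip)            # try to recover lr_clip
        loss = F.l1_loss(recon, lr_clip)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Super-resolve the actual input with the adapted network.
    with torch.no_grad():
        return sr(lr_clip)
```
In practice the downscaling module would be meta-trained over many synthetic blur kernels so that a handful of these test-time updates suffices.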
Related papers
- Enhanced Super-Resolution Training via Mimicked Alignment for Real-World Scenes [51.92255321684027]
We propose a novel plug-and-play module designed to mitigate misalignment issues by aligning LR inputs with HR images during training.
Specifically, our approach mimics a new LR sample that aligns with the HR image while preserving the characteristics of the original LR samples.
We comprehensively evaluate our method on synthetic and real-world datasets, demonstrating its effectiveness across a spectrum of SR models.
arXiv Detail & Related papers (2024-10-07T18:18:54Z)
- S2R: Exploring a Double-Win Transformer-Based Framework for Ideal and Blind Super-Resolution [5.617008573997855]
A light-weight transformer-based SR model (S2R transformer) and a novel coarse-to-fine training strategy are proposed.
The proposed S2R outperforms other single-image SR models under the ideal SR condition with only 578K parameters.
It also achieves better visual results than regular blind SR models under blind, fuzzy conditions with only 10 gradient updates.
arXiv Detail & Related papers (2023-08-16T04:27:44Z)
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
- DCS-RISR: Dynamic Channel Splitting for Efficient Real-world Image Super-Resolution [15.694407977871341]
Real-world image super-resolution (RISR) has received increasing attention for improving the quality of SR images under unknown, complex degradations.
Existing methods rely on heavy SR models to enhance low-resolution (LR) images across different degradation levels.
We propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR.
arXiv Detail & Related papers (2022-12-15T04:34:57Z)
- Benchmark Dataset and Effective Inter-Frame Alignment for Real-World Video Super-Resolution [65.20905703823965]
Video super-resolution (VSR), which aims to reconstruct a high-resolution (HR) video from its low-resolution (LR) counterpart, has made tremendous progress in recent years.
However, it remains challenging to deploy existing VSR methods on real-world data with complex degradations.
The proposed EAVSR uses a multi-layer adaptive spatial transform network (MultiAdaSTN) to refine the offsets provided by a pre-trained optical flow estimation network.
arXiv Detail & Related papers (2022-12-10T17:41:46Z)
- Blind Super-Resolution for Remote Sensing Images via Conditional Stochastic Normalizing Flows [14.882417028542855]
We propose BlindSRSNF, a novel blind SR framework based on conditional stochastic normalizing flows, to address these problems.
BlindSRSNF learns the conditional probability distribution over the high-resolution image space given a low-resolution (LR) image by explicitly optimizing the variational bound on the likelihood.
We show that the proposed algorithm obtains SR results with excellent perceptual quality on both simulated LR and real-world remote sensing images (RSIs).
arXiv Detail & Related papers (2022-10-14T12:37:32Z)
- Self-Supervised Deep Blind Video Super-Resolution [46.410705294831374]
We propose a self-supervised learning method to solve the blind video SR problem.
We generate auxiliary paired data from the original LR videos according to the image formation model of video SR.
Experiments show that our method performs favorably against state-of-the-art ones on benchmarks and real-world videos.
arXiv Detail & Related papers (2022-01-19T05:18:44Z)
- Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) approach that ensures frequency-domain consistency when applying super-resolution (SR) methods to real scenes.
We estimate degradation kernels from unsupervised real-world images and use them to generate the corresponding low-resolution (LR) images.
Based on the resulting domain-consistent LR-HR pairs, we train easily implemented convolutional neural network (CNN) SR models; a minimal sketch of this pair-generation idea appears after the last entry in this list.
arXiv Detail & Related papers (2020-12-18T08:25:39Z)
- Video Face Super-Resolution with Motion-Adaptive Feedback Cell [90.73821618795512]
Video super-resolution (VSR) methods have recently achieved remarkable success thanks to the development of deep convolutional neural networks (CNNs).
In this paper, we propose the Motion-Adaptive Feedback Cell (MAFC), a simple but effective block that efficiently captures motion compensation information and feeds it back to the network in an adaptive way.
arXiv Detail & Related papers (2020-02-15T13:14:10Z)
- Deep Video Super-Resolution using HR Optical Flow Estimation [42.86066957681113]
Video super-resolution (SR) aims at generating a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts.
Existing deep learning based methods commonly estimate optical flows between LR frames to provide temporal dependency.
We propose an end-to-end video SR network to super-resolve both optical flows and images.
arXiv Detail & Related papers (2020-01-06T07:25:24Z)
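Several of the entries above (notably FCA and the self-supervised blind VSR method) share a common pattern: estimate a realistic degradation kernel, apply it to clean frames to synthesize domain-consistent LR-HR pairs, and train a standard SR network on those pairs. The sketch below illustrates only that pair-generation idea; the estimated kernel, the SR model, and all shapes are assumptions, not code from any of the papers listed.
```python
# Minimal sketch of degradation-aware training-pair generation
# (illustrative only; the kernel-estimation step, the SR model, and
# all hyperparameters are assumptions, not code from the papers above).
import torch
import torch.nn.functional as F

def make_lr(hr, kernel, scale=4):
    """Blur HR frames (N, C, H, W) with an estimated kernel, then subsample."""
    c = hr.shape[1]
    k = kernel[None, None].repeat(c, 1, 1, 1)        # depthwise blur weights
    pad = kernel.shape[-1] // 2
    blurred = F.conv2d(F.pad(hr, [pad] * 4, mode="reflect"), k, groups=c)
    return blurred[..., ::scale, ::scale]            # simple subsampling

def train_step(hr_batch, kernel, sr_model, optimizer):
    """One supervised step on a synthesized domain-consistent LR-HR pair."""
    lr_batch = make_lr(hr_batch, kernel)             # synthesized LR input
    loss = F.l1_loss(sr_model(lr_batch), hr_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
DynaVSR differs from this fixed-kernel recipe in that the downscaling model itself is estimated and adapted per input at test time.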