Deep Video Super-Resolution using HR Optical Flow Estimation
- URL: http://arxiv.org/abs/2001.02129v1
- Date: Mon, 6 Jan 2020 07:25:24 GMT
- Title: Deep Video Super-Resolution using HR Optical Flow Estimation
- Authors: Longguang Wang, Yulan Guo, Li Liu, Zaiping Lin, Xinpu Deng and Wei An
- Abstract summary: Video super-resolution (SR) aims at generating a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts.
Existing deep learning based methods commonly estimate optical flows between LR frames to provide temporal dependency.
We propose an end-to-end video SR network to super-resolve both optical flows and images.
- Score: 42.86066957681113
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video super-resolution (SR) aims at generating a sequence of high-resolution
(HR) frames with plausible and temporally consistent details from their
low-resolution (LR) counterparts. The key challenge for video SR lies in the
effective exploitation of temporal dependency between consecutive frames.
Existing deep learning based methods commonly estimate optical flows between LR
frames to provide temporal dependency. However, the resolution conflict between
LR optical flows and HR outputs hinders the recovery of fine details. In this
paper, we propose an end-to-end video SR network to super-resolve both optical
flows and images. Optical flow SR from LR frames provides accurate temporal
dependency and ultimately improves video SR performance. Specifically, we first
propose an optical flow reconstruction network (OFRnet) to infer HR optical
flows in a coarse-to-fine manner. Then, motion compensation is performed using
HR optical flows to encode temporal dependency. Finally, compensated LR inputs
are fed to a super-resolution network (SRnet) to generate SR results. Extensive
experiments have been conducted to demonstrate the effectiveness of HR optical
flows for SR performance improvement. Comparative results on the Vid4 and
DAVIS-10 datasets show that our network achieves the state-of-the-art
performance.
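The motion-compensation step described above (backward-warping a neighboring frame with the estimated HR optical flow before feeding compensated inputs to SRnet) can be sketched generically. The function below is a minimal NumPy illustration of backward warping with bilinear sampling, assuming a flow stored as per-pixel (dx, dy) displacements; it is not the authors' implementation, and the name `warp` and the flow convention are assumptions.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp a single-channel image with a dense optical flow.

    image: (H, W) float array.
    flow:  (H, W, 2) array of (dx, dy) displacements mapping each target
           pixel to its sampling location in the source frame.
    Sampling is bilinear; coordinates are clamped to the image border.
    """
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    # Coordinates to sample from in the source frame.
    sx = np.clip(xs + flow[..., 0], 0, W - 1)
    sy = np.clip(ys + flow[..., 1], 0, H - 1)
    # Integer corners and bilinear weights.
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    wx = sx - x0; wy = sy - y0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

With a zero flow this returns the input unchanged, and an integer flow shifts the image accordingly, which is a quick sanity check for any warping routine used in flow-based video SR.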
Related papers
- Motion-Guided Latent Diffusion for Temporally Consistent Real-world Video Super-resolution [15.197746480157651]
We propose an effective real-world VSR algorithm by leveraging the strength of pre-trained latent diffusion models.
We exploit the temporal dynamics in LR videos to guide the diffusion process by optimizing the latent sampling path with a motion-guided loss.
The proposed motion-guided latent diffusion based VSR algorithm achieves significantly better perceptual quality than state-of-the-arts on real-world VSR benchmark datasets.
arXiv Detail & Related papers (2023-12-01T14:40:07Z)
- Look Back and Forth: Video Super-Resolution with Explicit Temporal Difference Modeling [105.69197687940505]
We propose to explore the role of explicit temporal difference modeling in both LR and HR space.
To further enhance the super-resolution result, not only are spatial residual features extracted, but the difference between consecutive frames in the high-frequency domain is also computed.
arXiv Detail & Related papers (2022-04-14T17:07:33Z)
- Blind Motion Deblurring Super-Resolution: When Dynamic Spatio-Temporal Learning Meets Static Image Understanding [87.5799910153545]
Single-image super-resolution (SR) and multi-frame SR are two ways to super-resolve low-resolution images.
Blind Motion Deblurring Super-Resolution Networks are proposed to learn dynamic spatio-temporal information from single static motion-blurred images.
arXiv Detail & Related papers (2021-05-27T11:52:45Z)
- DynaVSR: Dynamic Adaptive Blind Video Super-Resolution [60.154204107453914]
DynaVSR is a novel meta-learning-based framework for real-world video SR.
We train a multi-frame downscaling module with various types of synthetic blur kernels, which is seamlessly combined with a video SR network for input-aware adaptation.
Experimental results show that DynaVSR consistently improves the performance of the state-of-the-art video SR models by a large margin.
arXiv Detail & Related papers (2020-11-09T15:07:32Z)
- Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution [95.26202278535543]
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR).
However, temporal interpolation and spatial super-resolution are intra-related in this task.
We propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from a low-frame-rate (LFR), LR video.
arXiv Detail & Related papers (2020-02-26T16:59:48Z)
- Video Face Super-Resolution with Motion-Adaptive Feedback Cell [90.73821618795512]
Video super-resolution (VSR) methods have recently achieved remarkable success due to the development of deep convolutional neural networks (CNNs).
In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block that efficiently captures motion compensation information and feeds it back to the network in an adaptive way.
arXiv Detail & Related papers (2020-02-15T13:14:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.