Efficient Space-time Video Super Resolution using Low-Resolution Flow
and Mask Upsampling
- URL: http://arxiv.org/abs/2104.05778v1
- Date: Mon, 12 Apr 2021 19:11:57 GMT
- Title: Efficient Space-time Video Super Resolution using Low-Resolution Flow
and Mask Upsampling
- Authors: Saikat Dutta, Nisarg A. Shah, Anurag Mittal
- Abstract summary: This paper aims to generate High-resolution Slow-motion videos from Low-Resolution and Low-Frame-Rate videos.
A simplistic solution is the sequential running of Video Super-Resolution and Video Frame Interpolation models.
Our model is lightweight and performs better than current state-of-the-art models on the REDS STSR validation set.
- Score: 12.856102293479486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores an efficient solution for Space-time Super-Resolution,
aiming to generate High-resolution Slow-motion videos from Low-Resolution and
Low-Frame-Rate videos. A simplistic solution is to run Video Super-Resolution
and Video Frame Interpolation models sequentially. However, such solutions are
memory inefficient, have high inference time, and cannot properly exploit the
space-time relationship of the task. To this end, we first interpolate in LR
space using quadratic modeling. The input LR frames are super-resolved using a
state-of-the-art Video Super-Resolution method. The flow maps and blending mask
used to synthesize the LR interpolated frame are reused in HR space via
bilinear upsampling. This yields a coarse estimate of the HR intermediate
frame, which often contains artifacts along motion boundaries. We use a
refinement network to improve the quality of the HR intermediate frame via
residual learning. Our model is lightweight and performs better than current
state-of-the-art models on the REDS STSR validation set.
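The key reuse step described in the abstract is that flow maps and the blending mask are estimated once in LR space and then bilinearly upsampled for reuse in HR space, so motion estimation never runs at high resolution. Below is a minimal PyTorch-style sketch of that idea under assumed tensor shapes; the helper names (quadratic_flow, backward_warp, coarse_hr_intermediate) are illustrative, not the authors' code, and the flow-reversal step and the residual refinement network are omitted.

```python
# Hedged sketch of LR flow/mask reuse for space-time super-resolution.
# Assumptions: 4x spatial upsampling, backward flows anchored at the
# interpolated time t are already available in LR space.
import torch
import torch.nn.functional as F

def quadratic_flow(f_0_1, f_0_m1, t):
    """Quadratic motion model (as in quadratic video interpolation):
    estimate the flow from frame 0 to time t in (0, 1) from the flows
    0->1 and 0->-1, assuming constant acceleration."""
    return 0.5 * (f_0_1 + f_0_m1) * t ** 2 + 0.5 * (f_0_1 - f_0_m1) * t

def backward_warp(img, flow):
    """Warp `img` (N, C, H, W) with a backward flow field (N, 2, H, W)."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]          # sampling x in pixels
    grid_y = ys.unsqueeze(0) + flow[:, 1]          # sampling y in pixels
    grid_x = 2.0 * grid_x / (w - 1) - 1.0          # normalize to [-1, 1]
    grid_y = 2.0 * grid_y / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)   # (N, H, W, 2)
    return F.grid_sample(img, grid, mode="bilinear", align_corners=True)

def coarse_hr_intermediate(hr_0, hr_1, lr_flow_t0, lr_flow_t1, lr_mask, scale=4):
    """Reuse LR flow maps and blending mask in HR space via bilinear
    upsampling. `lr_flow_t0`/`lr_flow_t1` are backward flows from the
    interpolated time t to frames 0 and 1; `lr_mask` is in [0, 1]."""
    up = lambda x: F.interpolate(x, scale_factor=scale, mode="bilinear",
                                 align_corners=False)
    flow_t0 = up(lr_flow_t0) * scale   # displacements grow with resolution
    flow_t1 = up(lr_flow_t1) * scale
    mask = up(lr_mask)
    warped_0 = backward_warp(hr_0, flow_t0)
    warped_1 = backward_warp(hr_1, flow_t1)
    # Mask-weighted blending gives the coarse HR intermediate frame,
    # which a refinement network would then improve via residual learning.
    return mask * warped_0 + (1.0 - mask) * warped_1
```

Note that the upsampled flow vectors are also multiplied by the scale factor, since pixel displacements grow with spatial resolution; the refinement network then only needs to correct the artifacts this coarse estimate leaves along motion boundaries.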
Related papers
- Towards Interpretable Video Super-Resolution via Alternating
Optimization [115.85296325037565]
We study a practical space-time video super-resolution (STVSR) problem which aims at generating a high-framerate high-resolution sharp video from a low-framerate blurry video.
We propose an interpretable STVSR framework by leveraging both model-based and learning-based methods.
arXiv Detail & Related papers (2022-07-21T21:34:05Z)
- Look Back and Forth: Video Super-Resolution with Explicit Temporal
Difference Modeling [105.69197687940505]
We propose to explore the role of explicit temporal difference modeling in both LR and HR space.
To further enhance the super-resolution result, not only spatial residual features are extracted, but the difference between consecutive frames in high-frequency domain is also computed.
arXiv Detail & Related papers (2022-04-14T17:07:33Z)
- RSTT: Real-time Spatial Temporal Transformer for Space-Time Video
Super-Resolution [13.089535703790425]
Space-time video super-resolution (STVSR) is the task of interpolating videos with both Low Frame Rate (LFR) and Low Resolution (LR) to produce High-Frame-Rate (HFR) and also High-Resolution (HR) counterparts.
We propose using a spatial-temporal transformer that naturally incorporates the spatial and temporal super resolution modules into a single model.
arXiv Detail & Related papers (2022-03-27T02:16:26Z)
- Memory-Augmented Non-Local Attention for Video Super-Resolution [61.55700315062226]
We propose a novel video super-resolution method that aims at generating high-fidelity high-resolution (HR) videos from low-resolution (LR) ones.
Previous methods predominantly leverage temporal neighbor frames to assist the super-resolution of the current frame.
In contrast, we devise a cross-frame non-local attention mechanism that allows video super-resolution without frame alignment.
arXiv Detail & Related papers (2021-08-25T05:12:14Z)
- Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video
Super-Resolution [100.11355888909102]
Space-time video super-resolution aims at generating a high-resolution (HR) slow-motion video from a low-resolution (LR) and low frame rate (LFR) video sequence.
We present a one-stage space-time video super-resolution framework, which can directly reconstruct an HR slow-motion video sequence from an input LR and LFR video.
arXiv Detail & Related papers (2021-04-15T17:59:23Z)
- Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video
Super-Resolution [95.26202278535543]
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR).
However, temporal interpolation and spatial super-resolution are intra-related in this task.
We propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video.
arXiv Detail & Related papers (2020-02-26T16:59:48Z)
- Video Face Super-Resolution with Motion-Adaptive Feedback Cell [90.73821618795512]
Video super-resolution (VSR) methods have recently achieved remarkable success due to the development of deep convolutional neural networks (CNNs).
In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block, which can efficiently capture the motion compensation and feed it back to the network in an adaptive way.
arXiv Detail & Related papers (2020-02-15T13:14:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.