Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video
Super-Resolution
- URL: http://arxiv.org/abs/2104.07473v1
- Date: Thu, 15 Apr 2021 17:59:23 GMT
- Title: Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video
Super-Resolution
- Authors: Xiaoyu Xiang, Yapeng Tian, Yulun Zhang, Yun Fu, Jan P. Allebach,
Chenliang Xu
- Abstract summary: Space-time video super-resolution aims at generating a high-resolution (HR) slow-motion video from a low-resolution (LR) and low frame rate (LFR) video sequence.
We present a one-stage space-time video super-resolution framework, which can directly reconstruct an HR slow-motion video sequence from an input LR and LFR video.
- Score: 100.11355888909102
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we address space-time video super-resolution, which aims
at generating a high-resolution (HR) slow-motion video from a low-resolution
(LR) and low frame rate (LFR) video sequence. A naïve method is to decompose
it into two sub-tasks: video frame interpolation (VFI) and video
super-resolution (VSR). Nevertheless, temporal interpolation and spatial
upscaling are intra-related in this problem, and two-stage approaches cannot
fully exploit this natural property. Besides, state-of-the-art VFI and VSR deep
networks usually rely on a large frame reconstruction module to obtain
high-quality photo-realistic video frames, which makes two-stage approaches
large and relatively time-consuming. To overcome these issues, we present a
one-stage space-time video super-resolution framework, which can directly
reconstruct an HR slow-motion video sequence from an input LR and LFR video.
Instead of reconstructing missing LR intermediate frames as VFI models do, we
temporally interpolate the features of the missing LR frames with a feature
temporal interpolation module that captures local temporal contexts. Extensive
experiments on widely used benchmarks demonstrate that the proposed framework
not only achieves better qualitative and quantitative performance on both clean
and noisy LR frames but also runs several times faster than recent
state-of-the-art two-stage networks. The source code is released at
https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020 .
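The one-stage design described above can be read as: extract features from the two input LR frames, synthesize the feature map of the missing intermediate frame directly in feature space, and then upsample every feature map to an HR frame in a single forward pass. The sketch below illustrates that flow in PyTorch. It is a minimal illustration, not the released Zooming Slow-Mo implementation: the paper builds its feature temporal interpolation on deformable convolutions and aggregates temporal information with a deformable ConvLSTM, whereas here a plain learned blend stands in, and all module and parameter names are illustrative assumptions.

```python
# Minimal, illustrative sketch of a one-stage STVSR pipeline in PyTorch.
# NOT the released Zooming Slow-Mo code: the real model uses deformable
# convolution for feature temporal interpolation and a bidirectional
# deformable ConvLSTM; a plain learned blend stands in here, and all
# module/parameter names are hypothetical.
import torch
import torch.nn as nn


class FeatureTemporalInterpolation(nn.Module):
    """Synthesizes the feature map of a missing intermediate frame from the
    features of its two LR neighbors (stand-in for the paper's module)."""

    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat_prev, feat_next):
        # Blend the two neighboring feature maps instead of interpolating
        # pixels, so local temporal context is handled in feature space.
        return self.fuse(torch.cat([feat_prev, feat_next], dim=1))


class OneStageSTVSR(nn.Module):
    """LR, LFR frames in -> HR, HFR frames out, in a single forward pass."""

    def __init__(self, channels: int = 64, scale: int = 4):
        super().__init__()
        self.extract = nn.Conv2d(3, channels, 3, padding=1)
        self.interp = FeatureTemporalInterpolation(channels)
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, frame0, frame1):
        f0 = self.extract(frame0)
        f1 = self.extract(frame1)
        f_mid = self.interp(f0, f1)  # feature of the missing middle frame
        # Reconstruct three HR frames: the two inputs plus the interpolated one.
        return [self.upsample(f) for f in (f0, f_mid, f1)]


if __name__ == "__main__":
    model = OneStageSTVSR()
    lr0 = torch.randn(1, 3, 32, 32)
    lr1 = torch.randn(1, 3, 32, 32)
    hr_frames = model(lr0, lr1)
    print([f.shape for f in hr_frames])  # three tensors of shape (1, 3, 128, 128)
```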
Related papers
- Continuous Space-Time Video Super-Resolution Utilizing Long-Range
Temporal Information [48.20843501171717]
We propose a continuous ST-VSR (CSTVSR) method that can convert the given video to any frame rate and spatial resolution.
We show that the proposed algorithm has good flexibility and achieves better performance on various datasets.
arXiv Detail & Related papers (2023-02-26T08:02:39Z)
- Towards Interpretable Video Super-Resolution via Alternating
Optimization [115.85296325037565]
We study a practical space-time video super-resolution (STVSR) problem which aims at generating a high-framerate high-resolution sharp video from a low-framerate blurry video.
We propose an interpretable STVSR framework by leveraging both model-based and learning-based methods.
arXiv Detail & Related papers (2022-07-21T21:34:05Z)
- Look Back and Forth: Video Super-Resolution with Explicit Temporal
Difference Modeling [105.69197687940505]
We propose to explore the role of explicit temporal difference modeling in both LR and HR space.
To further enhance the super-resolution result, not only are spatial residual features extracted, but the difference between consecutive frames is also computed in the high-frequency domain (see the sketch after this list).
arXiv Detail & Related papers (2022-04-14T17:07:33Z)
- RSTT: Real-time Spatial Temporal Transformer for Space-Time Video
Super-Resolution [13.089535703790425]
Space-time video super-resolution (STVSR) is the task of interpolating videos with both Low Frame Rate (LFR) and Low Resolution (LR) to produce High Frame Rate (HFR) and High Resolution (HR) counterparts.
We propose using a spatial-temporal transformer that naturally incorporates the spatial and temporal super resolution modules into a single model.
arXiv Detail & Related papers (2022-03-27T02:16:26Z)
- STDAN: Deformable Attention Network for Space-Time Video
Super-Resolution [39.18399652834573]
We propose a deformable attention network called STDAN for STVSR.
First, we devise a long-short term feature interpolation (LSTFI) module, which is capable of extracting abundant content from more neighboring input frames.
Second, we put forward a spatial-temporal deformable feature aggregation (STDFA) module, in which spatial and temporal contexts are adaptively captured and aggregated.
arXiv Detail & Related papers (2022-03-14T03:40:35Z)
- Efficient Space-time Video Super Resolution using Low-Resolution Flow
and Mask Upsampling [12.856102293479486]
This paper aims to generate high-resolution slow-motion videos from low-resolution, low-frame-rate videos.
A simplistic solution is the sequential running of video super-resolution and video frame interpolation models.
Our model is lightweight and performs better than current state-of-the-art models on the REDS STSR validation set.
arXiv Detail & Related papers (2021-04-12T19:11:57Z)
- Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video
Super-Resolution [95.26202278535543]
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR). However, temporal interpolation and spatial super-resolution are intra-related in this task.
We propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video.
arXiv Detail & Related papers (2020-02-26T16:59:48Z)
- Video Face Super-Resolution with Motion-Adaptive Feedback Cell [90.73821618795512]
Video super-resolution (VSR) methods have recently achieved remarkable success due to the development of deep convolutional neural networks (CNNs).
In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block, which can efficiently capture the motion compensation and feed it back to the network in an adaptive way.
arXiv Detail & Related papers (2020-02-15T13:14:10Z)
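As a companion to the "Look Back and Forth" entry above, the toy sketch below illustrates what explicit temporal difference modeling in the high-frequency domain can look like. It is not that paper's code: the high-pass filter (a frame minus a blurred copy of itself) and the encoder layers are placeholder assumptions chosen only to make the idea concrete.

```python
# Toy illustration of explicit temporal difference modeling, as described in
# the "Look Back and Forth" entry above. Not that paper's implementation:
# the high-pass filter and encoder layers are placeholder choices.
import torch
import torch.nn as nn
import torch.nn.functional as F


def high_frequency(x: torch.Tensor) -> torch.Tensor:
    """Crude high-frequency component: the frame minus its blurred version."""
    blurred = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
    return x - blurred


class TemporalDifferenceBranch(nn.Module):
    """Encodes the difference between consecutive frames alongside each frame."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.frame_enc = nn.Conv2d(3, channels, 3, padding=1)
        self.diff_enc = nn.Conv2d(3, channels, 3, padding=1)

    def forward(self, prev_frame, cur_frame):
        # Explicit temporal difference, computed in the high-frequency domain.
        diff = high_frequency(cur_frame) - high_frequency(prev_frame)
        return torch.cat([self.frame_enc(cur_frame), self.diff_enc(diff)], dim=1)


if __name__ == "__main__":
    branch = TemporalDifferenceBranch()
    prev_f, cur_f = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
    print(branch(prev_f, cur_f).shape)  # torch.Size([1, 64, 64, 64])
```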