Video Super-Resolution with Recurrent Structure-Detail Network
- URL: http://arxiv.org/abs/2008.00455v1
- Date: Sun, 2 Aug 2020 11:01:19 GMT
- Title: Video Super-Resolution with Recurrent Structure-Detail Network
- Authors: Takashi Isobe, Xu Jia, Shuhang Gu, Songjiang Li, Shengjin Wang, Qi Tian
- Abstract summary: Most video super-resolution methods super-resolve a single reference frame with the help of neighboring frames in a temporal sliding window.
We propose a novel recurrent video super-resolution method which is both effective and efficient in exploiting previous frames to super-resolve the current frame.
- Score: 120.1149614834813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most video super-resolution methods super-resolve a single reference frame with the help of neighboring frames in a temporal sliding window. They are less efficient than recurrent-based methods. In this work, we propose a novel recurrent video super-resolution method which is both effective and efficient in exploiting previous frames to super-resolve the current frame. It divides the input into structure and detail components, which are fed to a recurrent unit composed of several proposed two-stream structure-detail blocks. In addition, a hidden state adaptation module that allows the current frame to selectively use information from the hidden state is introduced to enhance robustness to appearance change and error accumulation. Extensive ablation studies validate the effectiveness of the proposed modules. Experiments on several benchmark datasets demonstrate the superior performance of the proposed method compared to state-of-the-art video super-resolution methods.
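The structure-detail split described in the abstract can be illustrated with a minimal low-pass/high-pass decomposition. This is a sketch only, assuming "structure" means a low-frequency (smoothed) component and "detail" its residual; the paper's actual decomposition is not specified in the abstract, and the `decompose` helper below is hypothetical.

```python
import numpy as np

def decompose(frame: np.ndarray, ksize: int = 5):
    """Split a frame into a low-frequency 'structure' component and a
    high-frequency 'detail' residual using a simple box blur.
    Illustrative only: any low-pass/high-pass split follows this pattern;
    the paper's exact operator may differ."""
    pad = ksize // 2
    padded = np.pad(frame, pad, mode="edge")
    h, w = frame.shape
    # Box-filter each pixel over a ksize x ksize neighbourhood.
    structure = np.zeros_like(frame, dtype=np.float64)
    for dy in range(ksize):
        for dx in range(ksize):
            structure += padded[dy:dy + h, dx:dx + w]
    structure /= ksize * ksize
    detail = frame - structure  # residual carries edges and fine texture
    return structure, detail

frame = np.random.rand(16, 16)
s, d = decompose(frame)
# By construction the two streams sum back to the input frame.
assert np.allclose(s + d, frame)
```

In a recurrent setup, the two components would then be processed by separate (but interacting) streams, which is the role the abstract assigns to the two-stream structure-detail blocks.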
Related papers
- Continuous Space-Time Video Super-Resolution Utilizing Long-Range Temporal Information [48.20843501171717]
We propose a continuous ST-VSR (CSTVSR) method that can convert the given video to any frame rate and spatial resolution.
We show that the proposed algorithm has good flexibility and achieves better performance on various datasets.
arXiv Detail & Related papers (2023-02-26T08:02:39Z)
- Memory-Augmented Non-Local Attention for Video Super-Resolution [61.55700315062226]
We propose a novel video super-resolution method that aims at generating high-fidelity high-resolution (HR) videos from low-resolution (LR) ones.
Previous methods predominantly leverage temporal neighbor frames to assist the super-resolution of the current frame.
In contrast, we devise a cross-frame non-local attention mechanism that allows video super-resolution without frame alignment.
arXiv Detail & Related papers (2021-08-25T05:12:14Z)
- Video Super-Resolution with Long-Term Self-Exemplars [38.81851816697984]
We propose a video super-resolution method with long-term cross-scale aggregation.
Our model also consists of a multi-reference alignment module to fuse the features derived from similar patches.
To evaluate our proposed method, we conduct extensive experiments on our collected CarCam dataset and the Open dataset.
arXiv Detail & Related papers (2021-06-24T06:07:13Z)
- All at Once: Temporally Adaptive Multi-Frame Interpolation with Advanced Motion Modeling [52.425236515695914]
State-of-the-art methods are iterative solutions that interpolate one frame at a time.
This work introduces a true multi-frame interpolator.
It utilizes a pyramidal style network in the temporal domain to complete the multi-frame task in one-shot.
arXiv Detail & Related papers (2020-07-23T02:34:39Z)
- Video Super-resolution with Temporal Group Attention [127.21615040695941]
We propose a novel method that can effectively incorporate temporal information in a hierarchical way.
The input sequence is divided into several groups, each corresponding to a different frame rate.
It achieves favorable performance against state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2020-07-21T04:54:30Z)
- Video Face Super-Resolution with Motion-Adaptive Feedback Cell [90.73821618795512]
Video super-resolution (VSR) methods have recently achieved remarkable success due to the development of deep convolutional neural networks (CNNs).
In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block, which can efficiently capture the motion compensation and feed it back to the network in an adaptive way.
arXiv Detail & Related papers (2020-02-15T13:14:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.