Video Super-Resolution with Long-Term Self-Exemplars
- URL: http://arxiv.org/abs/2106.12778v1
- Date: Thu, 24 Jun 2021 06:07:13 GMT
- Title: Video Super-Resolution with Long-Term Self-Exemplars
- Authors: Guotao Meng, Yue Wu, Sijin Li, Qifeng Chen
- Abstract summary: We propose a video super-resolution method with long-term cross-scale aggregation.
Our model also includes a multi-reference alignment module that fuses the features derived from similar patches.
To evaluate our proposed method, we conduct extensive experiments on our collected CarCam dataset and the Waymo Open dataset.
- Score: 38.81851816697984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing video super-resolution methods often utilize a few neighboring
frames to generate a higher-resolution image for each frame. However, the
redundant information between distant frames has not been fully exploited in
these methods: corresponding patches of the same instance appear across distant
frames at different scales. Based on this observation, we propose a video
super-resolution method with long-term cross-scale aggregation that leverages
similar patches (self-exemplars) across distant frames. Our model also includes
a multi-reference alignment module that fuses the features derived from similar
patches: the features of distant references are aggregated to perform high-quality
super-resolution. We also propose a novel and practical training strategy for
reference-based super-resolution. To evaluate the performance of our proposed
method, we conduct extensive experiments on our collected CarCam dataset and
the Waymo Open dataset, and the results demonstrate our method outperforms
state-of-the-art methods. Our source code will be publicly available.
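As an illustration of the cross-scale aggregation idea described in the abstract, the sketch below matches patches of a target frame against downscaled distant frames (self-exemplars) and fuses the matched features with a small convolutional module. This is a minimal, hypothetical PyTorch sketch, not the authors' released implementation; the names `cross_scale_match` and `MultiReferenceFusion` are placeholders.

```python
import torch
import torch.nn.functional as F


def cross_scale_match(target_feat, ref_feat, patch=3):
    """Match every target patch to its most similar reference patch (cosine similarity).

    target_feat: (C, H, W) features of the frame being super-resolved.
    ref_feat:    (C, Hr, Wr) features of a distant frame, downscaled so the same
                 instance appears at a comparable scale (a cross-scale self-exemplar).
    Returns a (C, H, W) map assembled from the best-matching reference patches.
    """
    c, h, w = target_feat.shape
    pad = patch // 2
    # Unfold both maps into flattened patch descriptors.
    t = F.unfold(target_feat[None], patch, padding=pad)[0].t()  # (H*W, C*k*k)
    r = F.unfold(ref_feat[None], patch, padding=pad)[0].t()     # (Hr*Wr, C*k*k)
    sim = F.normalize(t, dim=1) @ F.normalize(r, dim=1).t()     # (H*W, Hr*Wr)
    idx = sim.argmax(dim=1)                                     # best match per location
    matched = r[idx]                                            # (H*W, C*k*k)
    # Fold the matched patches back to a feature map, averaging overlapping contributions.
    out = F.fold(matched.t()[None], (h, w), patch, padding=pad)
    norm = F.fold(torch.ones_like(matched).t()[None], (h, w), patch, padding=pad)
    return (out / norm)[0]


class MultiReferenceFusion(torch.nn.Module):
    """Blend target features with features matched from several distant references."""

    def __init__(self, channels, num_refs):
        super().__init__()
        self.fuse = torch.nn.Conv2d(channels * (num_refs + 1), channels, 3, padding=1)

    def forward(self, target_feat, matched_feats):
        # Concatenate target and matched reference features along the channel axis.
        x = torch.cat([target_feat] + matched_feats, dim=0)[None]  # (1, C*(R+1), H, W)
        return self.fuse(x)[0]                                     # (C, H, W)


if __name__ == "__main__":
    C, H, W = 16, 32, 32
    target = torch.randn(C, H, W)
    # Two distant frames, downscaled by 2x so their content appears at a comparable scale.
    distant = [F.interpolate(torch.randn(1, C, 2 * H, 2 * W), scale_factor=0.5)[0]
               for _ in range(2)]
    matched = [cross_scale_match(target, ref) for ref in distant]
    fused = MultiReferenceFusion(C, num_refs=2)(target, matched)
    print(fused.shape)  # torch.Size([16, 32, 32])
```

The fused feature map would then feed a reconstruction network; the actual alignment and aggregation modules are described in the paper itself.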
Related papers
- Combining Contrastive and Supervised Learning for Video Super-Resolution Detection [0.0]
We propose a new upscaled-resolution-detection method based on learning visual representations with contrastive and cross-entropy losses.
Our method effectively detects upscaling even in compressed videos and outperforms the state-of-the-art alternatives.
arXiv Detail & Related papers (2022-05-20T18:58:13Z) - Memory-Augmented Non-Local Attention for Video Super-Resolution [61.55700315062226]
We propose a novel video super-resolution method that aims at generating high-fidelity high-resolution (HR) videos from low-resolution (LR) ones.
Previous methods predominantly leverage temporally neighboring frames to assist the super-resolution of the current frame.
In contrast, we devise a cross-frame non-local attention mechanism that allows video super-resolution without frame alignment.
arXiv Detail & Related papers (2021-08-25T05:12:14Z) - ARVo: Learning All-Range Volumetric Correspondence for Video Deblurring [92.40655035360729]
Video deblurring models exploit consecutive frames to remove blur caused by camera shake and object motion.
We propose a novel implicit method to learn spatial correspondence among blurry frames in the feature space.
Our proposed method is evaluated on the widely-adopted DVD dataset, along with a newly collected High-Frame-Rate (1000 fps) dataset for Video Deblurring.
arXiv Detail & Related papers (2021-03-07T04:33:13Z) - Video Super-Resolution with Recurrent Structure-Detail Network [120.1149614834813]
Most video super-resolution methods super-resolve a single reference frame with the help of neighboring frames in a temporal sliding window.
We propose a novel recurrent video super-resolution method which is both effective and efficient in exploiting previous frames to super-resolve the current frame.
arXiv Detail & Related papers (2020-08-02T11:01:19Z) - MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution [63.02785017714131]
Video super-resolution (VSR) aims to utilize multiple low-resolution frames to generate a high-resolution prediction for each frame.
Inter- and intra-frame correspondences are the key sources of temporal and spatial information.
We build an effective multi-correspondence aggregation network (MuCAN) for VSR.
arXiv Detail & Related papers (2020-07-23T05:41:27Z) - Video Super-resolution with Temporal Group Attention [127.21615040695941]
We propose a novel method that can effectively incorporate temporal information in a hierarchical way.
The input sequence is divided into several groups, with each group corresponding to a different frame rate.
It achieves favorable performance against state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2020-07-21T04:54:30Z)
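As a rough illustration of the grouping idea in the temporal group attention entry above, the sketch below splits neighboring frames into groups by temporal distance (a stand-in for the different frame rates) and weights the per-group features with a learned softmax attention. This is a hypothetical PyTorch sketch, not the authors' architecture; `group_frames` and `TemporalGroupAttention` are placeholder names.

```python
import torch
import torch.nn as nn


class TemporalGroupAttention(nn.Module):
    """Weight per-group feature maps with attention before merging them."""

    def __init__(self, channels, num_groups):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, 3, padding=1)  # per-location group score
        self.num_groups = num_groups

    def forward(self, group_feats):
        # group_feats: (num_groups, C, H, W), one fused feature map per group.
        scores = torch.stack([self.score(f[None])[0] for f in group_feats])  # (G, 1, H, W)
        weights = torch.softmax(scores, dim=0)          # attention over groups
        return (weights * group_feats).sum(dim=0)       # (C, H, W)


def group_frames(frames, center, offsets=((-1, 1), (-2, 2), (-3, 3))):
    """Split neighboring frames into groups by temporal distance to `center`.

    frames: list of (C, H, W) tensors. Each group here simply averages the two
    frames at the same distance, a crude stand-in for per-group fusion.
    """
    groups = []
    for left, right in offsets:
        pair = torch.stack([frames[center + left], frames[center + right]])
        groups.append(pair.mean(dim=0))
    return torch.stack(groups)  # (num_groups, C, H, W)


if __name__ == "__main__":
    frames = [torch.randn(8, 32, 32) for _ in range(7)]
    groups = group_frames(frames, center=3)
    fused = TemporalGroupAttention(channels=8, num_groups=3)(groups)
    print(fused.shape)  # torch.Size([8, 32, 32])
```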