A Lightweight Recurrent Grouping Attention Network for Video
Super-Resolution
- URL: http://arxiv.org/abs/2309.13940v1
- Date: Mon, 25 Sep 2023 08:21:49 GMT
- Title: A Lightweight Recurrent Grouping Attention Network for Video
Super-Resolution
- Authors: Yonggui Zhu, Guofang Li
- Abstract summary: We propose a lightweight recurrent grouping attention network to reduce the computational stress on the device.
The model has only 0.878M parameters, far fewer than those of current mainstream video super-resolution models.
Experiments demonstrate that our model achieves state-of-the-art performance on multiple datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective aggregation of temporal information of consecutive frames is the
core of achieving video super-resolution. Many scholars have utilized
sliding-window and recurrent structures to gather the spatio-temporal
information of frames. However, although the performance of the constructed VSR
models is improving, the size of the models is also increasing, exacerbating
the demands on hardware. Thus, to reduce the stress on the device, we
propose a novel lightweight recurrent grouping attention network. The
parameters of this model total only 0.878M, far fewer than those of current
mainstream video super-resolution models. We design a forward feature
extraction module and a backward feature extraction module to collect temporal
information between consecutive frames from two directions. Moreover, a new
grouping mechanism is proposed to efficiently collect spatio-temporal
information of the reference frame and its neighboring frames. The attention
supplementation module is presented to further enhance the information
gathering range of the model. The feature reconstruction module aims to
aggregate information from different directions to reconstruct high-resolution
features. Experiments demonstrate that our model achieves state-of-the-art
performance on multiple datasets.
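The abstract describes the architecture only at a high level. As a rough illustration of the bidirectional recurrent propagation and reconstruction it outlines, here is a minimal PyTorch sketch; the class names, channel counts, and plain convolutional cells are assumptions for illustration, not the authors' implementation (which adds the grouping mechanism and attention supplementation).

```python
# Minimal sketch of bidirectional recurrent VSR propagation.
# All module shapes and names are illustrative assumptions.
import torch
import torch.nn as nn

class RecurrentCell(nn.Module):
    """One propagation step: fuse the current frame with the hidden state."""
    def __init__(self, ch=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(3 + ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.1),
        )

    def forward(self, frame, hidden):
        return self.fuse(torch.cat([frame, hidden], dim=1))

class BidirectionalVSR(nn.Module):
    """Propagate features forward and backward in time, then reconstruct."""
    def __init__(self, ch=32, scale=4):
        super().__init__()
        self.forward_cell = RecurrentCell(ch)
        self.backward_cell = RecurrentCell(ch)
        # Pixel-shuffle upsampling from the two directions' fused features.
        self.reconstruct = nn.Sequential(
            nn.Conv2d(2 * ch, ch * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
        self.ch = ch

    def forward(self, frames):                     # frames: (B, T, 3, H, W)
        b, t, _, h, w = frames.shape
        hid = frames.new_zeros(b, self.ch, h, w)
        fwd = []
        for i in range(t):                         # forward-in-time pass
            hid = self.forward_cell(frames[:, i], hid)
            fwd.append(hid)
        hid = frames.new_zeros(b, self.ch, h, w)
        out = []
        for i in reversed(range(t)):               # backward-in-time pass
            hid = self.backward_cell(frames[:, i], hid)
            out.append(self.reconstruct(torch.cat([fwd[i], hid], dim=1)))
        return torch.stack(out[::-1], dim=1)       # (B, T, 3, sH, sW)

sr = BidirectionalVSR()(torch.randn(1, 5, 3, 32, 32))
print(sr.shape)  # torch.Size([1, 5, 3, 128, 128])
```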
Related papers
- Look Back and Forth: Video Super-Resolution with Explicit Temporal
Difference Modeling [105.69197687940505]
We propose to explore the role of explicit temporal difference modeling in both LR and HR space.
To further enhance the super-resolution result, we not only extract spatial residual features but also compute the difference between consecutive frames in the high-frequency domain.
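As a hedged sketch of the general idea of explicit temporal difference cues (not this paper's actual network), a reference frame can be fused with its forward and backward differences; the channel counts and fusion conv below are assumptions:

```python
# Sketch: temporal differences as explicit motion cues for a fusion layer.
import torch
import torch.nn as nn

class DifferenceFusion(nn.Module):
    """Fuse a reference frame with its forward/backward differences."""
    def __init__(self, c=3, ch=32):
        super().__init__()
        self.body = nn.Conv2d(3 * c, ch, 3, padding=1)

    def forward(self, prev_frame, cur_frame, next_frame):
        d_fwd = cur_frame - prev_frame   # motion cue from the past
        d_bwd = next_frame - cur_frame   # motion cue from the future
        return self.body(torch.cat([cur_frame, d_fwd, d_bwd], dim=1))

f = DifferenceFusion()
feat = f(*[torch.randn(1, 3, 16, 16) for _ in range(3)])  # (1, 32, 16, 16)
```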
arXiv Detail & Related papers (2022-04-14T17:07:33Z)
- STRPM: A Spatiotemporal Residual Predictive Model for High-Resolution Video Prediction [78.129039340528]
We propose a Spatiotemporal Residual Predictive Model (STRPM) for high-resolution video prediction.
Experimental results show that STRPM can generate more satisfactory results compared with various existing methods.
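The summary gives no architectural detail, but residual prediction in general can be sketched as below; the tiny CNN, its shapes, and the two-frame input are illustrative assumptions, not STRPM's design:

```python
# Sketch of residual prediction: predict only the change between frames
# and add it to the last observed frame.
import torch
import torch.nn as nn

class ResidualPredictor(nn.Module):
    def __init__(self, c=3, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * c, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, c, 3, padding=1),
        )

    def forward(self, frame_prev, frame_cur):
        # The network outputs a residual; the prediction reuses frame_cur.
        residual = self.net(torch.cat([frame_prev, frame_cur], dim=1))
        return frame_cur + residual  # predicted next frame
```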
arXiv Detail & Related papers (2022-03-30T06:24:00Z)
- STDAN: Deformable Attention Network for Space-Time Video Super-Resolution [39.18399652834573]
We propose a deformable attention network called STDAN for STVSR.
First, we devise a long-short term feature interpolation (LSTFI) module, which is capable of excavating abundant content from more neighboring input frames.
Second, we put forward a spatial-temporal deformable feature aggregation (STDFA) module, in which spatial and temporal contexts are adaptively captured and aggregated.
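A minimal sketch of deformable feature aggregation in this spirit, using torchvision's DeformConv2d: offsets predicted from the reference and neighbor features steer where the neighbor is sampled. All layer shapes are assumptions, and this is not the STDFA module itself:

```python
# Sketch: deformably align a neighbor frame's features onto the reference.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableAggregation(nn.Module):
    def __init__(self, ch=32, k=3):
        super().__init__()
        # 2 offsets (x, y) per kernel sample point.
        self.offset_pred = nn.Conv2d(2 * ch, 2 * k * k, 3, padding=1)
        self.deform = DeformConv2d(ch, ch, k, padding=k // 2)

    def forward(self, ref_feat, nbr_feat):
        offsets = self.offset_pred(torch.cat([ref_feat, nbr_feat], dim=1))
        aligned = self.deform(nbr_feat, offsets)  # sample neighbor adaptively
        return ref_feat + aligned                 # aggregate onto reference

agg = DeformableAggregation()
out = agg(torch.randn(1, 32, 16, 16), torch.randn(1, 32, 16, 16))
```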
arXiv Detail & Related papers (2022-03-14T03:40:35Z)
- A Novel Dual Dense Connection Network for Video Super-resolution [0.0]
Video super-resolution (VSR) refers to the reconstruction of high-resolution (HR) video from the corresponding low-resolution (LR) video.
We propose a novel dual dense connection network that can generate high-quality super-resolution (SR) results.
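A minimal sketch of dense connectivity, in which each layer receives all earlier outputs; the paper's specific dual wiring is not reproduced, and all shapes below are assumptions:

```python
# Sketch of a dense block: every layer sees the concatenation of all
# preceding feature maps, then a 1x1 conv fuses them back down.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, ch=32, growth=16, layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth, 3, padding=1)
            for i in range(layers)
        )
        self.fuse = nn.Conv2d(ch + layers * growth, ch, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.layers:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return self.fuse(torch.cat(feats, dim=1))
```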
arXiv Detail & Related papers (2022-03-05T12:21:29Z)
- An Efficient Recurrent Adversarial Framework for Unsupervised Real-Time Video Enhancement [132.60976158877608]
We propose an efficient adversarial video enhancement framework that learns directly from unpaired video examples.
In particular, our framework introduces new recurrent cells that consist of interleaved local and global modules for implicit integration of spatial and temporal information.
The proposed design allows our recurrent cells to efficiently propagate spatio-temporal information across frames and reduces the need for high-complexity networks.
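A hedged sketch of an interleaved local/global recurrent cell: a convolution captures neighborhood cues while a pooled, frame-wide descriptor modulates them. The shapes and gating design are assumptions, not the paper's cells:

```python
# Sketch: local conv features gated by a global (frame-wide) descriptor.
import torch
import torch.nn as nn

class LocalGlobalCell(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.local = nn.Conv2d(2 * ch, ch, 3, padding=1)  # neighborhood cues
        self.global_gate = nn.Sequential(                 # frame-wide cues
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid()
        )

    def forward(self, feat, hidden):
        local = torch.relu(self.local(torch.cat([feat, hidden], dim=1)))
        return local * self.global_gate(local)  # modulate by global stats
```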
arXiv Detail & Related papers (2020-12-24T00:03:29Z)
- Video Super-Resolution with Recurrent Structure-Detail Network [120.1149614834813]
Most video super-resolution methods super-resolve a single reference frame with the help of neighboring frames in a temporal sliding window.
We propose a novel recurrent video super-resolution method which is both effective and efficient in exploiting previous frames to super-resolve the current frame.
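One common way to realize a structure-detail split (a sketch of the general idea, not necessarily this paper's decomposition) is to treat a blurred copy as low-frequency structure and the residual as high-frequency detail:

```python
# Sketch: split a frame into low-frequency structure and residual detail,
# which separate branches can then process. Kernel size is an assumption.
import torch
import torch.nn.functional as F

def structure_detail_split(frame, k=5):
    """frame: (B, C, H, W) -> (structure, detail), summing back to frame."""
    structure = F.avg_pool2d(frame, k, stride=1, padding=k // 2)
    detail = frame - structure
    return structure, detail
```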
arXiv Detail & Related papers (2020-08-02T11:01:19Z)
- Video Super-resolution with Temporal Group Attention [127.21615040695941]
We propose a novel method that can effectively incorporate temporal information in a hierarchical way.
The input sequence is divided into several groups, with each one corresponding to a kind of frame rate.
It achieves favorable performance against state-of-the-art methods on several benchmark datasets.
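A minimal sketch of the grouping step, assuming a 7-frame window and temporal strides 1-3 so that each group mimics a different frame rate; the attention weighting over groups that follows in the paper is omitted:

```python
# Sketch: group neighbors of the reference frame by temporal stride.
import torch

def group_by_rate(frames, ref_idx, rates=(1, 2, 3)):
    """frames: (B, T, C, H, W); returns one (B, 3, C, H, W) group per rate."""
    groups = []
    for r in rates:
        idx = [ref_idx - r, ref_idx, ref_idx + r]
        groups.append(frames[:, idx])
    return groups

frames = torch.randn(1, 7, 3, 16, 16)
print([g.shape for g in group_by_rate(frames, ref_idx=3)])
```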
arXiv Detail & Related papers (2020-07-21T04:54:30Z)
- Unsupervised Video Decomposition using Spatio-temporal Iterative Inference [31.97227651679233]
Multi-object scene decomposition is a fast-emerging problem in representation learning.
We show that our model has a high accuracy even without color information.
We demonstrate the decomposition, segmentation, and prediction capabilities of our model and show that it outperforms the state-of-the-art on several benchmark datasets.
arXiv Detail & Related papers (2020-06-25T22:57:17Z)
- TAM: Temporal Adaptive Module for Video Recognition [60.83208364110288]
The temporal adaptive module (TAM) generates video-specific temporal kernels based on its own feature map.
Experiments on Kinetics-400 and Something-Something datasets demonstrate that our TAM outperforms other temporal modeling methods consistently.
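A hedged sketch of the core idea: a per-video temporal kernel is predicted from globally pooled features and applied as a weighted sum of temporal shifts. Sharing one kernel across channels and locations is a simplification of TAM, and the kernel size is an assumption:

```python
# Sketch: video-specific temporal kernel applied as shifted weighted sums.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAdaptiveModule(nn.Module):
    def __init__(self, ch=32, t_kernel=3):
        super().__init__()
        self.t_kernel = t_kernel
        self.predict = nn.Sequential(        # kernel from the clip itself
            nn.Linear(ch, ch), nn.ReLU(), nn.Linear(ch, t_kernel)
        )

    def forward(self, x):                        # x: (B, C, T, H, W)
        b, _, t, _, _ = x.shape
        ctx = x.mean(dim=(2, 3, 4))              # (B, C) clip descriptor
        kernel = self.predict(ctx).softmax(-1)   # (B, K) per-video kernel
        pad = self.t_kernel // 2
        xp = F.pad(x, (0, 0, 0, 0, pad, pad))    # pad the temporal axis
        out = torch.zeros_like(x)
        for j in range(self.t_kernel):           # weighted sum of time shifts
            out = out + kernel[:, j].view(b, 1, 1, 1, 1) * xp[:, :, j:j + t]
        return out

y = TemporalAdaptiveModule()(torch.randn(2, 32, 8, 4, 4))  # (2, 32, 8, 4, 4)
```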
arXiv Detail & Related papers (2020-05-14T08:22:45Z)