A Novel Dual Dense Connection Network for Video Super-resolution
- URL: http://arxiv.org/abs/2203.02723v1
- Date: Sat, 5 Mar 2022 12:21:29 GMT
- Title: A Novel Dual Dense Connection Network for Video Super-resolution
- Authors: Guofang Li and Yonggui Zhu
- Abstract summary: Video super-resolution (VSR) refers to the reconstruction of high-resolution (HR) video from the corresponding low-resolution (LR) video.
We propose a novel dual dense connection network that can generate high-quality super-resolution (SR) results.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video super-resolution (VSR) refers to the reconstruction of high-resolution
(HR) video from the corresponding low-resolution (LR) video. Recently, VSR has
received increasing attention. In this paper, we propose a novel dual dense
connection network that can generate high-quality super-resolution (SR)
results. The input frames are divided into a reference frame, a pre-temporal
group, and a post-temporal group, representing information from different time
periods. This grouping method provides accurate information of
different time periods without causing temporal information disorder. Meanwhile,
we propose a new loss function that improves the convergence of the model.
Experiments show that our model outperforms other advanced models on the Vid4
and SPMCS-11 datasets.
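The temporal grouping described in the abstract can be sketched as follows. This is a minimal illustration of the idea only; the window size, variable names, and helper function are assumptions for exposition, not the authors' code:

```python
# Hypothetical sketch of the paper's temporal grouping: given an odd-length
# window of input frames, the center frame is the reference frame to be
# super-resolved, earlier frames form the pre-temporal group, and later
# frames form the post-temporal group.
def group_frames(frames):
    """Split an odd-length frame window into (pre, reference, post)."""
    if len(frames) % 2 == 0:
        raise ValueError("expected an odd number of frames")
    center = len(frames) // 2
    pre_group = frames[:center]       # information from earlier time periods
    reference = frames[center]        # frame to be super-resolved
    post_group = frames[center + 1:]  # information from later time periods
    return pre_group, reference, post_group

# Example with a 5-frame window centered on frame t.
pre, ref, post = group_frames(["t-2", "t-1", "t", "t+1", "t+2"])
```

Keeping the two groups separate, rather than fusing all neighbors at once, is what lets the network treat past and future information as distinct time periods without mixing their temporal order.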
Related papers
- Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution [151.1255837803585]
We propose a novel approach, pursuing Spatial Adaptation and Temporal Coherence (SATeCo) for video super-resolution.
SATeCo pivots on learning spatial-temporal guidance from low-resolution videos to calibrate both latent-space high-resolution video denoising and pixel-space video reconstruction.
Experiments conducted on the REDS4 and Vid4 datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-03-25T17:59:26Z) - RBPGAN: Recurrent Back-Projection GAN for Video Super Resolution [2.265171676600799]
We propose Recurrent Back-Projection Generative Adversarial Network (RBPGAN) for video super resolution (VSR)
RBPGAN integrates two state-of-the-art models to get the best in both worlds without compromising the accuracy of produced video.
arXiv Detail & Related papers (2023-11-15T18:15:30Z) - A Lightweight Recurrent Grouping Attention Network for Video Super-Resolution [0.0]
We propose a lightweight recurrent grouping attention network to reduce the stress on hardware devices.
The parameters of this model are only 0.878M, which is much lower than the current mainstream model for studying video super-resolution.
Experiments demonstrate that our model achieves state-of-the-art performance on multiple datasets.
arXiv Detail & Related papers (2023-09-25T08:21:49Z) - Sliding Window Recurrent Network for Efficient Video Super-Resolution [0.0]
Video super-resolution (VSR) is the task of restoring high-resolution frames from a sequence of low-resolution inputs.
We propose a Sliding Window based Recurrent Network (SWRN) that enables real-time inference while still achieving superior performance.
Our experiment on REDS dataset shows that the proposed method can be well adapted to mobile devices and produce visually pleasant results.
arXiv Detail & Related papers (2022-08-24T15:23:44Z) - STIP: A SpatioTemporal Information-Preserving and Perception-Augmented Model for High-Resolution Video Prediction [78.129039340528]
We propose a SpatioTemporal Information-Preserving and Perception-Augmented Model (STIP) to solve the above two problems.
The proposed model aims to preserve the spatiotemporal information of videos during feature extraction and state transitions.
Experimental results show that the proposed STIP can predict videos with more satisfactory visual quality compared with a variety of state-of-the-art methods.
arXiv Detail & Related papers (2022-06-09T09:49:04Z) - Look Back and Forth: Video Super-Resolution with Explicit Temporal Difference Modeling [105.69197687940505]
We propose to explore the role of explicit temporal difference modeling in both LR and HR space.
To further enhance the super-resolution result, not only are spatial residual features extracted, but the difference between consecutive frames in the high-frequency domain is also computed.
arXiv Detail & Related papers (2022-04-14T17:07:33Z) - STRPM: A Spatiotemporal Residual Predictive Model for High-Resolution Video Prediction [78.129039340528]
We propose a Spatiotemporal Residual Predictive Model (STRPM) for high-resolution video prediction.
Experimental results show that STRPM can generate more satisfactory results compared with various existing methods.
arXiv Detail & Related papers (2022-03-30T06:24:00Z) - Revisiting Temporal Modeling for Video Super-resolution [47.90584361677039]
We study and compare three temporal modeling methods (2D CNN with early fusion, 3D CNN with slow fusion, and recurrent neural networks) for video super-resolution.
We also propose a novel Recurrent Residual Network (RRN) for efficient video super-resolution, where residual learning is utilized to stabilize the training of RNN.
arXiv Detail & Related papers (2020-08-13T09:09:37Z) - Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution [95.26202278535543]
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR).
However, temporal interpolation and spatial super-resolution are intra-related in this task.
We propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video.
arXiv Detail & Related papers (2020-02-26T16:59:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.