Boosting Video Super Resolution with Patch-Based Temporal Redundancy
Optimization
- URL: http://arxiv.org/abs/2207.08674v1
- Date: Mon, 18 Jul 2022 15:11:18 GMT
- Title: Boosting Video Super Resolution with Patch-Based Temporal Redundancy
Optimization
- Authors: Yuhao Huang, Hang Dong, Jinshan Pan, Chao Zhu, Yu Guo, Ding Liu, Lean
Fu, Fei Wang
- Abstract summary: We discuss the influence of temporal redundancy in patches with stationary objects and background.
We develop two simple yet effective plug-and-play methods to improve the performance of existing local and non-local propagation-based VSR algorithms.
- Score: 46.833568886576074
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of existing video super-resolution (VSR) algorithms stems mainly
from exploiting the temporal information in neighboring frames. However, none of these
methods has discussed the influence of temporal redundancy in patches with stationary
objects and background, and they usually use all the information in the adjacent frames
without any discrimination. In this paper, we observe that this temporal redundancy
adversely affects information propagation, which limits the performance of most existing
VSR methods. Motivated by this observation, we aim to improve existing VSR algorithms by
handling the temporally redundant patches in an optimized manner. We develop two simple
yet effective plug-and-play methods to improve the performance of existing local and
non-local propagation-based VSR algorithms on widely used public videos. To evaluate the
robustness and performance of existing VSR algorithms more comprehensively, we also
collect a new dataset containing a variety of public videos as a testing set. Extensive
evaluations show that the proposed methods significantly improve the performance of
existing VSR methods on the collected videos from wild scenarios while maintaining their
performance on existing commonly used datasets. The code is available at
https://github.com/HYHsimon/Boosted-VSR.
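The key idea is to discriminate patches dominated by stationary objects or background before they take part in temporal propagation. The sketch below is a minimal illustration of that idea, not the authors' implementation: it flags near-static patches with a simple mean-absolute-difference test between adjacent frames, and the function name, patch size, and threshold are illustrative assumptions.

```python
import numpy as np

def find_redundant_patches(prev_frame, cur_frame, patch_size=32, threshold=1.0):
    """Flag patches that are nearly unchanged between two adjacent frames.

    Both frames are H x W x C float arrays. Returns a boolean grid where
    True marks a temporally redundant (near-static) patch. Patch size and
    threshold are illustrative choices, not values from the paper.
    """
    h, w = cur_frame.shape[:2]
    rows, cols = h // patch_size, w // patch_size
    redundant = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            ys, xs = i * patch_size, j * patch_size
            patch_diff = np.abs(
                cur_frame[ys:ys + patch_size, xs:xs + patch_size]
                - prev_frame[ys:ys + patch_size, xs:xs + patch_size]
            )
            # A small mean absolute difference suggests a stationary patch.
            redundant[i, j] = patch_diff.mean() < threshold
    return redundant
```

A propagation-based VSR model could consult such a mask to reuse previously restored content for static patches, or to exclude them from alignment and propagation, which is the spirit of the plug-and-play methods described in the abstract.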
Related papers
- Cascaded Temporal Updating Network for Efficient Video Super-Resolution [47.63267159007611]
Key components in recurrent-based VSR networks significantly impact model efficiency.
We propose a cascaded temporal updating network (CTUN) for efficient VSR.
CTUN achieves a favorable trade-off between efficiency and performance compared to existing methods.
arXiv Detail & Related papers (2024-08-26T12:59:32Z) - Collaborative Feedback Discriminative Propagation for Video Super-Resolution [66.61201445650323]
The success of video super-resolution (VSR) methods stems mainly from exploring spatial and temporal information.
Inaccurate alignment usually leads to aligned features with significant artifacts.
Existing propagation modules only propagate features from the same timestep forward or backward.
arXiv Detail & Related papers (2024-04-06T22:08:20Z) - Benchmark Dataset and Effective Inter-Frame Alignment for Real-World
Video Super-Resolution [65.20905703823965]
Video super-resolution (VSR) aiming to reconstruct a high-resolution (HR) video from its low-resolution (LR) counterpart has made tremendous progress in recent years.
It remains challenging to deploy existing VSR methods to real-world data with complex degradations.
EAVSR takes the proposed multi-layer adaptive spatial transform network (MultiAdaSTN) to refine the offsets provided by the pre-trained optical flow estimation network.
arXiv Detail & Related papers (2022-12-10T17:41:46Z) - Sliding Window Recurrent Network for Efficient Video Super-Resolution [0.0]
Video super-resolution (VSR) is the task of restoring high-resolution frames from a sequence of low-resolution inputs.
We propose a Sliding Window based Recurrent Network (SWRN) that enables real-time inference while still achieving superior performance.
Our experiment on REDS dataset shows that the proposed method can be well adapted to mobile devices and produce visually pleasant results.
arXiv Detail & Related papers (2022-08-24T15:23:44Z) - Fast Online Video Super-Resolution with Deformable Attention Pyramid [172.16491820970646]
Video super-resolution (VSR) has many applications that pose strict causal, real-time, and latency constraints, including video streaming and TV.
We propose a recurrent VSR architecture based on a deformable attention pyramid (DAP).
arXiv Detail & Related papers (2022-02-03T17:49:04Z) - Real-Time Super-Resolution System of 4K-Video Based on Deep Learning [6.182364004551161]
Video super-resolution (VSR) technology excels in reconstructing low-quality video, avoiding the unpleasant blur caused by interpolation-based algorithms.
This paper explores the possibility of a real-time VSR system and designs an efficient generic VSR network, termed EGVSR.
Compared with TecoGAN, the most advanced VSR network at present, we achieve an 84% reduction in computation density and 7.92x performance speedups.
arXiv Detail & Related papers (2021-07-12T10:35:05Z) - Self-Supervised Adaptation for Video Super-Resolution [7.26562478548988]
Single-image super-resolution (SISR) networks can adapt their network parameters to specific input images.
We present a new learning algorithm that allows conventional video super-resolution (VSR) networks to adapt their parameters to test video frames.
arXiv Detail & Related papers (2021-03-18T08:30:24Z) - MuCAN: Multi-Correspondence Aggregation Network for Video
Super-Resolution [63.02785017714131]
Video super-resolution (VSR) aims to utilize multiple low-resolution frames to generate a high-resolution prediction for each frame.
Inter- and intra-frames are the key sources for exploiting temporal and spatial information.
We build an effective multi-correspondence aggregation network (MuCAN) for VSR.
arXiv Detail & Related papers (2020-07-23T05:41:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information on this site and is not responsible for any consequences of its use.