Omniscient Video Super-Resolution
- URL: http://arxiv.org/abs/2103.15683v1
- Date: Mon, 29 Mar 2021 15:09:53 GMT
- Title: Omniscient Video Super-Resolution
- Authors: Peng Yi, Zhongyuan Wang, Kui Jiang, Junjun Jiang, Tao Lu, Xin Tian, Jiayi Ma
- Abstract summary: In this paper, we propose an omniscient framework to not only utilize the preceding SR output, but also leverage the SR outputs from the present and future.
Our method is superior to the state-of-the-art methods in objective metrics, subjective visual effects and complexity.
- Score: 84.46939510200461
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most recent video super-resolution (SR) methods either adopt an iterative
manner to deal with low-resolution (LR) frames from a temporally sliding
window, or leverage the previously estimated SR output to help reconstruct the
current frame recurrently. A few studies try to combine these two structures into
a hybrid framework but have not fully exploited its potential. In this paper,
we propose an omniscient framework to not only utilize the preceding SR output,
but also leverage the SR outputs from the present and future. The omniscient
framework is more generic because the iterative, recurrent and hybrid
frameworks can be regarded as its special cases. The proposed omniscient
framework enables a generator to behave better than its counterparts under
other frameworks. Abundant experiments on public datasets show that our method
is superior to the state-of-the-art methods in objective metrics, subjective
visual effects and complexity. Our code will be made public.
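The core idea, processing each frame with SR estimates drawn from the past, present, and future, can be pictured with a minimal PyTorch sketch. The two-pass structure, module names (TinyGenerator, OmniscientVSR), layer sizes, and the simple concatenation-based fusion below are editorial assumptions for illustration only, not the authors' released architecture (which, among other things, also propagates recurrent hidden states).

```python
# Minimal sketch of the "omniscient" idea: refine each frame using SR
# estimates from the past, present, AND future. Module names and layer
# sizes are illustrative placeholders, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Toy generator: maps an LR frame plus auxiliary features to an SR frame."""
    def __init__(self, in_ch, scale=4):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
        )
    def forward(self, x):
        return F.pixel_shuffle(self.body(x), self.scale)

class OmniscientVSR(nn.Module):
    """Two-pass scheme: a precursor pass produces preliminary SR frames, then
    a successor pass refines each frame with its past, present and future
    preliminary outputs (downscaled back to LR size as extra features)."""
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.precursor = TinyGenerator(in_ch=3, scale=scale)
        self.successor = TinyGenerator(in_ch=3 + 3 * 3, scale=scale)

    def forward(self, lr_seq):                        # lr_seq: (B, T, 3, H, W)
        B, T, C, H, W = lr_seq.shape
        # Pass 1: preliminary SR for every frame (no temporal context here).
        prelim = [self.precursor(lr_seq[:, t]) for t in range(T)]
        outputs = []
        for t in range(T):
            # Gather past / present / future preliminary SR outputs.
            ctx = [prelim[max(t - 1, 0)], prelim[t], prelim[min(t + 1, T - 1)]]
            # Downscale them to LR resolution so they can be concatenated.
            ctx = [F.interpolate(c, size=(H, W), mode="bilinear",
                                 align_corners=False) for c in ctx]
            feat = torch.cat([lr_seq[:, t]] + ctx, dim=1)
            # Refined SR = preliminary estimate + successor's correction.
            outputs.append(prelim[t] + self.successor(feat))
        return torch.stack(outputs, dim=1)            # (B, T, 3, sH, sW)

if __name__ == "__main__":
    model = OmniscientVSR(scale=4)
    sr = model(torch.randn(1, 5, 3, 32, 32))
    print(sr.shape)   # torch.Size([1, 5, 3, 128, 128])
```

Because every frame can draw on neighbors in both temporal directions, the iterative (sliding-window), recurrent (past-only), and hybrid schemes fall out as special cases of which context frames are allowed.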
Related papers
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
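As a rough illustration of the frame-by-frame recurrent fusion described in the RBSR summary above, the toy sketch below keeps a single hidden state and folds each burst frame into it before decoding an HR image; all module names and sizes are placeholders, not RBSR's actual design.

```python
# Hedged sketch of recurrent frame-by-frame burst fusion; everything here
# is an illustrative stand-in, not the RBSR architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentBurstFusion(nn.Module):
    def __init__(self, scale=4, feat=32):
        super().__init__()
        self.encode = nn.Conv2d(3, feat, 3, padding=1)
        self.fuse = nn.Conv2d(feat * 2, feat, 3, padding=1)   # hidden + current frame
        self.decode = nn.Conv2d(feat, 3 * scale * scale, 3, padding=1)
        self.scale = scale

    def forward(self, burst):                     # burst: (B, N, 3, H, W)
        B, N, _, H, W = burst.shape
        hidden = torch.zeros(B, self.encode.out_channels, H, W, device=burst.device)
        for i in range(N):                        # fuse cues one frame at a time
            f = F.relu(self.encode(burst[:, i]))
            hidden = F.relu(self.fuse(torch.cat([hidden, f], dim=1)))
        return F.pixel_shuffle(self.decode(hidden), self.scale)

out = RecurrentBurstFusion()(torch.randn(2, 8, 3, 24, 24))
print(out.shape)                                  # torch.Size([2, 3, 96, 96])
```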
- The Best of Both Worlds: a Framework for Combining Degradation Prediction with High Performance Super-Resolution Networks [14.804000317612305]
We present a framework for combining a blind SR degradation-prediction mechanism with any deep SR network.
We show that our hybrid models consistently achieve stronger SR performance than both their non-blind and blind counterparts.
arXiv Detail & Related papers (2022-11-09T16:49:35Z)
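A minimal sketch of the combination the summary above describes: a predictor estimates a degradation embedding from the LR image alone, and an SR backbone consumes it as extra conditioning. The concatenation-based conditioning and layer sizes are assumptions for illustration, not the paper's exact design.

```python
# Sketch of pairing a degradation predictor with an arbitrary SR backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DegradationPredictor(nn.Module):
    """Predicts a per-image degradation embedding from the LR input alone."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, dim))
    def forward(self, lr):
        return self.net(lr)                                   # (B, dim)

class ConditionedSR(nn.Module):
    """An SR backbone that accepts the degradation embedding as extra input."""
    def __init__(self, dim=8, scale=4):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(nn.Conv2d(3 + dim, 64, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(64, 3 * scale * scale, 3, padding=1))
    def forward(self, lr, deg):
        B, _, H, W = lr.shape
        deg_map = deg.view(B, -1, 1, 1).expand(-1, -1, H, W)  # broadcast embedding
        return F.pixel_shuffle(self.body(torch.cat([lr, deg_map], 1)), self.scale)

lr = torch.randn(1, 3, 32, 32)
sr = ConditionedSR()(lr, DegradationPredictor()(lr))
print(sr.shape)                                               # torch.Size([1, 3, 128, 128])
```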
- RRSR: Reciprocal Reference-based Image Super-Resolution with Progressive Feature Alignment and Selection [66.08293086254851]
We propose a reciprocal learning framework to reinforce the learning of a RefSR network.
The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection.
We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm.
arXiv Detail & Related papers (2022-11-08T12:39:35Z)
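The "reference-aware feature selection" mentioned above can be pictured, at a single feature scale, as gating reference features by their similarity to the input features. The cosine-similarity gating below is a generic stand-in assumed for illustration, not the paper's module.

```python
# Generic RefSR-style selection: fuse reference features only where they
# agree with the input features. Illustrative assumption, not RRSR's design.
import torch
import torch.nn.functional as F

def reference_aware_selection(inp_feat, ref_feat):
    """inp_feat, ref_feat: (B, C, H, W) features at one scale (ref already aligned)."""
    sim = F.cosine_similarity(inp_feat, ref_feat, dim=1, eps=1e-6)  # (B, H, W)
    gate = torch.sigmoid(sim).unsqueeze(1)        # confidence in the reference
    return inp_feat + gate * ref_feat             # fuse only where ref is relevant

fused = reference_aware_selection(torch.randn(1, 64, 32, 32),
                                  torch.randn(1, 64, 32, 32))
print(fused.shape)                                # torch.Size([1, 64, 32, 32])
```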
- Context-Aware Video Reconstruction for Rolling Shutter Cameras [52.28710992548282]
In this paper, we propose a context-aware GS video reconstruction architecture.
We first estimate the bilateral motion field so that the pixels of the two RS frames are warped to a common GS frame.
Then, a refinement scheme is proposed to guide the GS frame synthesis along with bilateral occlusion masks to produce high-fidelity GS video frames.
arXiv Detail & Related papers (2022-05-25T17:05:47Z)
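The warping step the summary above mentions, moving rolling-shutter pixels toward a common global-shutter frame under a dense motion field, can be sketched with grid_sample. The motion field here is a placeholder; in the paper it is the estimated bilateral field.

```python
# Sketch of dense-flow warping; the zero flow used below is a placeholder.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """frame: (B, 3, H, W); flow: (B, 2, H, W) displacement in pixels (x, y)."""
    B, _, H, W = frame.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)      # (1, 2, H, W)
    coords = base + flow                                          # sampling positions
    # Normalize to [-1, 1] for grid_sample (x first, then y).
    coords[:, 0] = 2 * coords[:, 0] / (W - 1) - 1
    coords[:, 1] = 2 * coords[:, 1] / (H - 1) - 1
    grid = coords.permute(0, 2, 3, 1)                             # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

gs = warp(torch.randn(1, 3, 64, 64), torch.zeros(1, 2, 64, 64))
print(gs.shape)                                                   # torch.Size([1, 3, 64, 64])
```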
- Temporal Abstraction in Reinforcement Learning with the Successor Representation [65.69658154078007]
We argue that the successor representation (SR) can be seen as a natural substrate for the discovery and use of temporal abstractions.
We show how the SR can be used to discover options that facilitate either temporally-extended exploration or planning.
arXiv Detail & Related papers (2021-10-12T05:07:43Z)
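For a fixed policy with transition matrix P, the successor representation discussed above has the closed form Psi = (I - gamma*P)^(-1). One common route from the SR to temporal abstractions, assumed here purely for illustration, is to inspect its spectral structure, as in the short NumPy sketch below.

```python
# Closed-form successor representation and a spectral look at it.
# The "option seed" interpretation is an illustrative assumption.
import numpy as np

def successor_representation(P, gamma=0.95):
    """P: (S, S) row-stochastic transition matrix under a fixed policy."""
    S = P.shape[0]
    return np.linalg.inv(np.eye(S) - gamma * P)

# Toy 4-state ring MDP under a uniform-random policy.
P = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
psi = successor_representation(P)
eigvals, eigvecs = np.linalg.eigh((psi + psi.T) / 2)   # symmetrize for a real spectrum
print(psi.round(2))
print(eigvecs[:, -2])   # a slowly-varying direction that could seed an option
```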
- RefSum: Refactoring Neural Summarization [16.148781118509255]
We present a new framework, Refactor, that provides a unified view of text summarization and summary combination.
Our system can be directly used by other researchers as an off-the-shelf tool to achieve further performance improvements.
arXiv Detail & Related papers (2021-04-15T02:58:41Z)
- DynaVSR: Dynamic Adaptive Blind Video Super-Resolution [60.154204107453914]
DynaVSR is a novel meta-learning-based framework for real-world video SR.
We train a multi-frame downscaling module with various types of synthetic blur kernels, which is seamlessly combined with a video SR network for input-aware adaptation.
Experimental results show that DynaVSR consistently improves the performance of the state-of-the-art video SR models by a large margin.
arXiv Detail & Related papers (2020-11-09T15:07:32Z)
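The input-aware adaptation described in the DynaVSR summary above can be sketched as a short test-time loop: a learned downscaling module produces a pseudo-LR version of the observed frames, the SR network super-resolves it, and the original LR frames serve as the self-supervised target. The toy networks and loop below are illustrative assumptions, not DynaVSR's meta-learned initialization or architecture.

```python
# Hedged sketch of input-aware test-time adaptation with a learned downscaler.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.net = nn.Conv2d(3, 3 * scale * scale, 3, padding=1)
    def forward(self, x):
        return F.pixel_shuffle(self.net(x), self.scale)

class LearnedDownscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.blur = nn.Conv2d(3, 3, 5, padding=2, groups=3)   # learnable blur kernel
    def forward(self, x):
        return F.avg_pool2d(self.blur(x), self.scale)

def adapt(sr_net, down_net, lr_frames, steps=3, lr=1e-4):
    """lr_frames: (B, 3, H, W) observed low-resolution input."""
    opt = torch.optim.Adam(list(sr_net.parameters()) + list(down_net.parameters()), lr=lr)
    for _ in range(steps):
        son = down_net(lr_frames)                 # pseudo-LR version of the input
        loss = F.l1_loss(sr_net(son), lr_frames)  # self-supervised reconstruction loss
        opt.zero_grad(); loss.backward(); opt.step()
    return sr_net

adapted = adapt(TinySR(), LearnedDownscaler(), torch.randn(1, 3, 32, 32))
print(adapted(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 3, 64, 64])
```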
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.