Look Back and Forth: Video Super-Resolution with Explicit Temporal
Difference Modeling
- URL: http://arxiv.org/abs/2204.07114v1
- Date: Thu, 14 Apr 2022 17:07:33 GMT
- Title: Look Back and Forth: Video Super-Resolution with Explicit Temporal
Difference Modeling
- Authors: Takashi Isobe and Xu Jia and Xin Tao and Changlin Li and Ruihuang Li
and Yongjie Shi and Jing Mu and Huchuan Lu and Yu-Wing Tai
- Abstract summary: We propose to explore the role of explicit temporal difference modeling in both LR and HR space.
To further enhance the super-resolution result, not only are spatial residual features extracted, but the difference between consecutive frames in the high-frequency domain is also computed.
- Score: 105.69197687940505
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Temporal modeling is crucial for video super-resolution. Most video
super-resolution methods adopt optical flow or deformable convolution for
explicit motion compensation. However, such temporal modeling techniques
increase model complexity and may fail under occlusion or complex motion,
resulting in serious distortion and artifacts. In this paper, we propose to
explore the role of explicit temporal difference modeling in both LR and HR
space. Instead of directly feeding consecutive frames into a VSR model, we
compute the temporal difference between frames and divide the pixels into two
subsets according to the level of difference. The two subsets are processed
separately by branches with different receptive fields in order to better
extract complementary information. To further enhance the super-resolution
result, not only are spatial residual features extracted, but the difference
between consecutive frames in the high-frequency domain is also computed. This
allows the model to exploit intermediate SR results from both the future and
the past to refine the current SR output. The differences at different time
steps can be cached so that information from frames farther away in time can
be propagated to the current frame for refinement. Experiments on several
video super-resolution benchmark datasets demonstrate the effectiveness of the
proposed method and its favorable performance against state-of-the-art
methods.
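
To make the LR-space idea concrete, here is a minimal PyTorch sketch of temporal difference splitting: the per-pixel difference between consecutive frames is thresholded into low-variance and high-variance subsets, and each subset is processed by a branch with a different receptive field. The module name, the fixed threshold, and the branch designs are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of LR-space temporal difference splitting, assuming a
# fixed threshold and simple conv branches (not the paper's actual design).
import torch
import torch.nn as nn

class TemporalDifferenceSplit(nn.Module):
    """Split pixels of a frame pair into low- and high-variance subsets
    by the magnitude of their temporal difference, then process each
    subset with a branch of a different receptive field."""

    def __init__(self, channels: int = 64, threshold: float = 0.1):
        super().__init__()
        self.threshold = threshold  # assumed value; could be tuned or learned
        # Small receptive field for nearly static (low-variance) regions.
        self.lv_branch = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Larger receptive field (dilated convs) for fast-changing regions.
        self.hv_branch = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=2, dilation=2),
        )

    def forward(self, prev: torch.Tensor, curr: torch.Tensor) -> torch.Tensor:
        # Per-pixel temporal difference, averaged over color channels.
        diff = (curr - prev).abs().mean(dim=1, keepdim=True)  # (B, 1, H, W)
        hv_mask = (diff > self.threshold).float()
        lv_mask = 1.0 - hv_mask
        # Route each pixel subset to the branch suited to its motion level.
        lv_feat = self.lv_branch(curr * lv_mask)
        hv_feat = self.hv_branch(curr * hv_mask)
        return lv_feat + hv_feat

# Usage: features for frame t given its predecessor.
# x_prev, x_curr = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
# feats = TemporalDifferenceSplit()(x_prev, x_curr)
```

Summing the two branch outputs is one simple way to fuse the complementary features; the paper's actual fusion and branch architectures may differ.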
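The HR-space refinement can be sketched similarly: differences between consecutive intermediate SR results are computed in the high-frequency domain (here approximated with a simple high-pass filter) and used to refine the current output, looking both back and forth in time. All names and the fusion layer below are assumptions for illustration.

```python
# Minimal sketch of HR-space high-frequency difference refinement. The
# high-pass filter and fusion conv are assumed choices, not the authors'.
import torch
import torch.nn as nn
import torch.nn.functional as F

def high_pass(x: torch.Tensor, kernel_size: int = 5) -> torch.Tensor:
    """Remove the low-frequency component with a simple box blur."""
    blur = F.avg_pool2d(x, kernel_size, stride=1, padding=kernel_size // 2)
    return x - blur

class HighFreqRefine(nn.Module):
    """Refine the current SR estimate with high-frequency temporal
    differences from both past and future intermediate SR results."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # 3 RGB planes for the current estimate + two 3-channel HF differences.
        self.fuse = nn.Sequential(
            nn.Conv2d(9, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, sr_prev, sr_curr, sr_next):
        # "Look back and forth": HF differences toward past and future.
        d_back = high_pass(sr_curr - sr_prev)
        d_forth = high_pass(sr_next - sr_curr)
        # These difference maps could also be cached and propagated so that
        # frames farther away in time contribute to the refinement.
        residual = self.fuse(torch.cat([sr_curr, d_back, d_forth], dim=1))
        return sr_curr + residual
```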
Related papers
- Group-based Bi-Directional Recurrent Wavelet Neural Networks for Video
Super-Resolution [4.9136996406481135]
Video super-resolution (VSR) aims to estimate a high-resolution (HR) frame from low-resolution (LR) frames.
The key challenge for VSR lies in effectively exploiting spatial correlation within a frame and temporal dependency between consecutive frames.
arXiv Detail & Related papers (2021-06-14T06:36:13Z)
- Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video Super-Resolution [100.11355888909102]
Space-time video super-resolution aims at generating a high-resolution (HR) slow-motion video from a low-resolution (LR) and low frame rate (LFR) video sequence.
We present a one-stage space-time video super-resolution framework, which can directly reconstruct an HR slow-motion video sequence from an input LR and LFR video.
arXiv Detail & Related papers (2021-04-15T17:59:23Z)
- MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution [63.02785017714131]
Video super-resolution (VSR) aims to utilize multiple low-resolution frames to generate a high-resolution prediction for each frame.
Inter- and intra-frame correlations are the key sources of temporal and spatial information.
We build an effective multi-correspondence aggregation network (MuCAN) for VSR.
arXiv Detail & Related papers (2020-07-23T05:41:27Z)
- All at Once: Temporally Adaptive Multi-Frame Interpolation with Advanced Motion Modeling [52.425236515695914]
State-of-the-art methods are iterative solutions that interpolate one frame at a time.
This work introduces a true multi-frame interpolator.
It utilizes a pyramid-style network in the temporal domain to complete the multi-frame task in one shot.
arXiv Detail & Related papers (2020-07-23T02:34:39Z)
- Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution [95.26202278535543]
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR).
However, temporal interpolation and spatial super-resolution are intra-related in this task.
We propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video.
arXiv Detail & Related papers (2020-02-26T16:59:48Z)
- Video Face Super-Resolution with Motion-Adaptive Feedback Cell [90.73821618795512]
Video super-resolution (VSR) methods have recently achieved remarkable success due to the development of deep convolutional neural networks (CNNs).
In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block, which can efficiently capture motion compensation information and feed it back to the network in an adaptive way.
arXiv Detail & Related papers (2020-02-15T13:14:10Z)