Video Face Super-Resolution with Motion-Adaptive Feedback Cell
- URL: http://arxiv.org/abs/2002.06378v1
- Date: Sat, 15 Feb 2020 13:14:10 GMT
- Title: Video Face Super-Resolution with Motion-Adaptive Feedback Cell
- Authors: Jingwei Xin, Nannan Wang, Jie Li, Xinbo Gao, Zhifeng Li
- Abstract summary: Video super-resolution (VSR) methods have recently achieved remarkable success due to the development of deep convolutional neural networks (CNNs).
In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block, which can efficiently capture the motion compensation and feed it back to the network in an adaptive way.
- Score: 90.73821618795512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video super-resolution (VSR) methods have recently achieved
remarkable success due to the development of deep convolutional neural
networks (CNNs). Current state-of-the-art CNN methods usually treat the VSR
problem as a large number of separate multi-frame super-resolution tasks, in
which a batch of low resolution (LR) frames is used to generate a single high
resolution (HR) frame, and a sliding window is run over the entire video to
select LR frames and obtain a series of HR frames. However, due to the
complex temporal dependency between frames, the quality of the reconstructed
HR frames degrades as the number of LR input frames increases. The reason is
that these methods lack the ability to model complex temporal dependencies
and struggle to provide accurate motion estimation and compensation for the
VSR process, which makes performance degrade drastically when the motion
between frames is complex. In this paper, we propose the Motion-Adaptive
Feedback Cell (MAFC), a simple but effective block that can efficiently
capture motion compensation and feed it back to the network in an adaptive
way. Our approach efficiently exploits inter-frame motion information, so the
network's dependence on an explicit motion estimation and compensation method
can be avoided. In addition, benefiting from this property of MAFC, the
network achieves better performance in extremely complex motion scenarios.
Extensive evaluations and comparisons validate the strengths of our approach,
and the experimental results demonstrate that the proposed framework
outperforms state-of-the-art methods.
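A minimal sketch of the two ideas the abstract describes, not the paper's implementation: (1) the sliding-window selection of LR input frames for each target HR frame, and (2) a toy motion-adaptive feedback signal in the spirit of MAFC, where inter-frame differences are squashed into per-pixel modulation weights. All function names and the sigmoid weighting are hypothetical illustrations.

```python
import math

def sliding_windows(num_frames, window=5):
    """For each target frame t, yield the indices of the LR frames that
    would be fed to a multi-frame SR network, clamping at the video
    boundaries (a common padding choice; the paper may differ)."""
    half = window // 2
    for t in range(num_frames):
        yield [min(max(i, 0), num_frames - 1)
               for i in range(t - half, t + half + 1)]

def motion_feedback(prev_frame, curr_frame):
    """Toy motion cue: the absolute inter-frame difference squashed to
    (0, 1) with a sigmoid, so regions with larger motion get larger
    feedback weights. Frames are 2D lists of grayscale intensities."""
    return [[1.0 / (1.0 + math.exp(-abs(c - p)))
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, curr_frame)]
```

A real MAFC operates on learned feature maps inside the network rather than raw pixels, but the same principle applies: the feedback is derived adaptively from frame-to-frame change instead of an explicit motion estimation module.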
Related papers
- Motion-Guided Latent Diffusion for Temporally Consistent Real-world Video Super-resolution [15.197746480157651]
We propose an effective real-world VSR algorithm by leveraging the strength of pre-trained latent diffusion models.
We exploit the temporal dynamics in LR videos to guide the diffusion process by optimizing the latent sampling path with a motion-guided loss.
The proposed motion-guided latent diffusion based VSR algorithm achieves significantly better perceptual quality than state-of-the-art methods on real-world VSR benchmark datasets.
arXiv Detail & Related papers (2023-12-01T14:40:07Z)
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
- Look Back and Forth: Video Super-Resolution with Explicit Temporal Difference Modeling [105.69197687940505]
We propose to explore the role of explicit temporal difference modeling in both LR and HR space.
To further enhance the super-resolution result, not only spatial residual features are extracted, but the difference between consecutive frames in high-frequency domain is also computed.
arXiv Detail & Related papers (2022-04-14T17:07:33Z)
- Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video Super-Resolution [100.11355888909102]
Space-time video super-resolution aims at generating a high-resolution (HR) slow-motion video from a low-resolution (LR) and low frame rate (LFR) video sequence.
We present a one-stage space-time video super-resolution framework, which can directly reconstruct an HR slow-motion video sequence from an input LR and LFR video.
arXiv Detail & Related papers (2021-04-15T17:59:23Z)
- MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution [63.02785017714131]
Video super-resolution (VSR) aims to utilize multiple low-resolution frames to generate a high-resolution prediction for each frame.
Inter- and intra-frames are the key sources for exploiting temporal and spatial information.
We build an effective multi-correspondence aggregation network (MuCAN) for VSR.
arXiv Detail & Related papers (2020-07-23T05:41:27Z)
- Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution [95.26202278535543]
A simple solution is to split it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR).
However, temporal synthesis and spatial super-resolution are inter-related in this task.
We propose a one-stage space-time video super-resolution framework, which directly synthesizes an HR slow-motion video from an LFR, LR video.
arXiv Detail & Related papers (2020-02-26T16:59:48Z)