Blind Motion Deblurring Super-Resolution: When Dynamic Spatio-Temporal
Learning Meets Static Image Understanding
- URL: http://arxiv.org/abs/2105.13077v1
- Date: Thu, 27 May 2021 11:52:45 GMT
- Title: Blind Motion Deblurring Super-Resolution: When Dynamic Spatio-Temporal
Learning Meets Static Image Understanding
- Authors: Wenjia Niu, Kaihao Zhang, Wenhan Luo, Yiran Zhong, Xin Yu, Hongdong Li
- Abstract summary: Single-image super-resolution (SR) and multi-frame SR are two ways to super-resolve low-resolution images.
A Blind Motion Deblurring Super-Resolution Network (BMDSRNet) is proposed to learn dynamic spatio-temporal information from single static motion-blurred images.
- Score: 87.5799910153545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single-image super-resolution (SR) and multi-frame SR are two ways
to super-resolve low-resolution images. Single-image SR generally handles each
image independently, ignoring the temporal information implied in consecutive
frames. Multi-frame SR is able to model the temporal dependency via capturing
motion information. However, it relies on neighbouring frames which are not
always available in the real world. Meanwhile, slight camera shake easily
causes heavy motion blur on long-distance-shot low-resolution images. To
address these problems, a Blind Motion Deblurring Super-Resolution Network,
BMDSRNet, is proposed to learn dynamic spatio-temporal information from single
static motion-blurred images. Motion-blurred images are the accumulation over
time during the exposure of cameras, while the proposed BMDSRNet learns the
reverse process and uses three streams to learn bidirectional spatio-temporal
information based on well-designed reconstruction loss functions to recover
clean high-resolution images. Extensive experiments demonstrate that the
proposed BMDSRNet outperforms recent state-of-the-art methods, and has the
ability to simultaneously deal with image deblurring and SR.
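The forward process the abstract describes (a motion-blurred image as the time-average of the latent sharp frames captured during the exposure window; BMDSRNet learns the reverse) can be sketched as a toy in NumPy. This is an illustrative model only, not the paper's implementation, and `synthesize_motion_blur` is a hypothetical helper:

```python
import numpy as np

def synthesize_motion_blur(sharp_frames):
    """Motion blur as temporal accumulation: the blurry image is the
    average of the latent sharp frames over the exposure window."""
    # sharp_frames: (T, H, W) stack of latent sharp frames
    return np.mean(np.asarray(sharp_frames, dtype=np.float64), axis=0)

# Toy example: a bright dot moving one pixel per time step during exposure
frames = np.zeros((3, 1, 5))
for t in range(3):
    frames[t, 0, t] = 1.0          # dot sits at column t in frame t
blurry = synthesize_motion_blur(frames)
# The dot's energy is smeared evenly across the columns it visited
```

A deblurring network trained on such pairs inverts this accumulation, which is why a single blurry image still carries recoverable dynamic information.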
Related papers
- TV-based Deep 3D Self Super-Resolution for fMRI [41.08227909855111]
We introduce a novel self-supervised DL SR model that combines a DL network with an analytical approach and Total Variation (TV) regularization.
Our method eliminates the need for external GT images, achieving competitive performance compared to supervised DL techniques and preserving the functional maps.
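For readers unfamiliar with the regularizer mentioned above, a minimal anisotropic total-variation penalty is sketched below. It is a generic illustration of TV (small for smooth images, large for noisy ones), not the paper's loss, and `tv_penalty` is a hypothetical helper:

```python
import numpy as np

def tv_penalty(img):
    """Anisotropic total-variation penalty: sum of absolute differences
    between neighbouring pixels, along both image axes."""
    dy = np.abs(np.diff(img, axis=0)).sum()   # vertical gradients
    dx = np.abs(np.diff(img, axis=1)).sum()   # horizontal gradients
    return dy + dx

flat = np.ones((4, 4))                 # perfectly smooth image
noisy = np.zeros((4, 4))
noisy[::2, ::2] = 1.0                  # isolated bright pixels
# flat incurs zero penalty; the noisy image incurs a strictly larger one
```

Adding such a term to a self-supervised reconstruction objective biases the network toward smooth solutions without requiring external ground-truth images.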
arXiv Detail & Related papers (2024-10-05T09:35:15Z)
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
- Recovering Continuous Scene Dynamics from A Single Blurry Image with Events [58.7185835546638]
An Implicit Video Function (IVF) is learned to represent a single motion-blurred image with concurrent events.
A dual attention transformer is proposed to efficiently leverage merits from both modalities.
The proposed network is trained only with the supervision of ground-truth images of limited referenced timestamps.
arXiv Detail & Related papers (2023-04-05T18:44:17Z)
- Rolling Shutter Inversion: Bring Rolling Shutter Images to High Framerate Global Shutter Video [111.08121952640766]
This paper presents a novel deep-learning-based solution to the RS temporal super-resolution problem.
By leveraging the multi-view geometry relationship of the RS imaging process, our framework successfully achieves high framerate GS generation.
Our method can produce high-quality GS image sequences with rich details, outperforming the state-of-the-art methods.
arXiv Detail & Related papers (2022-10-06T16:47:12Z)
- EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning [75.17497166510083]
Event cameras sense intensity changes and have many advantages over conventional cameras.
Some methods have been proposed to reconstruct intensity images from event streams, but the outputs are still low-resolution (LR), noisy, and unrealistic.
We propose a novel end-to-end pipeline that reconstructs LR images from event streams, enhances the image qualities and upsamples the enhanced images, called EventSR.
arXiv Detail & Related papers (2020-03-17T10:58:10Z)
- Video Face Super-Resolution with Motion-Adaptive Feedback Cell [90.73821618795512]
Video super-resolution (VSR) methods have recently achieved remarkable success thanks to the development of deep convolutional neural networks (CNNs).
In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block, which can efficiently capture the motion compensation and feed it back to the network in an adaptive way.
arXiv Detail & Related papers (2020-02-15T13:14:10Z)
- Deep Video Super-Resolution using HR Optical Flow Estimation [42.86066957681113]
Video super-resolution (SR) aims at generating a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts.
Existing deep learning based methods commonly estimate optical flows between LR frames to provide temporal dependency.
We propose an end-to-end video SR network to super-resolve both optical flows and images.
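The flow-based temporal alignment these methods rely on can be illustrated with a minimal nearest-neighbour backward warp. This is a didactic sketch only (real VSR networks use differentiable sub-pixel warping), and `warp_nearest` is a hypothetical helper:

```python
import numpy as np

def warp_nearest(frame, flow):
    """Backward-warp `frame` with a dense flow field using nearest-neighbour
    sampling: out[y, x] = frame[y + flow_y, x + flow_x], clamped at borders.
    flow[..., 0] is the y-offset, flow[..., 1] the x-offset."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

# Sampling one pixel to the right shifts the content one pixel to the left,
# replicating the border value where the source falls outside the frame.
frame = np.array([[0., 1., 2., 3.]])
flow = np.zeros((1, 4, 2))
flow[..., 1] = 1.0
shifted = warp_nearest(frame, flow)
```

Warping a neighbouring LR frame onto the reference frame in this way is what lets multi-frame methods aggregate temporally consistent detail before upsampling.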
arXiv Detail & Related papers (2020-01-06T07:25:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.