Blur More To Deblur Better: Multi-Blur2Deblur For Efficient Video
Deblurring
- URL: http://arxiv.org/abs/2012.12507v1
- Date: Wed, 23 Dec 2020 06:17:31 GMT
- Title: Blur More To Deblur Better: Multi-Blur2Deblur For Efficient Video
Deblurring
- Authors: Dongwon Park, Dong Un Kang, Se Young Chun
- Abstract summary: Multi-blur-to-deblur (MB2D) is a novel concept to exploit neighboring frames for efficient video deblurring.
We propose a multi-blurring recurrent neural network (MBRNN) that can synthesize more blurred images from neighboring frames.
- Score: 23.874492023331698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the key components of video deblurring is how to exploit neighboring
frames. Recent state-of-the-art methods either align adjacent frames to the
center frame or recurrently propagate information from past frames to the
current frame. Here we propose multi-blur-to-deblur (MB2D), a novel concept
for exploiting neighboring frames for efficient video deblurring. Firstly, inspired
by unsharp masking, we argue that using more blurred images with long exposures
as additional inputs significantly improves performance. Secondly, we propose a
multi-blurring recurrent neural network (MBRNN) that can synthesize more
blurred images from neighboring frames, yielding substantially improved
performance with existing video deblurring methods. Lastly, we propose
multi-scale deblurring with connected recurrent feature maps from MBRNN (MSDR)
to achieve state-of-the-art performance on the popular GoPro and Su datasets in
a fast and memory-efficient way.
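The abstract's two key ideas, simulating a longer exposure (a "more blurred" image) from neighboring frames, and the unsharp-masking intuition of a sharp/blurred residual, can be sketched roughly as follows. This is a minimal illustration only: the simple frame-averaging blur model and the function names are assumptions for the sketch, not the paper's actual MBRNN or MSDR.

```python
import numpy as np

def synthesize_longer_exposure(frames):
    """Approximate a longer-exposure (more blurred) image by averaging
    consecutive short-exposure frames. This is the standard blur-synthesis
    model used to build datasets such as GoPro from high-frame-rate video;
    MB2D's premise is that such a more-blurred image is a useful extra input."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)

def unsharp_mask(image, blurred, amount=1.0):
    """Classical unsharp masking: sharpen by adding back the detail
    residual (image - blurred). MB2D draws its inspiration from this
    sharp-vs-blurred residual relationship."""
    image = np.asarray(image, dtype=np.float64)
    blurred = np.asarray(blurred, dtype=np.float64)
    return image + amount * (image - blurred)
```

For example, averaging three constant frames of intensity 0, 10, and 20 yields a synthetic long exposure of intensity 10, and unsharp masking a pixel of value 10 against a blurred value of 8 (with `amount=1.0`) boosts it to 12.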
Related papers
- DaBiT: Depth and Blur informed Transformer for Joint Refocusing and Super-Resolution [4.332534893042983]
In many real-world scenarios, recorded videos suffer from accidental focus blur.
This paper introduces a framework optimised for focal deblurring (refocusing) and video super-resolution (VSR).
We achieve state-of-the-art results, with an average PSNR over 1.9 dB higher than comparable existing video restoration methods.
arXiv Detail & Related papers (2024-07-01T12:22:16Z)
- Aggregating Long-term Sharp Features via Hybrid Transformers for Video Deblurring [76.54162653678871]
We propose a video deblurring method that leverages both neighboring frames and sharp frames present in the video, using hybrid Transformers for feature aggregation.
Our proposed method outperforms state-of-the-art video deblurring methods as well as event-driven video deblurring methods in terms of quantitative metrics and visual quality.
arXiv Detail & Related papers (2023-09-13T16:12:11Z)
- ReBotNet: Fast Real-time Video Enhancement [59.08038313427057]
Most restoration networks are slow, have a high computational cost, and cannot be used for real-time video enhancement.
In this work, we design an efficient and fast framework to perform real-time enhancement for practical use-cases like live video calls and video streams.
To evaluate our method, we curate two new datasets that emulate real-world video call and streaming scenarios, and show extensive results on multiple datasets, where ReBotNet outperforms existing approaches with lower computation, reduced memory requirements, and faster inference time.
arXiv Detail & Related papers (2023-03-23T17:58:05Z)
- Event-guided Multi-patch Network with Self-supervision for Non-uniform Motion Deblurring [113.96237446327795]
We present a novel self-supervised event-guided deep hierarchical Multi-patch Network to deal with blurry images and videos.
We also propose an event-guided architecture to exploit motion cues contained in videos to tackle complex blur in videos.
Our MPN achieves state-of-the-art results on the GoPro and VideoDeblurring datasets with a 40x faster runtime compared to current multi-scale methods.
arXiv Detail & Related papers (2023-02-14T15:58:00Z)
- Efficient Video Deblurring Guided by Motion Magnitude [37.25713728458234]
We propose a novel framework that utilizes the motion magnitude prior (MMP) as guidance for efficient deep video deblurring.
The MMP consists of both spatial and temporal blur level information, which can be further integrated into an efficient recurrent neural network (RNN) for video deblurring.
arXiv Detail & Related papers (2022-07-27T08:57:48Z)
- Memory-Augmented Non-Local Attention for Video Super-Resolution [61.55700315062226]
We propose a novel video super-resolution method that aims at generating high-fidelity high-resolution (HR) videos from low-resolution (LR) ones.
Previous methods predominantly leverage temporal neighbor frames to assist the super-resolution of the current frame.
In contrast, we devise a cross-frame non-local attention mechanism that allows video super-resolution without frame alignment.
arXiv Detail & Related papers (2021-08-25T05:12:14Z)
- Recurrent Video Deblurring with Blur-Invariant Motion Estimation and Pixel Volumes [14.384467317051831]
We propose two novel approaches to deblurring videos by effectively aggregating information from multiple video frames.
First, we present blur-invariant motion estimation learning to improve motion estimation accuracy between blurry frames.
Second, for motion compensation, instead of aligning frames by warping with estimated motions, we use a pixel volume that contains candidate sharp pixels to resolve motion estimation errors.
arXiv Detail & Related papers (2021-08-23T07:36:49Z)
- No frame left behind: Full Video Action Recognition [26.37329995193377]
We propose full video action recognition, which considers all video frames rather than a sampled subset.
We first cluster all frame activations along the temporal dimension.
We then temporally aggregate the frames in the clusters into a smaller number of representations.
arXiv Detail & Related papers (2021-03-29T07:44:28Z)
- ARVo: Learning All-Range Volumetric Correspondence for Video Deblurring [92.40655035360729]
Video deblurring models exploit consecutive frames to remove blurs from camera shakes and object motions.
We propose a novel implicit method to learn spatial correspondence among blurry frames in the feature space.
Our proposed method is evaluated on the widely-adopted DVD dataset, along with a newly collected High-Frame-Rate (1000 fps) dataset for Video Deblurring.
arXiv Detail & Related papers (2021-03-07T04:33:13Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including the list above) and is not responsible for any consequences arising from its use.