Efficient Video Deblurring Guided by Motion Magnitude
- URL: http://arxiv.org/abs/2207.13374v1
- Date: Wed, 27 Jul 2022 08:57:48 GMT
- Title: Efficient Video Deblurring Guided by Motion Magnitude
- Authors: Yusheng Wang and Yunfan Lu and Ye Gao and Lin Wang and Zhihang Zhong
and Yinqiang Zheng and Atsushi Yamashita
- Abstract summary: We propose a novel framework that utilizes the motion magnitude prior (MMP) as guidance for efficient deep video deblurring.
The MMP consists of both spatial and temporal blur level information, which can be further integrated into an efficient recurrent neural network (RNN) for video deblurring.
- Score: 37.25713728458234
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video deblurring is a highly under-constrained problem due to the spatially
and temporally varying blur. An intuitive approach for video deblurring
includes two steps: a) detecting the blurry region in the current frame; b)
utilizing the information from clear regions in adjacent frames for current
frame deblurring. To realize this process, our idea is to detect the pixel-wise
blur level of each frame and combine it with video deblurring. To this end, we
propose a novel framework that utilizes the motion magnitude prior (MMP) as
guidance for efficient deep video deblurring. Specifically, as the pixel
movement along its trajectory during the exposure time is positively correlated
with the level of motion blur, we first use the average magnitude of optical flow
from the high-frequency sharp frames to generate the synthetic blurry frames
and their corresponding pixel-wise motion magnitude maps. We then build a
dataset including the blurry frame and MMP pairs. The MMP is then learned by a
compact CNN by regression. The MMP consists of both spatial and temporal blur
level information, which can be further integrated into an efficient recurrent
neural network (RNN) for video deblurring. We conduct extensive experiments to
validate the effectiveness of the proposed method on public datasets.
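To make the data-generation step above concrete, here is a minimal sketch of how a pixel-wise motion magnitude map and its paired synthetic blurry frame could be produced from a burst of high-frame-rate sharp frames. It assumes OpenCV's Farneback estimator for dense optical flow and simple frame averaging as the blur-synthesis recipe; the paper's exact flow estimator and synthesis pipeline are not specified here.

```python
import cv2
import numpy as np

def motion_magnitude_map(sharp_frames):
    """Average per-pixel optical-flow magnitude over an exposure window.

    sharp_frames: list of uint8 grayscale frames sampled within the
    exposure time of one synthetic blurry frame.
    Returns a float32 map; larger values indicate stronger motion blur.
    """
    mags = []
    for prev, nxt in zip(sharp_frames[:-1], sharp_frames[1:]):
        # Dense optical flow between consecutive sharp frames.
        flow = cv2.calcOpticalFlowFarneback(
            prev, nxt, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        mags.append(np.linalg.norm(flow, axis=2))  # per-pixel |(u, v)|
    # The motion magnitude prior: mean flow magnitude along the trajectory.
    return np.mean(mags, axis=0).astype(np.float32)

def synthesize_blurry_frame(sharp_frames_color):
    """Approximate a blurry frame by averaging the sharp frames over the
    exposure window (a common synthetic-blur recipe for such datasets)."""
    stack = np.stack([f.astype(np.float32) for f in sharp_frames_color])
    return stack.mean(axis=0).astype(np.uint8)
```

Pairs of the averaged blurry frame and its motion magnitude map would then form the (blurry frame, MMP) training set described in the abstract.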
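The abstract further states that the MMP is regressed by a compact CNN and then fed to a recurrent deblurring network. Below is a minimal, hypothetical PyTorch sketch; the layer sizes, the concatenation-based guidance, and the L1 regression loss are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CompactMMPNet(nn.Module):
    """Tiny encoder-decoder that regresses a 1-channel motion magnitude
    map from a blurry RGB frame (illustrative layer sizes)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, blurry):          # (N, 3, H, W) -> (N, 1, H, W)
        return self.net(blurry)

# One assumed way to inject the prior: concatenate the predicted MMP with
# the frame before the recurrent deblurring cell, so the RNN can weight
# clear vs. blurry regions when propagating information across frames.
mmp_net = CompactMMPNet()
blurry = torch.randn(1, 3, 64, 64)
mmp = mmp_net(blurry)
guided_input = torch.cat([blurry, mmp], dim=1)  # (1, 4, 64, 64)
# Training would regress against the flow-derived MMP from the dataset:
loss = nn.functional.l1_loss(mmp, torch.zeros_like(mmp))  # placeholder target
```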
Related papers
- CMTA: Cross-Modal Temporal Alignment for Event-guided Video Deblurring [44.30048301161034]
Video deblurring aims to enhance the quality of restored results in motion-blurred videos by gathering information from adjacent video frames.
We propose two modules: 1) Intra-frame feature enhancement operates within the exposure time of a single blurred frame, and 2) Inter-frame temporal feature alignment gathers valuable long-range temporal information to target frames.
We demonstrate that our proposed methods outperform state-of-the-art frame-based and event-based motion deblurring methods through extensive experiments conducted on both synthetic and real-world deblurring datasets.
arXiv Detail & Related papers (2024-08-27T10:09:17Z) - Joint Video Multi-Frame Interpolation and Deblurring under Unknown
Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task - joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z) - A Constrained Deformable Convolutional Network for Efficient Single
Image Dynamic Scene Blind Deblurring with Spatially-Variant Motion Blur
Kernels Estimation [12.744989551644744]
We propose a novel constrained deformable convolutional network (CDCN) for efficient single image dynamic scene blind deblurring.
CDCN simultaneously achieves accurate spatially-variant motion blur kernel estimation and high-quality image restoration; a minimal deformable-convolution sketch appears after this list.
arXiv Detail & Related papers (2022-08-23T03:28:21Z) - Deep Recurrent Neural Network with Multi-scale Bi-directional
Propagation for Video Deblurring [36.94523101375519]
We propose a deep Recurrent Neural Network with Multi-scale Bi-directional Propagation (RNN-MBP) to propagate and gather information from unaligned neighboring frames for better video deblurring.
To better evaluate the proposed algorithm and existing state-of-the-art methods on real-world blurry scenes, we also create a Real-World Blurry Video dataset.
The proposed algorithm performs favorably against the state-of-the-art methods on three typical benchmarks.
arXiv Detail & Related papers (2021-12-09T11:02:56Z) - Recurrent Video Deblurring with Blur-Invariant Motion Estimation and
Pixel Volumes [14.384467317051831]
We propose two novel approaches to deblurring videos by effectively aggregating information from multiple video frames.
First, we present blur-invariant motion estimation learning to improve motion estimation accuracy between blurry frames.
Second, for motion compensation, instead of aligning frames by warping with estimated motions, we use a pixel volume that contains candidate sharp pixels to resolve motion estimation errors (see the pixel-volume sketch after this list).
arXiv Detail & Related papers (2021-08-23T07:36:49Z) - TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z) - ARVo: Learning All-Range Volumetric Correspondence for Video Deblurring [92.40655035360729]
Video deblurring models exploit consecutive frames to remove blurs from camera shakes and object motions.
We propose a novel implicit method to learn spatial correspondence among blurry frames in the feature space.
Our proposed method is evaluated on the widely-adopted DVD dataset, along with a newly collected High-Frame-Rate (1000 fps) dataset for Video Deblurring.
arXiv Detail & Related papers (2021-03-07T04:33:13Z) - Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z) - FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation.
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv Detail & Related papers (2020-12-15T18:59:30Z)
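Relating to the constrained deformable convolutional network (CDCN) entry above: the sketch below illustrates how predicted per-pixel offsets let a single convolution act as a spatially-variant kernel, using torchvision's deform_conv2d. The offset predictor and the tanh bound standing in for the paper's constraint are hypothetical, not the paper's actual design.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class SpatiallyVariantBlurConv(nn.Module):
    """Predicts per-pixel sampling offsets so a single 3x3 convolution
    behaves like a different (motion-adapted) kernel at every pixel."""
    def __init__(self, channels=16, k=3):
        super().__init__()
        self.k = k
        # 2 offset values (dy, dx) per kernel tap, per output location.
        self.offset_pred = nn.Conv2d(channels, 2 * k * k, 3, padding=1)
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.1)

    def forward(self, x):
        offset = self.offset_pred(x)            # (N, 2*k*k, H, W)
        # A soft "constraint": bound offsets so sampling stays near each tap.
        offset = torch.tanh(offset) * (self.k - 1)
        return deform_conv2d(x, offset, self.weight, padding=self.k // 2)

feat = torch.randn(1, 16, 32, 32)
out = SpatiallyVariantBlurConv()(feat)          # (1, 16, 32, 32)
```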
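Relating to the pixel-volume entry above: a hedged sketch of gathering a small window of candidate pixels around each flow-predicted location, instead of committing to a single warped pixel. The window radius, the (dx, dy) flow convention, and bilinear gathering are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pixel_volume(neighbor, flow, radius=1):
    """Gather a (2r+1)^2 stack of candidate pixels per location.

    neighbor: (N, C, H, W) adjacent frame.
    flow:     (N, 2, H, W) estimated motion (dx, dy) toward `neighbor`.
    Returns:  (N, C*(2r+1)**2, H, W) volume; downstream layers can pick or
    blend candidates, tolerating flow errors around blurry pixels.
    """
    n, c, h, w = neighbor.shape
    # Base sampling grid in pixel coordinates (channel 0 = x, channel 1 = y).
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack([xs, ys]).float().unsqueeze(0).to(neighbor)  # (1,2,H,W)
    candidates = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            pos = base + flow + torch.tensor([dx, dy], device=neighbor.device,
                                             dtype=neighbor.dtype).view(1, 2, 1, 1)
            # Normalize to [-1, 1] for grid_sample.
            gx = 2.0 * pos[:, 0] / (w - 1) - 1.0
            gy = 2.0 * pos[:, 1] / (h - 1) - 1.0
            grid = torch.stack([gx, gy], dim=-1)               # (N, H, W, 2)
            candidates.append(F.grid_sample(neighbor, grid,
                                            align_corners=True))
    return torch.cat(candidates, dim=1)
```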