Recurrent Video Deblurring with Blur-Invariant Motion Estimation and Pixel Volumes
- URL: http://arxiv.org/abs/2108.09982v1
- Date: Mon, 23 Aug 2021 07:36:49 GMT
- Title: Recurrent Video Deblurring with Blur-Invariant Motion Estimation and Pixel Volumes
- Authors: Hyeongseok Son, Junyong Lee, Jonghyeop Lee, Sunghyun Cho, Seungyong Lee
- Abstract summary: We propose two novel approaches to deblurring videos by effectively aggregating information from multiple video frames.
First, we present blur-invariant motion estimation learning to improve motion estimation accuracy between blurry frames.
Second, for motion compensation, instead of aligning frames by warping with estimated motions, we use a pixel volume that contains candidate sharp pixels to resolve motion estimation errors.
- Score: 14.384467317051831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For the success of video deblurring, it is essential to utilize information
from neighboring frames. Most state-of-the-art video deblurring methods adopt
motion compensation between video frames to aggregate information from multiple
frames that can help deblur a target frame. However, the motion compensation
methods adopted by previous deblurring methods are not blur-invariant, and
consequently, their accuracy is limited for blurry frames with different blur
amounts. To alleviate this problem, we propose two novel approaches to deblur
videos by effectively aggregating information from multiple video frames.
First, we present blur-invariant motion estimation learning to improve motion
estimation accuracy between blurry frames. Second, for motion compensation,
instead of aligning frames by warping with estimated motions, we use a pixel
volume that contains candidate sharp pixels to resolve motion estimation
errors. We combine these two processes to propose an effective recurrent video
deblurring network that fully exploits deblurred previous frames. Experiments
show that our method achieves state-of-the-art performance, both quantitatively
and qualitatively, compared to recent deep learning-based methods.
Related papers
- ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler [53.98558445900626]
Current image-to-video diffusion models, while powerful in generating videos from a single frame, need adaptation for two-frame conditioned generation.
We introduce a novel bidirectional sampling strategy that addresses the resulting off-manifold issues without requiring extensive re-noising or fine-tuning.
Our method employs sequential sampling along both forward and backward paths, conditioned on the start and end frames, respectively, ensuring more coherent and on-manifold generation of intermediate frames.
arXiv Detail & Related papers (2024-10-08T03:01:54Z)
- CMTA: Cross-Modal Temporal Alignment for Event-guided Video Deblurring [44.30048301161034]
Video deblurring aims to enhance the quality of restored results in motion-blurred videos by gathering information from adjacent video frames.
We propose two modules: 1) intra-frame feature enhancement, which operates within the exposure time of a single blurred frame, and 2) inter-frame temporal feature alignment, which gathers long-range temporal information relevant to the target frame.
We demonstrate that our proposed methods outperform state-of-the-art frame-based and event-based motion deblurring methods through extensive experiments conducted on both synthetic and real-world deblurring datasets.
arXiv Detail & Related papers (2024-08-27T10:09:17Z)
- Aggregating Long-term Sharp Features via Hybrid Transformers for Video Deblurring [76.54162653678871]
We propose a video deblurring method that leverages both neighboring frames and sharp frames present in the video, using hybrid Transformers for feature aggregation.
Our proposed method outperforms state-of-the-art video deblurring methods as well as event-driven video deblurring methods in terms of quantitative metrics and visual quality.
arXiv Detail & Related papers (2023-09-13T16:12:11Z)
- Efficient Video Deblurring Guided by Motion Magnitude [37.25713728458234]
We propose a novel framework that utilizes the motion magnitude prior (MMP) as guidance for efficient deep video deblurring.
The MMP consists of both spatial and temporal blur level information, which can be further integrated into an efficient recurrent neural network (RNN) for video deblurring.
arXiv Detail & Related papers (2022-07-27T08:57:48Z)
- TTVFI: Learning Trajectory-Aware Transformer for Video Frame Interpolation [50.49396123016185]
Video frame interpolation (VFI) aims to synthesize an intermediate frame between two consecutive frames.
We propose a novel Trajectory-aware Transformer for Video Frame Interpolation (TTVFI)
Our method outperforms other state-of-the-art methods in four widely-used VFI benchmarks.
arXiv Detail & Related papers (2022-07-19T03:37:49Z)
- Non-linear Motion Estimation for Video Frame Interpolation using Space-time Convolutions [18.47978862083129]
Video frame interpolation aims to synthesize one or multiple frames between two consecutive frames in a video.
Some older works tackled this problem by assuming per-pixel linear motion between video frames.
We propose to approximate the per-pixel motion using a space-time convolution network that is able to adaptively select the motion model to be used.
arXiv Detail & Related papers (2022-01-27T09:49:23Z)
- Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z)
- ARVo: Learning All-Range Volumetric Correspondence for Video Deblurring [92.40655035360729]
Video deblurring models exploit consecutive frames to remove blurs from camera shakes and object motions.
We propose a novel implicit method to learn spatial correspondence among blurry frames in the feature space.
Our proposed method is evaluated on the widely-adopted DVD dataset, along with a newly collected High-Frame-Rate (1000 fps) dataset for Video Deblurring.
arXiv Detail & Related papers (2021-03-07T04:33:13Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- FineNet: Frame Interpolation and Enhancement for Face Video Deblurring [18.49184807837449]
The aim of this work is to deblur face videos.
We propose a method that tackles this problem from two directions: (1) enhancing the blurry frames, and (2) treating the blurry frames as missing values and estimating them by interpolation.
Experiments on three real and synthetically generated video datasets show that our method outperforms the previous state-of-the-art methods by a large margin in terms of both quantitative and qualitative results.
arXiv Detail & Related papers (2021-03-01T09:47:16Z)