Event-Based Frame Interpolation with Ad-hoc Deblurring
- URL: http://arxiv.org/abs/2301.05191v1
- Date: Thu, 12 Jan 2023 18:19:00 GMT
- Title: Event-Based Frame Interpolation with Ad-hoc Deblurring
- Authors: Lei Sun, Christos Sakaridis, Jingyun Liang, Peng Sun, Jiezhang Cao,
Kai Zhang, Qi Jiang, Kaiwei Wang, Luc Van Gool
- Abstract summary: We propose a general method for event-based frame interpolation that performs deblurring ad-hoc on input videos.
Our network consistently outperforms state-of-the-art methods on frame interpolation, single image deblurring, and the joint task of interpolation and deblurring.
Our code and dataset will be made publicly available.
- Score: 68.97825675372354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of video frame interpolation is inherently correlated with
the ability to handle motion in the input scene. Even though previous works
recognize the utility of asynchronous event information for this task, they
ignore the fact that motion may or may not result in blur in the input video to
be interpolated, depending on the length of the exposure time of the frames and
the speed of the motion, and assume either that the input video is sharp,
restricting themselves to frame interpolation, or that it is blurry, including
an explicit, separate deblurring stage before interpolation in their pipeline.
We instead propose a general method for event-based frame interpolation that
performs deblurring ad-hoc and thus works both on sharp and blurry input
videos. Our model consists of a bidirectional recurrent network that naturally
incorporates the temporal dimension of interpolation and fuses information from
the input frames and the events adaptively based on their temporal proximity.
In addition, we introduce a novel real-world high-resolution dataset with
events and color videos named HighREV, which provides a challenging evaluation
setting for the examined task. Extensive experiments on the standard GoPro
benchmark and on our dataset show that our network consistently outperforms
previous state-of-the-art methods on frame interpolation, single image
deblurring and the joint task of interpolation and deblurring. Our code and
dataset will be made publicly available.
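The abstract describes fusing frame and event information adaptively based on temporal proximity: near an input frame, frame features are most reliable, while mid-interval the high-rate events carry more information. A minimal sketch of one such soft-weighting scheme (the function names, the sigmoid gating, and the temperature are illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

def proximity_weights(t_target, t_left, t_right, temperature=0.1):
    """Soft fusion weights: trust frame features near an input frame,
    event features far from both frames (illustrative scheme)."""
    # Distance of the target timestamp to the nearest input frame,
    # normalized to [0, 0.5] over the inter-frame interval.
    span = t_right - t_left
    d = min(t_target - t_left, t_right - t_target) / span
    # Sigmoid gate: d ~ 0 favors frames, d ~ 0.5 favors events.
    w_event = 1.0 / (1.0 + np.exp(-(d - 0.25) / temperature))
    return 1.0 - w_event, w_event

def fuse(frame_feat, event_feat, t_target, t_left=0.0, t_right=1.0):
    """Convex combination of frame- and event-branch features."""
    w_frame, w_event = proximity_weights(t_target, t_left, t_right)
    return w_frame * frame_feat + w_event * event_feat
```

The two weights always sum to one, so the fused feature stays in the convex hull of the two branches regardless of the target timestamp.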
Related papers
- CMTA: Cross-Modal Temporal Alignment for Event-guided Video Deblurring [44.30048301161034]
Video deblurring aims to enhance the quality of restored results in motion-blurred videos by gathering information from adjacent video frames.
We propose two modules: 1) Intra-frame feature enhancement operates within the exposure time of a single blurred frame, and 2) Inter-frame temporal feature alignment gathers valuable long-range temporal information to target frames.
We demonstrate that our proposed methods outperform state-of-the-art frame-based and event-based motion deblurring methods through extensive experiments conducted on both synthetic and real-world deblurring datasets.
arXiv Detail & Related papers (2024-08-27T10:09:17Z)
- Event-based Video Frame Interpolation with Edge Guided Motion Refinement [28.331148083668857]
We introduce an end-to-end E-VFI learning method to efficiently utilize edge features from event signals for motion flow and warping enhancement.
Our method incorporates an Edge Guided Attentive (EGA) module, which rectifies estimated video motion through attentive aggregation.
Experiments on both synthetic and real datasets show the effectiveness of the proposed approach.
arXiv Detail & Related papers (2024-04-28T12:13:34Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task - joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- E-VFIA : Event-Based Video Frame Interpolation with Attention [8.93294761619288]
We propose an event-based video frame interpolation method with attention (E-VFIA), a lightweight kernel-based approach.
E-VFIA fuses event information with standard video frames by deformable convolutions to generate high quality interpolated frames.
The proposed method represents events with high temporal resolution and uses a multi-head self-attention mechanism to better encode event-based information.
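A common way to obtain such a high-temporal-resolution event representation is a voxel grid that bins the event stream into temporal channels, splitting each event's polarity between its two nearest bins. A hedged numpy sketch (the function name, bin count, and event layout are assumptions, not necessarily E-VFIA's exact formulation):

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events, given as rows (t, x, y, polarity), into a
    (num_bins, height, width) voxel grid with bilinear temporal weighting."""
    voxel = np.zeros((num_bins, height, width), dtype=np.float64)
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    # Normalize timestamps to [0, num_bins - 1].
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    t0 = np.floor(t_norm).astype(int)
    frac = t_norm - t0
    # Split each event's polarity between its two neighboring bins.
    np.add.at(voxel, (t0, y, x), p * (1.0 - frac))
    valid = t0 + 1 < num_bins
    np.add.at(voxel, (t0[valid] + 1, y[valid], x[valid]), p[valid] * frac[valid])
    return voxel
```

The resulting channels can then be treated as tokens or feature maps for downstream attention or convolution.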
arXiv Detail & Related papers (2022-09-19T21:40:32Z)
- Unifying Motion Deblurring and Frame Interpolation with Events [11.173687810873433]
Slow shutter speed and long exposure time of frame-based cameras often cause visual blur and loss of inter-frame information, degenerating the overall quality of captured videos.
We present a unified framework of event-based motion deblurring and frame interpolation for blurry video enhancement, where the extremely low latency of events is leveraged to alleviate motion blur and facilitate intermediate frame prediction.
By exploring the mutual constraints among blurry frames, latent images, and event streams, we further propose a self-supervised learning framework to enable network training with real-world blurry videos and events.
arXiv Detail & Related papers (2022-03-23T03:43:12Z)
- Video Frame Interpolation without Temporal Priors [91.04877640089053]
Video frame interpolation aims to synthesize non-existent intermediate frames in a video sequence.
The temporal priors of videos, i.e. frames per second (FPS) and frame exposure time, may vary from different camera sensors.
We devise a novel optical flow refinement strategy for better synthesizing results.
arXiv Detail & Related papers (2021-12-02T12:13:56Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.