Event-guided Deblurring of Unknown Exposure Time Videos
- URL: http://arxiv.org/abs/2112.06988v1
- Date: Mon, 13 Dec 2021 19:46:17 GMT
- Title: Event-guided Deblurring of Unknown Exposure Time Videos
- Authors: Taewoo Kim, Jungmin Lee, Lin Wang and Kuk-Jin Yoon
- Abstract summary: Event cameras can capture apparent motion with a high temporal resolution.
We propose a novel Exposure Time-based Event Selection module to selectively use event features.
Our method achieves state-of-the-art performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video deblurring is a highly ill-posed problem due to the loss of motion
information in the blur degradation process. Since event cameras can capture
apparent motion with a high temporal resolution, several attempts have explored
the potential of events for guiding video deblurring. These methods generally
assume that the exposure time is the same as the reciprocal of the video frame
rate. However, this is not true in real situations: the exposure time may be
unknown and vary dynamically depending on the video shooting environment
(e.g., the illumination conditions). In this paper, we address event-guided
video deblurring assuming a dynamically variable, unknown exposure time of
the frame-based camera. To this end, we first derive a new formulation
for event-guided video deblurring by considering the exposure and readout time
in the video frame acquisition process. We then propose a novel end-to-end
learning framework for event-guided video deblurring. In particular, we design
a novel Exposure Time-based Event Selection (ETES) module to selectively use
event features by estimating the cross-modal correlation between the features
from blurred frames and the events. Moreover, we propose a feature fusion
module to effectively fuse the selected features from events and blurred frames.
We conduct extensive experiments on various datasets and demonstrate that our
method achieves state-of-the-art performance. Our project code and pretrained
models will be available.
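A rough sketch of the kind of formulation the abstract alludes to (my notation, not necessarily the paper's): with exposure time T_e and readout time T_r, the frame interval of a camera running at frame rate f is 1/f = T_e + T_r, and the i-th blurred frame averages the latent signal only over its exposure window,

    B_i = \frac{1}{T_e} \int_{t_i}^{t_i + T_e} L(t)\, dt, \qquad t_{i+1} - t_i = T_e + T_r = \frac{1}{f},

where L(t) is the latent sharp frame at time t. The common assumption T_e = 1/f (i.e., T_r = 0) breaks when T_e is unknown and varying, and events fired during the readout gap [t_i + T_e, t_{i+1}] carry no information about the blur in B_i, which is why selecting events by exposure window matters.

To make the selection idea concrete, here is a minimal PyTorch-style sketch of an ETES-like gating module that attenuates event features according to their estimated cross-modal correlation with blurred-frame features; the class name, projection layers, and gating scheme are illustrative assumptions, not the authors' implementation.

    # Hypothetical ETES-like gate (illustrative sketch, not the authors' code).
    # Event features are kept or attenuated according to their per-pixel
    # correlation with features extracted from the blurred frame.
    import torch
    import torch.nn as nn

    class ETESGate(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            # 1x1 convs project both modalities into a shared embedding space.
            self.proj_frame = nn.Conv2d(channels, channels, kernel_size=1)
            self.proj_event = nn.Conv2d(channels, channels, kernel_size=1)

        def forward(self, frame_feat, event_feat):
            f = self.proj_frame(frame_feat)   # (B, C, H, W)
            e = self.proj_event(event_feat)   # (B, C, H, W)
            # Channel-wise dot product -> per-pixel correlation in [0, 1].
            corr = torch.sigmoid((f * e).sum(dim=1, keepdim=True))
            # Events consistent with the blurred frame (i.e., likely fired
            # inside the exposure window) pass; the rest are suppressed.
            return event_feat * corr

The gated event features would then be passed, together with the frame features, to a fusion module such as the one the abstract describes.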
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- CMTA: Cross-Modal Temporal Alignment for Event-guided Video Deblurring [44.30048301161034]
Video deblurring aims to enhance the quality of restored results in motion-blurred videos by gathering information from adjacent video frames.
We propose two modules: 1) Intra-frame feature enhancement operates within the exposure time of a single blurred frame, and 2) Inter-frame temporal feature alignment gathers valuable long-range temporal information for target frames.
We demonstrate that our proposed methods outperform state-of-the-art frame-based and event-based motion deblurring methods through extensive experiments conducted on both synthetic and real-world deblurring datasets.
arXiv Detail & Related papers (2024-08-27T10:09:17Z)
- Event-based Continuous Color Video Decompression from Single Frames [38.59798259847563]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image, using an event camera.
Our approach combines continuous long-range motion modeling with a feature-plane-based neural integration model, enabling frame prediction at arbitrary times within the event stream.
arXiv Detail & Related papers (2023-11-30T18:59:23Z)
- EGVD: Event-Guided Video Deraining [57.59935209162314]
We propose an end-to-end learning-based network to unlock the potential of the event camera for video deraining.
We build a real-world dataset consisting of rainy videos and temporally synchronized event streams.
arXiv Detail & Related papers (2023-09-29T13:47:53Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames (see the sketch after this list).
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- Event-Based Frame Interpolation with Ad-hoc Deblurring [68.97825675372354]
We propose a general method for event-based frame interpolation that performs ad-hoc deblurring on input videos.
Our network consistently outperforms state-of-the-art methods on frame interpolation, single image deblurring, and the joint task of interpolation and deblurring.
Our code and dataset will be made publicly available.
arXiv Detail & Related papers (2023-01-12T18:19:00Z)
- Video Frame Interpolation without Temporal Priors [91.04877640089053]
Video frame interpolation aims to synthesize non-existent intermediate frames in a video sequence.
The temporal priors of videos, i.e. frames per second (FPS) and frame exposure time, may vary from different camera sensors.
We devise a novel optical flow refinement strategy for better synthesis results.
arXiv Detail & Related papers (2021-12-02T12:13:56Z)
- MEFNet: Multi-scale Event Fusion Network for Motion Deblurring [62.60878284671317]
Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times.
As a kind of bio-inspired camera, the event camera records the intensity changes in an asynchronous way with high temporal resolution.
In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network.
arXiv Detail & Related papers (2021-11-30T23:18:35Z)
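For the exposure-aware representation mentioned in the "Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time" entry above, here is a minimal PyTorch-style sketch of a supervised contrastive loss in which blurred frames sharing the same exposure setting serve as positives; the function name, label scheme, and temperature are illustrative assumptions, not the paper's implementation.

    # Hypothetical supervised contrastive loss over exposure-aware embeddings
    # (SupCon-style sketch; not the paper's code).
    import torch
    import torch.nn.functional as F

    def exposure_supcon_loss(z, exposure_ids, temperature=0.1):
        # z: (N, D) embeddings of blurred frames.
        # exposure_ids: (N,) integer labels grouping frames that were
        # captured with the same (unknown) exposure time.
        z = F.normalize(z, dim=1)
        sim = z @ z.t() / temperature                   # (N, N) similarities
        n = z.size(0)
        self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
        pos_mask = exposure_ids.unsqueeze(0).eq(exposure_ids.unsqueeze(1)) & ~self_mask
        # Log-softmax over all other samples, then average over positives.
        sim = sim.masked_fill(self_mask, float('-inf'))
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        pos_count = pos_mask.sum(dim=1).clamp(min=1)    # avoid division by 0
        loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
        return loss.mean()  # samples with no positive contribute 0

Pulling embeddings of same-exposure frames together (and pushing different-exposure frames apart) is one plausible way such a representation could be made exposure-aware before the exposure-adaptive convolution stage the entry mentions.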