MEFNet: Multi-scale Event Fusion Network for Motion Deblurring
- URL: http://arxiv.org/abs/2112.00167v1
- Date: Tue, 30 Nov 2021 23:18:35 GMT
- Title: MEFNet: Multi-scale Event Fusion Network for Motion Deblurring
- Authors: Lei Sun, Christos Sakaridis, Jingyun Liang, Qi Jiang, Kailun Yang,
Peng Sun, Yaozu Ye, Kaiwei Wang, and Luc Van Gool
- Abstract summary: Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times.
As a kind of bio-inspired camera, the event camera records the intensity changes in an asynchronous way with high temporal resolution.
In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network.
- Score: 62.60878284671317
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional frame-based cameras inevitably suffer from motion blur due to
long exposure times. As a kind of bio-inspired camera, the event camera records
the intensity changes in an asynchronous way with high temporal resolution,
providing valid image degradation information within the exposure time. In this
paper, we rethink the event-based image deblurring problem and unfold it into
an end-to-end two-stage image restoration network. To effectively utilize event
information, we design (i) a novel symmetric cumulative event representation
specifically for image deblurring, and (ii) an affine event-image fusion module
applied at multiple levels of our network. We also propose an event mask gated
connection between the two stages of the network so as to avoid information
loss. At the dataset level, to foster event-based motion deblurring and to
facilitate evaluation on challenging real-world images, we introduce the
High-Quality Blur (HQBlur) dataset, captured with an event camera in an
illumination-controlled optical laboratory. Our Multi-Scale Event Fusion
Network (MEFNet) sets the new state of the art for motion deblurring,
surpassing both the prior best-performing image-based method and all
event-based methods with public implementations on the GoPro (by up to 2.38 dB)
and HQBlur datasets, even in extreme blurry conditions. Source code and dataset
will be made publicly available.
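Two of the components named in the abstract map naturally onto short code sketches: the symmetric cumulative event representation, which accumulates polarity-signed events forward and backward from a reference time inside the exposure, and the affine event-image fusion module, which modulates image features with scale and shift maps predicted from event features (in the spirit of FiLM/SFT conditioning). The PyTorch sketch below is an illustrative approximation under those assumptions, not the authors' released implementation; the choice of the exposure midpoint as reference time, the channel counts, and the lack of normalization are placeholders.

```python
import torch
import torch.nn as nn

def symmetric_cumulative_event_frames(events, num_bins, height, width):
    """Illustrative sketch (not the released code): accumulate polarity-signed
    events into 2 * num_bins frames using windows that grow forward and
    backward from the exposure midpoint.  `events` is an (N, 4) float tensor
    of (t, x, y, p), with t normalized to [0, 1] and p in {-1, +1}."""
    t = events[:, 0]
    x = events[:, 1].long()   # integer pixel coordinates
    y = events[:, 2].long()
    p = events[:, 3]
    frames = torch.zeros(2 * num_bins, height, width)
    offsets = torch.linspace(0.0, 0.5, num_bins + 1)  # window half-widths
    for i in range(num_bins):
        fwd = (t >= 0.5) & (t <= 0.5 + offsets[i + 1])  # midpoint -> later
        bwd = (t <= 0.5) & (t >= 0.5 - offsets[i + 1])  # midpoint -> earlier
        frames[i].index_put_((y[fwd], x[fwd]), p[fwd], accumulate=True)
        frames[num_bins + i].index_put_((y[bwd], x[bwd]), p[bwd], accumulate=True)
    return frames


class AffineEventImageFusion(nn.Module):
    """FiLM/SFT-style affine modulation of image features by event features;
    a generic stand-in for the paper's multi-level event-image fusion."""

    def __init__(self, channels):
        super().__init__()
        self.to_scale = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.to_shift = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, image_feat, event_feat):
        # image_feat, event_feat: (B, C, H, W) at the same spatial scale
        return image_feat * (1.0 + self.to_scale(event_feat)) + self.to_shift(event_feat)
```

Under these assumptions, the fusion block would be applied at several encoder/decoder scales of the two-stage restoration network, with event features resized to match each scale.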
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- CMTA: Cross-Modal Temporal Alignment for Event-guided Video Deblurring [44.30048301161034]
Video deblurring aims to enhance the quality of restored results in motion-blurred videos by gathering information from adjacent video frames.
We propose two modules: 1) intra-frame feature enhancement, which operates within the exposure time of a single blurred frame, and 2) inter-frame temporal feature alignment, which gathers valuable long-range temporal information for the target frames.
We demonstrate that our proposed methods outperform state-of-the-art frame-based and event-based motion deblurring methods through extensive experiments conducted on both synthetic and real-world deblurring datasets.
arXiv Detail & Related papers (2023-12-13T15:42:04Z)
- EventAid: Benchmarking Event-aided Image/Video Enhancement Algorithms with Real-captured Hybrid Dataset [55.12137324648253]
Event cameras are an emerging imaging technology that offers advantages over conventional frame-based imaging sensors in dynamic range and sensing speed.
This paper focuses on five event-aided image and video enhancement tasks.
arXiv Detail & Related papers (2023-12-13T15:42:04Z)
- Learning Parallax for Stereo Event-based Motion Deblurring [8.201943408103995]
Existing approaches rely on the perfect pixel-wise alignment between intensity images and events, which is not always fulfilled in the real world.
We propose a novel coarse-to-fine framework, named NETwork of Event-based motion Deblurring with STereo event and intensity cameras (St-EDNet).
We build a new dataset with STereo Event and Intensity Cameras (StEIC), containing real-world events, intensity images, and dense disparity maps.
arXiv Detail & Related papers (2023-09-18T06:51:41Z)
- Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network for Motion Deblurring [7.187030024676791]
Event cameras differ from conventional RGB cameras in that they produce asynchronous data sequences.
While RGB cameras capture every frame at a fixed rate, event cameras only capture changes in the scene, resulting in sparse and asynchronous data output.
Recent state-of-the-art CNN-based deblurring solutions produce multiple 2-D event frames by accumulating event data over a time period.
This flexible fusion of event frames is particularly useful when exposure times vary with factors such as lighting conditions or the presence of fast-moving objects in the scene; a generic sketch of this accumulate-then-fuse idea follows this entry.
arXiv Detail & Related papers (2023-06-01T15:57:12Z)
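The accumulate-then-fuse idea in the entry above can be made concrete: each accumulated 2-D event frame is folded into a running state by a small convolutional gated recurrence, so a variable number of frames (i.e., a variable exposure time) yields a single fused feature map. The cited paper uses deformable convolutions and an LSTM; the sketch below substitutes a compact GRU-style gate purely for illustration, and all channel sizes are placeholders.

```python
import torch
import torch.nn as nn

class RecurrentEventFrameFusion(nn.Module):
    """Illustrative GRU-style convolutional recurrence that fuses a
    variable-length sequence of event frames into one feature map.  Not the
    cited architecture (which uses deformable convolutions and an LSTM);
    channel sizes are placeholders."""

    def __init__(self, in_channels, hidden_channels):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.gates = nn.Conv2d(in_channels + hidden_channels, 2 * hidden_channels,
                               kernel_size=3, padding=1)
        self.candidate = nn.Conv2d(in_channels + hidden_channels, hidden_channels,
                                   kernel_size=3, padding=1)

    def forward(self, event_frames):
        # event_frames: (T, C, H, W); T may vary with the exposure time
        t_steps, _, height, width = event_frames.shape
        state = event_frames.new_zeros(self.hidden_channels, height, width)
        for step in range(t_steps):
            inp = torch.cat([event_frames[step], state], dim=0).unsqueeze(0)
            update, reset = torch.sigmoid(self.gates(inp)).squeeze(0).chunk(2, dim=0)
            cand_inp = torch.cat([event_frames[step], reset * state], dim=0).unsqueeze(0)
            candidate = torch.tanh(self.candidate(cand_inp)).squeeze(0)
            state = (1.0 - update) * state + update * candidate
        return state  # fused event features, independent of T
```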
- Event-based Image Deblurring with Dynamic Motion Awareness [10.81953574179206]
We introduce the first dataset containing pairs of real RGB blur images and related events during the exposure time.
Our results show better robustness overall when using events, with improvements in PSNR by up to 1.57 dB on synthetic data and 1.08 dB on real event data (the PSNR metric is spelled out in the snippet after this entry).
arXiv Detail & Related papers (2022-08-24T09:39:55Z)
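The gains quoted in the entry above, like the 2.38 dB figure in the main abstract, are peak signal-to-noise ratios in decibels. A minimal reference implementation of the metric, assuming images as float arrays normalized to [0, max_val]:

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a restored image and its
    ground truth; inputs are float arrays in [0, max_val]."""
    mse = np.mean((np.asarray(pred, dtype=np.float64) -
                   np.asarray(target, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10((max_val ** 2) / mse)
```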
- Event-guided Deblurring of Unknown Exposure Time Videos [31.992673443516235]
Event cameras can capture apparent motion with a high temporal resolution.
We propose a novel Exposure Time-based Event Selection module to selectively use event features.
Our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-12-13T19:46:17Z)
- Bridging the Gap between Events and Frames through Unsupervised Domain Adaptation [57.22705137545853]
We propose a task transfer method that allows models to be trained directly with labeled images and unlabeled event data.
We leverage the generative event model to split event features into content and motion features.
Our approach unlocks the vast amount of existing image datasets for the training of event-based neural networks.
arXiv Detail & Related papers (2021-09-06T17:31:37Z)
- VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows [93.54888104118822]
We propose a large-scale Visible-Event benchmark (termed VisEvent) due to the lack of a realistic and scaled dataset for this task.
Our dataset consists of 820 video pairs captured under low illumination, high speed, and background clutter scenarios.
Based on VisEvent, we transform the event flows into event images and construct more than 30 baseline methods.
arXiv Detail & Related papers (2021-08-11T03:55:12Z)
- EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning [75.17497166510083]
Event cameras sense intensity changes and have many advantages over conventional cameras.
Some methods have been proposed to reconstruct intensity images from event streams.
The outputs are still in low resolution (LR), noisy, and unrealistic.
We propose a novel end-to-end pipeline, called EventSR, that reconstructs LR images from event streams, enhances image quality, and upsamples the enhanced images.
arXiv Detail & Related papers (2020-03-17T10:58:10Z)