Learning Parallax for Stereo Event-based Motion Deblurring
- URL: http://arxiv.org/abs/2309.09513v1
- Date: Mon, 18 Sep 2023 06:51:41 GMT
- Title: Learning Parallax for Stereo Event-based Motion Deblurring
- Authors: Mingyuan Lin, Chi Zhang, Chu He, Lei Yu
- Abstract summary: Existing approaches rely on the perfect pixel-wise alignment between intensity images and events, which is not always fulfilled in the real world.
We propose a novel coarse-to-fine framework, named NETwork of Event-based motion Deblurring with STereo event and intensity cameras (St-EDNet).
We build a new dataset with STereo Event and Intensity Cameras (StEIC), containing real-world events, intensity images, and dense disparity maps.
- Score: 8.201943408103995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the extremely low latency, events have been recently exploited to
supplement lost information for motion deblurring. Existing approaches largely
rely on the perfect pixel-wise alignment between intensity images and events,
which is not always fulfilled in the real world. To tackle this problem, we
propose a novel coarse-to-fine framework, named NETwork of Event-based motion
Deblurring with STereo event and intensity cameras (St-EDNet), to recover
high-quality images directly from the misaligned inputs, consisting of a single
blurry image and the concurrent event streams. Specifically, the coarse spatial
alignment of the blurry image and the event streams is first implemented with a
cross-modal stereo matching module without the need for ground-truth depths.
Then, a dual-feature embedding architecture is proposed to gradually build the
fine bidirectional association of the coarsely aligned data and reconstruct the
sequence of the latent sharp images. Furthermore, we build a new dataset with
STereo Event and Intensity Cameras (StEIC), containing real-world events,
intensity images, and dense disparity maps. Experiments on real-world datasets
demonstrate the superiority of the proposed network over state-of-the-art
methods.
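The coarse alignment step described in the abstract amounts to warping the event data into the intensity camera's view using an estimated disparity map. The following is a minimal illustrative sketch, not the authors' St-EDNet module: the function name and the hard nearest-neighbor warp are assumptions, and the paper's cross-modal stereo matching module additionally estimates the disparity itself without ground-truth depth.

```python
import numpy as np

def warp_events_by_disparity(event_voxel, disparity):
    """Coarsely align an event representation (T, H, W) with the intensity
    camera's view by shifting each pixel horizontally by its estimated
    disparity. Assumes a rectified stereo pair, so the correspondence is a
    shift along x only; out-of-range sources are clamped to the border."""
    T, H, W = event_voxel.shape
    cols = np.tile(np.arange(W), (H, 1))                    # (H, W) x-indices
    src_x = np.clip(np.rint(cols - disparity).astype(int), 0, W - 1)
    rows = np.tile(np.arange(H)[:, None], (1, W))           # (H, W) y-indices
    return event_voxel[:, rows, src_x]                      # gather per time bin
```

In the real setting the disparity varies per pixel and the warp is typically made differentiable (e.g. bilinear sampling) so the alignment can be trained end-to-end.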
Related papers
- LaSe-E2V: Towards Language-guided Semantic-Aware Event-to-Video Reconstruction [8.163356555241322]
We propose a novel framework, called LaSe-E2V, that can achieve semantic-aware high-quality E2V reconstruction.
We first propose an Event-guided Spatiotemporal Attention (ESA) module to condition the event data to the denoising pipeline effectively.
We then introduce an event-aware mask loss to ensure temporal coherence and a noise strategy to enhance spatial consistency.
arXiv Detail & Related papers (2024-07-08T01:40:32Z)
- CrossZoom: Simultaneously Motion Deblurring and Event Super-Resolving [38.96663258582471]
CrossZoom is a novel unified neural network (CZ-Net) to jointly recover sharp latent sequences within the exposure period of a blurry input and the corresponding High-Resolution (HR) events.
We present a multi-scale blur-event fusion architecture that leverages the scale-variant properties and effectively fuses cross-modality information to achieve cross-enhancement.
We propose a new dataset containing HR sharp-blurry images and the corresponding HR-LR event streams to facilitate future research.
arXiv Detail & Related papers (2023-09-29T03:27:53Z)
- Revisiting Event-based Video Frame Interpolation [49.27404719898305]
Dynamic vision sensors, or event cameras, provide rich complementary information for video frame interpolation.
Estimating optical flow from events is arguably more difficult than from RGB information.
We propose a divide-and-conquer strategy in which event-based intermediate frame synthesis happens incrementally in multiple simplified stages.
arXiv Detail & Related papers (2023-07-24T06:51:07Z)
- Video Frame Interpolation with Stereo Event and Intensity Camera [40.07341828127157]
We propose a novel Stereo Event-based VFI network (SE-VFI-Net) to generate high-quality intermediate frames.
We exploit the fused features accomplishing accurate optical flow and disparity estimation.
Our proposed SE-VFI-Net outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2023-07-17T04:02:00Z)
- Learning to Super-Resolve Blurry Images with Events [62.61911224564196]
Super-Resolution from a single motion Blurred image (SRB) is a severely ill-posed problem due to the joint degradation of motion blurs and low spatial resolution.
We employ events to alleviate the burden of SRB and propose an Event-enhanced SRB (E-SRB) algorithm.
We show that the proposed eSL-Net++ outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2023-02-27T13:46:42Z)
- Event-Based Frame Interpolation with Ad-hoc Deblurring [68.97825675372354]
We propose a general method for event-based frame interpolation that performs deblurring ad-hoc on input videos.
Our network consistently outperforms state-of-the-art methods on frame interpolation, single-image deblurring, and the joint task of interpolation and deblurring.
Our code and dataset will be made publicly available.
arXiv Detail & Related papers (2023-01-12T18:19:00Z)
- Event-based Image Deblurring with Dynamic Motion Awareness [10.81953574179206]
We introduce the first dataset containing pairs of real RGB blur images and related events during the exposure time.
Our results show better robustness overall when using events, with improvements in PSNR by up to 1.57 dB on synthetic data and 1.08 dB on real event data.
arXiv Detail & Related papers (2022-08-24T09:39:55Z)
- MEFNet: Multi-scale Event Fusion Network for Motion Deblurring [62.60878284671317]
Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times.
As a kind of bio-inspired camera, the event camera records the intensity changes in an asynchronous way with high temporal resolution.
In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network.
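A classical formulation that relates a blurry frame to concurrent events, and that motivates learning-based deblurring networks like the one above, is the event-based double integral (EDI) model: the blurry image is the temporal average of latent sharp frames, each obtainable from a reference frame via exponentiated accumulated events. The sketch below is a discrete approximation under assumed names, not MEFNet's actual pipeline; the contrast threshold `c` is taken as known.

```python
import numpy as np

def edi_deblur(blurry, event_frames, c=0.2):
    """Recover a latent sharp image L0 from a blurry frame B and per-bin
    signed event counts using the EDI relation:
        B = (1/T) * sum_t L(t),  with  L(t) = L0 * exp(c * E(t)),
    where E(t) is the event polarity accumulated since the reference time.
    `event_frames` has shape (T, H, W); this is a coarse discretization."""
    E = np.cumsum(event_frames, axis=0)      # accumulated events E(t) per bin
    mean_exp = np.exp(c * E).mean(axis=0)    # (1/T) * sum_t exp(c * E(t))
    return blurry / mean_exp                 # solve B = L0 * mean_exp for L0
```

With no events the blur kernel is trivial and the function returns the input unchanged; in practice `c` varies per pixel and is noisy, which is exactly why learned two-stage restoration networks improve on this closed form.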
arXiv Detail & Related papers (2021-11-30T23:18:35Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
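Learning-based pipelines such as the recurrent architecture above typically first convert the asynchronous event stream into a fixed-size tensor that a CNN or RNN can consume. A common choice is a voxel grid; the sketch below uses simple hard temporal binning (many works use bilinear interpolation in time instead), and the function name is illustrative.

```python
import numpy as np

def events_to_voxel_grid(ts, xs, ys, ps, num_bins, H, W):
    """Accumulate an event stream (timestamps ts, coordinates xs/ys,
    polarities ps) into a (num_bins, H, W) voxel grid. Each event's
    polarity is added to the time bin its timestamp falls in."""
    voxel = np.zeros((num_bins, H, W), dtype=np.float32)
    t0, t1 = ts.min(), ts.max()
    # Normalize timestamps to [0, num_bins); clamp the last event into range.
    bins = ((ts - t0) / max(t1 - t0, 1e-9) * num_bins).astype(int)
    bins = np.clip(bins, 0, num_bins - 1)
    np.add.at(voxel, (bins, ys, xs), ps)     # scatter-add signed polarities
    return voxel
```

The scatter-add (`np.add.at`) is needed because plain fancy-indexed assignment would silently drop events that land on the same (bin, y, x) cell.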
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning [75.17497166510083]
Event cameras sense intensity changes and have many advantages over conventional cameras.
Some methods have been proposed to reconstruct intensity images from event streams.
The outputs are still in low resolution (LR), noisy, and unrealistic.
We propose a novel end-to-end pipeline that reconstructs LR images from event streams, enhances the image qualities and upsamples the enhanced images, called EventSR.
arXiv Detail & Related papers (2020-03-17T10:58:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.