An Event-Oriented Diffusion-Refinement Method for Sparse Events Completion
- URL: http://arxiv.org/abs/2401.03153v1
- Date: Sat, 6 Jan 2024 08:09:54 GMT
- Title: An Event-Oriented Diffusion-Refinement Method for Sparse Events Completion
- Authors: Bo Zhang, Yuqi Han, Jinli Suo, Qionghai Dai
- Abstract summary: Event cameras or dynamic vision sensors (DVS) record asynchronous responses to brightness changes instead of conventional intensity frames.
We propose an inventive event sequence completion approach conforming to the unique characteristics of event data in both the processing stage and the output form.
Specifically, we treat event streams as 3D event clouds in the spatiotemporal domain, develop a diffusion-based generative model to generate dense clouds in a coarse-to-fine manner, and recover exact timestamps to maintain the temporal resolution of the raw data.
- Score: 36.64856578682197
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Event cameras or dynamic vision sensors (DVS) record asynchronous responses to
brightness changes instead of conventional intensity frames, and feature
ultra-high sensitivity at low bandwidth. The new mechanism demonstrates great
advantages in challenging scenarios with fast motion and large dynamic range.
However, the recorded events might be highly sparse due to either limited
hardware bandwidth or extreme photon starvation in harsh environments. To
unlock the full potential of event cameras, we propose an inventive event
sequence completion approach conforming to the unique characteristics of event
data in both the processing stage and the output form. Specifically, we treat
event streams as 3D event clouds in the spatiotemporal domain, develop a
diffusion-based generative model to generate dense clouds in a coarse-to-fine
manner, and recover exact timestamps to maintain the temporal resolution of raw
data successfully. To validate the effectiveness of our method comprehensively,
we perform extensive experiments on three widely used public datasets with
different spatial resolutions, and additionally collect a novel event dataset
covering diverse scenarios with highly dynamic motions and under harsh
illumination. Besides generating high-quality dense events, our method can
benefit downstream applications such as object classification and intensity
frame reconstruction.
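The core representation is easy to picture in code. Below is a minimal sketch, under our own assumptions (function names and details are ours, not from the authors' code release), of how an event stream of (x, y, t, p) tuples could be normalized into the kind of 3D spatiotemporal cloud the method operates on, and how exact timestamps can be recovered afterwards:

```python
import numpy as np

def events_to_cloud(events, width, height):
    """Normalize raw events into a 3D point cloud in [0, 1]^3.

    `events` is an (N, 4) array of (x, y, t, p) rows. Spatial coordinates
    are scaled by the sensor resolution and timestamps by the stream
    duration, so space and time live on comparable scales -- the premise
    of treating an event stream as a spatiotemporal cloud. Polarity can
    be carried along as a per-point attribute.
    """
    t = events[:, 2].astype(np.float64)
    t0, t1 = t.min(), t.max()
    cloud = np.stack([
        events[:, 0] / (width - 1),
        events[:, 1] / (height - 1),
        (t - t0) / max(t1 - t0, 1e-9),
    ], axis=1)
    return cloud, (t0, t1)  # keep (t0, t1) to invert the mapping later

def cloud_to_events(cloud, width, height, t_range):
    """Invert the normalization: recover pixel coordinates and timestamps."""
    t0, t1 = t_range
    x = np.rint(cloud[:, 0] * (width - 1)).astype(np.int64)
    y = np.rint(cloud[:, 1] * (height - 1)).astype(np.int64)
    t = cloud[:, 2] * (t1 - t0) + t0
    return x, y, t
```

A dense cloud produced by the coarse-to-fine diffusion model in this normalized space would then pass back through an inverse mapping like cloud_to_events to restore real pixel coordinates and timestamps; the diffusion and refinement stages themselves are described in the paper.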
Related papers
- Dynamic Subframe Splitting and Spatio-Temporal Motion Entangled Sparse Attention for RGB-E Tracking [32.86991031493605]
Event-based bionic cameras capture dynamic scenes with high temporal resolution and high dynamic range.
We propose a dynamic event subframe splitting strategy to split the event stream into more fine-grained event clusters.
Based on this, we design an event-based sparse attention mechanism to enhance the interaction of event features in temporal and spatial dimensions.
arXiv Detail & Related papers (2024-09-26T06:12:08Z)
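As a rough illustration of the subframe-splitting idea in the entry above (a hypothetical simplification of ours, not the paper's actual strategy), the stream can be cut into clusters holding an equal number of events rather than fixed-duration windows, so that fast motion yields short, dense subframes and quiet periods yield long ones:

```python
import numpy as np

def split_event_stream(t, events_per_subframe=5000):
    """Split a time-sorted event stream into count-balanced subframes.

    `t` is an (N,) array of event timestamps. A fixed event count adapts
    the effective window length to scene dynamics, unlike fixed-duration
    slicing.
    """
    starts = np.arange(0, len(t), events_per_subframe)
    return [(int(s), int(min(s + events_per_subframe, len(t))))
            for s in starts]  # index ranges into the original stream
```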
- LED: A Large-scale Real-world Paired Dataset for Event Camera Denoising [19.51468512911655]
Event cameras have significant advantages in capturing dynamic scene information but are prone to noise interference.
We construct a new paired real-world event denoising dataset (LED), including 3K sequences with 18K seconds of high-resolution (1200×680) event streams.
We propose a novel and effective denoising framework (DED) that uses homogeneous dual events to generate ground truth that better separates noise from the raw data.
arXiv Detail & Related papers (2024-05-30T06:02:35Z)
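For intuition about what event denoising involves, here is a classic background-activity filter, a standard baseline in the literature; it is our own illustrative sketch and is unrelated to the dual-event ground-truth generation used by DED:

```python
import numpy as np

def background_activity_filter(events, width, height, dt=2000):
    """Keep an event only if a nearby pixel fired within `dt` microseconds.

    Isolated events with no recent spatio-temporal neighbours are treated
    as noise. `events` is an (N, 4) array of (x, y, t, p) rows sorted by t.
    """
    last = np.full((height + 2, width + 2), -np.inf)  # padded timestamp map
    keep = np.zeros(len(events), dtype=bool)
    for i, (x, y, t, _) in enumerate(events):
        xi, yi = int(x) + 1, int(y) + 1
        keep[i] = (t - last[yi - 1:yi + 2, xi - 1:xi + 2].max()) <= dt
        last[yi, xi] = t
    return events[keep]
```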
- EventZoom: A Progressive Approach to Event-Based Data Augmentation for Enhanced Neuromorphic Vision [9.447299017563841]
Dynamic Vision Sensors (DVS) capture event data with high temporal resolution and low power consumption.
Event data augmentation serves as an essential method for overcoming the limitations of scale and diversity in event datasets.
arXiv Detail & Related papers (2024-05-29T08:39:31Z)
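A generic sketch of what event-data augmentation can look like (our own minimal example, not EventZoom's progressive scheme) is shown below; each transform preserves the event format while diversifying geometry, polarity, and timing:

```python
import numpy as np

def augment_events(events, width, height, rng):
    """Apply simple stochastic augmentations to an (N, 4) event array.

    Assumes (x, y, t, p) rows with polarity p in {-1, +1}; each transform
    fires independently with probability 0.5.
    """
    out = events.astype(np.float64)           # work on a float copy
    if rng.random() < 0.5:                    # mirror horizontally
        out[:, 0] = (width - 1) - out[:, 0]
    if rng.random() < 0.5:                    # mirror vertically
        out[:, 1] = (height - 1) - out[:, 1]
    if rng.random() < 0.5:                    # swap ON/OFF polarity
        out[:, 3] = -out[:, 3]
    if rng.random() < 0.5:                    # stretch/compress time
        out[:, 2] *= rng.uniform(0.8, 1.25)
    return out

# Usage: aug = augment_events(ev, 346, 260, np.random.default_rng(0))
```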
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields [80.94515892378053]
EvDNeRF is a pipeline for generating event data and training an event-based dynamic NeRF.
NeRFs offer geometric-based learnable rendering, but prior work with events has only considered reconstruction of static scenes.
We show that by training on varied batch sizes of events, we can improve test-time predictions of events at fine time resolutions.
arXiv Detail & Related papers (2023-10-03T21:08:41Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- EventNeRF: Neural Radiance Fields from a Single Colour Event Camera [81.19234142730326]
This paper proposes the first approach for 3D-consistent, dense novel view synthesis using just a single colour event stream as input.
At its core is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels.
We evaluate our method qualitatively and numerically on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings.
arXiv Detail & Related papers (2022-06-23T17:59:53Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
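As background for the entry above, learning-based pipelines usually first convert the asynchronous stream into a dense tensor; the voxel grid below is a common such representation in the event-vision literature (a generic sketch, not the paper's exact pipeline):

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, width, height):
    """Accumulate an (N, 4) event array into a (num_bins, H, W) grid.

    Polarities are scatter-added into temporal bins, turning the
    asynchronous stream into a tensor that a recurrent network can
    consume frame by frame.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 2].astype(np.float64)
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    bins = np.minimum((t_norm * num_bins).astype(np.int64), num_bins - 1)
    x = events[:, 0].astype(np.int64)
    y = events[:, 1].astype(np.int64)
    np.add.at(grid, (bins, y, x), events[:, 3].astype(np.float32))
    return grid
```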
This list is automatically generated from the titles and abstracts of the papers on this site.