EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields
- URL: http://arxiv.org/abs/2310.02437v2
- Date: Wed, 6 Dec 2023 10:57:11 GMT
- Title: EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields
- Authors: Anish Bhattacharya, Ratnesh Madaan, Fernando Cladera, Sai Vemprala,
Rogerio Bonatti, Kostas Daniilidis, Ashish Kapoor, Vijay Kumar, Nikolai
Matni, Jayesh K. Gupta
- Abstract summary: EvDNeRF is a pipeline for generating event data and training an event-based dynamic NeRF.
NeRFs offer geometric-based learnable rendering, but prior work with events has only considered reconstruction of static scenes.
We show that by training on varied batch sizes of events, we can improve test-time predictions of events at fine time resolutions.
- Score: 80.94515892378053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present EvDNeRF, a pipeline for generating event data and training an
event-based dynamic NeRF, for the purpose of faithfully reconstructing
event streams on scenes with rigid and non-rigid deformations that may be too
fast to capture with a standard camera. Event cameras register asynchronous
per-pixel brightness changes at MHz rates with high dynamic range, making them
ideal for observing fast motion with almost no motion blur. Neural radiance
fields (NeRFs) offer visual-quality geometric-based learnable rendering, but
prior work with events has only considered reconstruction of static scenes. Our
EvDNeRF can predict event streams of dynamic scenes from a static or moving
viewpoint between any desired timestamps, thereby allowing it to be used as an
event-based simulator for a given scene. We show that by training on varied
batch sizes of events, we can improve test-time predictions of events at fine
time resolutions, outperforming baselines that pair standard dynamic NeRFs with
event generators. We release our simulated and real datasets, as well as code
for multi-view event-based data generation and the training and evaluation of
EvDNeRF models (https://github.com/anish-bhattacharya/EvDNeRF).
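For context, the event-generation model that such pipelines typically build on is simple: a pixel emits an event whenever its log intensity changes by a contrast threshold C since that pixel's last event. Below is a minimal NumPy sketch of this idealized model; the function and parameter names are illustrative and not from the EvDNeRF codebase.

```python
# Idealized contrast-threshold event camera model (a hedged sketch,
# not the EvDNeRF implementation; names and defaults are assumptions).
import numpy as np

def frames_to_events(frames, timestamps, C=0.2, eps=1e-6):
    """frames: (N, H, W) intensities; timestamps: (N,) seconds.
    Returns an (M, 4) array of events with columns (x, y, t, polarity)."""
    log_f = np.log(frames.astype(np.float64) + eps)
    ref = log_f[0].copy()  # per-pixel log intensity at the last emitted event
    events = []
    for t, img in zip(timestamps[1:], log_f[1:]):
        diff = img - ref
        n_pos = np.maximum(np.floor(diff / C), 0).astype(int)   # +1 crossings
        n_neg = np.maximum(np.floor(-diff / C), 0).astype(int)  # -1 crossings
        for counts, pol in ((n_pos, 1), (n_neg, -1)):
            ys, xs = np.nonzero(counts)
            for xi, yi in zip(xs, ys):
                events.extend([(xi, yi, t, pol)] * counts[yi, xi])
        ref += C * (n_pos - n_neg)  # reference moves only where events fired
    return np.asarray(events, dtype=np.float64)
```

Real event simulators additionally interpolate sub-frame timestamps between renders; here all events in an interval share the later frame's timestamp for brevity.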
Related papers
- E$^3$NeRF: Efficient Event-Enhanced Neural Radiance Fields from Blurry Images [25.304680391243537]
We propose E$^3$NeRF, a novel Efficient Event-Enhanced NeRF.
We leverage spatial-temporal information from the event stream to evenly distribute learning attention over temporal blur.
Experiments on both synthetic data and real-world data demonstrate that E$3$NeRF can effectively learn a sharp NeRF from blurry images.
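The blur-event relation that event-enhanced deblurring methods generally build on is the event double integral (EDI) of Pan et al.: a blurry frame is the temporal average of latent sharp frames, and events connect those frames multiplicatively in log intensity. A hedged sketch of that classic relation, not necessarily E$^3$NeRF's exact formulation:

```python
# EDI relation (sketch under assumed names): B = I(t0) * mean_i exp(C * E(t_i)),
# where E(t_i) is the per-pixel polarity sum from exposure start t0 to time t_i.
import numpy as np

def sharp_from_blur(blur, event_polarity_sums, C=0.2, eps=1e-6):
    """blur: (H, W); event_polarity_sums: (N, H, W) cumulative polarity sums
    at N sample times within the exposure. Returns the latent sharp frame I(t0)."""
    mean_gain = np.exp(C * event_polarity_sums).mean(axis=0)
    return blur / (mean_gain + eps)
```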
arXiv Detail & Related papers (2024-08-03T18:47:31Z) - An Event-Oriented Diffusion-Refinement Method for Sparse Events Completion [36.64856578682197]
Event cameras, or dynamic vision sensors (DVS), record asynchronous responses to brightness changes instead of conventional intensity frames.
We propose an event completion approach that conforms to the unique characteristics of event data in both its processing stage and its output form.
Specifically, we treat event streams as 3D event clouds in the spatiotemporal domain, develop a diffusion-based generative model to generate dense clouds in a coarse-to-fine manner, and recover exact timestamps to preserve the temporal resolution of the raw data.
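As a rough illustration of this representation, events can be embedded as points in a normalized (x, y, t) space and mapped back afterwards, which is what lets the timestamps survive the round trip; a hedged sketch with illustrative names, not the paper's code:

```python
import numpy as np

def events_to_cloud(events, width, height, duration):
    """events: (M, 4) with columns (x, y, t, p). Returns an (M, 3) cloud with
    all axes in [0, 1] (time treated as a third spatial axis) plus polarities."""
    x, y, t, p = events.T
    return np.stack([x / width, y / height, t / duration], axis=1), p

def cloud_to_events(cloud, polarities, width, height, duration):
    """Invert the embedding: snap x, y back to pixels but keep t continuous,
    preserving the temporal resolution of the raw data."""
    x = np.clip(np.rint(cloud[:, 0] * width), 0, width - 1)
    y = np.clip(np.rint(cloud[:, 1] * height), 0, height - 1)
    t = cloud[:, 2] * duration
    return np.stack([x, y, t, polarities], axis=1)
```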
arXiv Detail & Related papers (2024-01-06T08:09:54Z) - Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z) - Neuromorphic Imaging and Classification with Graph Learning [11.882239213276392]
Bio-inspired neuromorphic cameras asynchronously record pixel brightness changes and generate sparse event streams.
Due to the multidimensional address-event structure, most existing vision algorithms cannot properly handle asynchronous event streams.
We propose a new graph representation of the event data and couple it with a Graph Transformer to perform accurate neuromorphic classification.
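One common way to realize such a graph representation is a radius graph over events embedded in a scaled spatio-temporal space; a hedged sketch (the radius, the time scale beta, and the subsampling are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np
from scipy.spatial import cKDTree

def events_to_graph(events, width, height, duration,
                    beta=1.0, radius=0.05, max_nodes=2000, seed=0):
    """Nodes are (subsampled) events; undirected edges connect events within
    `radius` of each other in normalized (x, y, beta * t) space. A graph
    network then consumes the node features and edge list."""
    if len(events) > max_nodes:  # subsample for tractability
        keep = np.random.default_rng(seed).choice(len(events), max_nodes,
                                                  replace=False)
        events = events[np.sort(keep)]
    x, y, t, p = events.T
    pts = np.stack([x / width, y / height, beta * t / duration], axis=1)
    edges = cKDTree(pts).query_pairs(r=radius, output_type='ndarray')  # (E, 2)
    return events, edges
```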
arXiv Detail & Related papers (2023-09-27T12:58:18Z) - Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z) - Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z) - On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks [16.432164340779266]
We propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction.
Our approach uses hypernetworks to generate per-pixel adaptive filters guided by a context fusion module.
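The per-pixel adaptive filtering idea can be sketched as a hypernetwork that maps the context map to a k x k filter at every pixel, which is then applied to the feature map; this PyTorch sketch shows the mechanism only and is not the HyperE2VID architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerPixelFilterHead(nn.Module):
    """A 1x1-conv hypernetwork predicts one k*k filter per channel per pixel
    from a context map and applies it to the features (dynamic filtering)."""
    def __init__(self, ctx_channels, feat_channels, k=3):
        super().__init__()
        self.k = k
        self.hyper = nn.Conv2d(ctx_channels, feat_channels * k * k, kernel_size=1)

    def forward(self, feat, ctx):  # feat: (B, C, H, W), ctx: (B, Cc, H, W)
        B, C, H, W = feat.shape
        k = self.k
        filters = self.hyper(ctx).view(B, C, k * k, H, W).softmax(dim=2)
        # gather each pixel's k x k neighbourhood, then apply its own filter
        patches = F.unfold(feat, k, padding=k // 2).view(B, C, k * k, H, W)
        return (filters * patches).sum(dim=2)  # (B, C, H, W)

# e.g. PerPixelFilterHead(32, 64)(torch.randn(1, 64, 180, 240),
#                                 torch.randn(1, 32, 180, 240))
```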
arXiv Detail & Related papers (2023-05-10T18:00:06Z) - EventNeRF: Neural Radiance Fields from a Single Colour Event Camera [81.19234142730326]
This paper proposes the first approach for 3D-consistent, dense, and photorealistic novel view synthesis using just a single colour event stream as input.
At its core is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels.
We evaluate our method qualitatively and numerically on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings.
arXiv Detail & Related papers (2022-06-23T17:59:53Z)