Deformable Neural Radiance Fields using RGB and Event Cameras
- URL: http://arxiv.org/abs/2309.08416v2
- Date: Mon, 25 Sep 2023 14:41:29 GMT
- Title: Deformable Neural Radiance Fields using RGB and Event Cameras
- Authors: Qi Ma, Danda Pani Paudel, Ajad Chhatkuli, Luc Van Gool
- Abstract summary: We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling Neural Radiance Fields for fast-moving deformable objects from
visual data alone is a challenging problem. A major issue arises from the
combination of fast deformation and the low acquisition rate of standard
cameras. To address this problem, we propose
to use event cameras that offer very fast acquisition of visual change in an
asynchronous manner. In this work, we develop a novel method to model the
deformable neural radiance fields using RGB and event cameras. The proposed
method uses the asynchronous stream of events and calibrated sparse RGB frames.
In our setup, the camera pose at the individual events required to integrate
them into the radiance fields remains unknown. Our method jointly optimizes
these poses and the radiance field. It does so efficiently by processing the
collected events jointly and by actively sampling events during learning.
Experiments conducted on both realistically rendered graphics and real-world
datasets demonstrate a significant benefit of the proposed method over the
state-of-the-art and the compared baseline.
This shows a promising direction for modeling deformable neural radiance
fields in real-world dynamic scenes.
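The event-supervision principle behind such methods can be illustrated with a toy sketch: accumulated event polarities between two timestamps constrain the rendered log-intensity change, and all learnable quantities (here just two scalars standing in for poses and the radiance field) are optimized jointly against that constraint. This is a minimal sketch under assumed names and a one-pixel toy model, not the paper's actual implementation.

```python
import numpy as np

# Contrast threshold of the event camera (hypothetical value).
C = 0.2

def render_log_intensity(params, t):
    """Stand-in for rendering a pixel's log-intensity from a deformable
    radiance field at time t; here just a smooth toy function."""
    a, b = params
    return a * np.sin(t) + b

def event_loss(params, t1, t2, polarity_sum):
    """Accumulated event polarities in (t1, t2] predict the log-intensity
    change: log I(t2) - log I(t1) ~= C * polarity_sum."""
    pred = render_log_intensity(params, t2) - render_log_intensity(params, t1)
    return (pred - C * polarity_sum) ** 2

# Toy event windows: (t1, t2, summed polarity). In the real method the
# camera poses at event times are unknown and optimized jointly; here the
# two scalars in `params` play the role of all learnable quantities.
windows = [(0.0, 0.5, 3.0), (0.5, 1.0, 2.0), (1.0, 1.5, 1.0)]
params = np.array([0.0, 0.0])
lr, eps = 0.1, 1e-5
for _ in range(200):  # finite-difference gradient descent on the toy loss
    grad = np.zeros_like(params)
    for t1, t2, pol in windows:
        for i in range(len(params)):
            hi = params.copy(); hi[i] += eps
            lo = params.copy(); lo[i] -= eps
            grad[i] += (event_loss(hi, t1, t2, pol)
                        - event_loss(lo, t1, t2, pol)) / (2 * eps)
    params = params - lr * grad

total = sum(event_loss(params, *w) for w in windows)
print(total < 0.01)  # the toy fit converges to a small residual
```

The toy loss is deliberately over-determined (three windows, one effective parameter), mirroring how many event windows jointly constrain a much smaller set of pose and field parameters.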
Related papers
- E$^3$NeRF: Efficient Event-Enhanced Neural Radiance Fields from Blurry Images
We propose a novel Efficient Event-Enhanced NeRF (E$^3$NeRF).
We leverage spatial-temporal information from the event stream to evenly distribute learning attention over temporal blur.
Experiments on both synthetic data and real-world data demonstrate that E$3$NeRF can effectively learn a sharp NeRF from blurry images.
arXiv Detail & Related papers (2024-08-03T18:47:31Z)
- Mitigating Motion Blur in Neural Radiance Fields with Events and Frames
We propose a novel approach to enhance NeRF reconstructions under camera motion by fusing frames and events.
We explicitly model the blur formation process, exploiting the event double integral as an additional model-based prior.
We show, on synthetic and real data, that the proposed approach outperforms existing deblur NeRFs that use only frames.
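The event double integral idea used as a blur prior can be sketched for a single pixel: the blurry value is the exposure-time average of the sharp value modulated by the exponentiated integrated event signal, so given the events the sharp value can be recovered. This is a toy discretization under assumed values, not the cited paper's implementation.

```python
import numpy as np

# Event double integral (EDI) blur model, one-pixel toy version.
# With contrast threshold C and integrated event polarity E(t) relative to
# a reference time, the blurry value B relates to the sharp value I_ref by
#   B = (1/T) * integral_0^T I_ref * exp(C * E(t)) dt
C = 0.2
T = 1.0
ts = np.linspace(0.0, T, 100)
E = np.sin(2 * np.pi * ts)           # toy integrated event signal over time
I_ref = 0.5                          # toy sharp pixel intensity

B = np.mean(I_ref * np.exp(C * E))   # discretized double integral (blur)

# Given B and the events, the sharp value is recovered by dividing out
# the event-derived modulation term:
I_rec = B / np.mean(np.exp(C * E))
print(round(I_rec, 6))  # 0.5
```

In the cited work this relation serves as a model-based prior inside the NeRF optimization rather than a closed-form deblurring step, but the one-pixel algebra is the same.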
arXiv Detail & Related papers (2024-03-28T19:06:37Z)
- Implicit Event-RGBD Neural SLAM
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields
EvDNeRF is a pipeline for generating event data and training an event-based dynamic NeRF.
NeRFs offer geometric-based learnable rendering, but prior work with events has only considered reconstruction of static scenes.
We show that by training on varied batch sizes of events, we can improve test-time predictions of events at fine time resolutions.
arXiv Detail & Related papers (2023-10-03T21:08:41Z)
- Event-based Image Deblurring with Dynamic Motion Awareness
We introduce the first dataset containing pairs of real RGB blur images and related events during the exposure time.
Our results show better robustness overall when using events, with improvements in PSNR by up to 1.57 dB on synthetic data and 1.08 dB on real event data.
arXiv Detail & Related papers (2022-08-24T09:39:55Z)
- EventNeRF: Neural Radiance Fields from a Single Colour Event Camera
This paper proposes the first approach for 3D-consistent, dense and novel view synthesis using just a single colour event stream as input.
At its core is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels.
We evaluate our method qualitatively and numerically on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings.
arXiv Detail & Related papers (2022-06-23T17:59:53Z)
- Fast Dynamic Radiance Fields with Time-Aware Neural Voxels
We propose a radiance field framework that represents scenes with time-aware voxel features, named TiNeuVox.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training in only 8 minutes with 8 MB of storage while matching or surpassing the rendering quality of previous dynamic NeRF methods.
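The time-aware voxel idea can be sketched as follows: a spatial feature is interpolated from a voxel grid and concatenated with a sinusoidal time embedding, to be consumed by a small MLP in a full model. This is a toy 1D sketch under assumed shapes, not TiNeuVox's actual architecture.

```python
import numpy as np

# Toy 1D "voxel" grid: 8 voxels, each holding a 4-dim learnable feature.
rng = np.random.default_rng(0)
G = rng.normal(size=(8, 4))

def voxel_feature(x):
    """Linearly interpolate grid features at continuous coordinate x in [0, 7]."""
    i0 = int(np.floor(x)); i1 = min(i0 + 1, 7)
    w = x - i0
    return (1 - w) * G[i0] + w * G[i1]

def time_embedding(t, n=2):
    """Sinusoidal time embedding, as commonly used in dynamic NeRFs."""
    freqs = 2.0 ** np.arange(n)
    return np.concatenate([np.sin(freqs * t), np.cos(freqs * t)])

def time_aware_feature(x, t):
    # Concatenate the spatial voxel feature with the time embedding; a
    # small MLP would map this to density/color in a full model.
    return np.concatenate([voxel_feature(x), time_embedding(t)])

f = time_aware_feature(3.5, 0.25)
print(f.shape)  # (8,)
```

Looking up a feature is a cheap interpolation rather than a deep network evaluation, which is the source of the speedup such explicit-grid methods report.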
arXiv Detail & Related papers (2022-05-30T17:47:31Z)
- Matching Neuromorphic Events and Color Images via Adversarial Learning
We propose the Event-Based Image Retrieval (EBIR) problem as a cross-modal matching task.
We address the EBIR problem by proposing neuromorphic Events-Color image Feature Learning (ECFL).
We also contribute the N-UKbench and EC180 datasets to the community to promote development on the EBIR problem.
arXiv Detail & Related papers (2020-03-02T02:48:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.