EventNeRF: Neural Radiance Fields from a Single Colour Event Camera
- URL: http://arxiv.org/abs/2206.11896v3
- Date: Fri, 24 Mar 2023 16:57:36 GMT
- Title: EventNeRF: Neural Radiance Fields from a Single Colour Event Camera
- Authors: Viktor Rudnev and Mohamed Elgharib and Christian Theobalt and
Vladislav Golyanik
- Abstract summary: This paper proposes the first approach for 3D-consistent, dense and novel view synthesis using just a single colour event stream as input.
At its core is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels.
We evaluate our method qualitatively and numerically on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings.
- Score: 81.19234142730326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Asynchronously operating event cameras find many applications due to their
high dynamic range, vanishingly low motion blur, low latency and low data
bandwidth. The field saw remarkable progress during the last few years, and
existing event-based 3D reconstruction approaches recover sparse point clouds
of the scene. However, such sparsity is a limiting factor in many cases,
especially in computer vision and graphics, that has not been addressed
satisfactorily so far. Accordingly, this paper proposes the first approach for
3D-consistent, dense and photorealistic novel view synthesis using just a
single colour event stream as input. At its core is a neural radiance field
trained entirely in a self-supervised manner from events while preserving the
original resolution of the colour event channels. Next, our ray sampling
strategy is tailored to events and allows for data-efficient training. At test
time, our method produces results in the RGB space at unprecedented quality. We
evaluate our method qualitatively and numerically on several challenging
synthetic and real scenes and show that it produces significantly denser and
more visually appealing renderings than the existing methods. We also
demonstrate robustness in challenging scenarios with fast motion and under low
lighting conditions. We release the newly recorded dataset and our source code
to facilitate the research field, see https://4dqv.mpi-inf.mpg.de/EventNeRF.
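The abstract describes a radiance field supervised directly by events rather than frames. A common formulation of such event-based supervision (a minimal sketch, not the authors' exact loss; the contrast threshold `C` and all function names here are illustrative assumptions) compares the change in rendered log-intensity between two timestamps against the signed event count accumulated in between:

```python
import numpy as np

# Hypothetical sketch of event-based supervision for a NeRF-style model:
# an event camera fires an event whenever the log-intensity at a pixel
# changes by the contrast threshold C, so the difference of two rendered
# log-intensities should match C times the signed event count in between.

C = 0.25  # contrast threshold of the event camera (assumed value)

def event_supervision_loss(log_I_t0, log_I_t1, event_polarity_sum, C=C):
    """Per-pixel L2 loss between the rendered log-intensity change and
    the change implied by the accumulated events."""
    predicted_change = log_I_t1 - log_I_t0    # from two rendered views
    observed_change = C * event_polarity_sum  # from the event stream
    return np.mean((predicted_change - observed_change) ** 2)

# Toy usage: a pixel that brightened enough to trigger two positive events.
log_I_t0 = np.array([0.0])
log_I_t1 = np.array([0.5])
loss = event_supervision_loss(log_I_t0, log_I_t1, np.array([2.0]))
```

Because the loss only ever compares intensity *differences*, the model can be trained without any ground-truth frames, which is what makes the self-supervised setup described above possible.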
Related papers
- E-3DGS: Gaussian Splatting with Exposure and Motion Events [29.042018288378447]
We propose E-3DGS, a novel event-based approach that partitions events into motion and exposure.
We introduce a novel integration of 3DGS with exposure events for high-quality reconstruction of explicit scene representations.
Our method is faster and delivers better reconstruction quality than event-based NeRF while being more cost-effective than NeRF methods.
arXiv Detail & Related papers (2024-10-22T13:17:20Z)
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- E$^3$NeRF: Efficient Event-Enhanced Neural Radiance Fields from Blurry Images [25.304680391243537]
We propose a novel Efficient Event-Enhanced NeRF (E$^3$NeRF).
We leverage spatial-temporal information from the event stream to evenly distribute learning attention over temporal blur.
Experiments on both synthetic data and real-world data demonstrate that E$3$NeRF can effectively learn a sharp NeRF from blurry images.
arXiv Detail & Related papers (2024-08-03T18:47:31Z)
- SpikeNVS: Enhancing Novel View Synthesis from Blurry Images via Spike Camera [78.20482568602993]
Conventional RGB cameras are susceptible to motion blur.
Neuromorphic cameras like event and spike cameras inherently capture more comprehensive temporal information.
Our design can enhance novel view synthesis across NeRF and 3DGS.
arXiv Detail & Related papers (2024-04-10T03:31:32Z)
- An Event-Oriented Diffusion-Refinement Method for Sparse Events Completion [36.64856578682197]
Event cameras or dynamic vision sensors (DVS) record asynchronous response to brightness changes instead of conventional intensity frames.
We propose an event completion approach conforming to the unique characteristics of event data in both the processing stage and the output form.
Specifically, we treat event streams as 3D event clouds in the temporal domain, develop a diffusion-based generative model to generate dense clouds in a coarse-to-fine manner, and recover exact timestamps to preserve the temporal resolution of the raw data.
arXiv Detail & Related papers (2024-01-06T08:09:54Z)
- Event-based Continuous Color Video Decompression from Single Frames [38.59798259847563]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image, using an event camera.
Our approach combines continuous long-range motion modeling with a feature-plane-based neural integration model, enabling frame prediction at arbitrary times within the events.
arXiv Detail & Related papers (2023-11-30T18:59:23Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly cost-free, introducing no obvious training or testing overhead.
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
- EventHands: Real-Time Neural 3D Hand Reconstruction from an Event Stream [80.15360180192175]
3D hand pose estimation from monocular videos is a long-standing and challenging problem.
We address it for the first time using a single event camera, i.e., an asynchronous vision sensor reacting to brightness changes.
Our approach has characteristics previously not demonstrated with a single RGB or depth camera.
arXiv Detail & Related papers (2020-12-11T16:45:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.