E-NeRF: Neural Radiance Fields from a Moving Event Camera
- URL: http://arxiv.org/abs/2208.11300v1
- Date: Wed, 24 Aug 2022 04:53:32 GMT
- Title: E-NeRF: Neural Radiance Fields from a Moving Event Camera
- Authors: Simon Klenk, Lukas Koestler, Davide Scaramuzza, Daniel Cremers
- Abstract summary: Estimating neural radiance fields (NeRFs) from ideal images has been extensively studied in the computer vision community.
We present E-NeRF, the first method which estimates a volumetric scene representation in the form of a NeRF from a fast-moving event camera.
- Score: 83.91656576631031
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating neural radiance fields (NeRFs) from ideal images has been
extensively studied in the computer vision community. Most approaches assume
optimal illumination and slow camera motion. These assumptions are often
violated in robotic applications, where images contain motion blur and the
scene may not have suitable illumination. This can cause significant problems
for downstream tasks such as navigation, inspection or visualization of the
scene. To alleviate these problems we present E-NeRF, the first method which
estimates a volumetric scene representation in the form of a NeRF from a
fast-moving event camera. Our method can recover NeRFs during very fast motion
and in high dynamic range conditions, where frame-based approaches fail. We
show that rendering high-quality frames is possible by only providing an event
stream as input. Furthermore, by combining events and frames, we can estimate
NeRFs of higher quality than state-of-the-art approaches under severe motion
blur. We also show that combining events and frames can overcome failure cases
of NeRF estimation in scenarios where only a few input views are available,
without requiring additional regularization.
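Concretely, event-based NeRF supervision builds on the event generation model: an event of polarity p fires at a pixel once the log intensity there changes by a contrast threshold C, so the log-intensity difference between two renders along the camera trajectory should match the accumulated event polarities times C. The sketch below illustrates this idea in PyTorch. The renderer `render_log_intensity`, the argument names, and the fixed threshold are assumptions for illustration; E-NeRF's actual objective differs in its normalization details, so treat this as the general shape of such a loss, not the paper's implementation.

```python
# Minimal sketch of an event-based NeRF supervision loss (assumptions noted
# in the text above; not E-NeRF's reference implementation).
import torch

def event_loss(render_log_intensity, pose_t0, pose_t1, pixels,
               polarity_sum, contrast_threshold=0.25):
    """Match the rendered log-intensity change between two poses against
    the brightness change implied by accumulated event polarities.

    render_log_intensity : callable (pose, pixels) -> (N,) log intensities,
                           assumed to volume-render from the NeRF (hypothetical)
    pixels               : (N, 2) pixel coordinates that fired events in [t0, t1]
    polarity_sum         : (N,) net signed event count per pixel over [t0, t1]
    contrast_threshold   : assumed-known contrast threshold C
    """
    # Event generation model: an event of polarity p fires when the
    # log intensity at a pixel changes by p * C.
    target = polarity_sum * contrast_threshold

    log_i0 = render_log_intensity(pose_t0, pixels)
    log_i1 = render_log_intensity(pose_t1, pixels)
    predicted = log_i1 - log_i0

    return torch.mean((predicted - target) ** 2)
```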
Related papers
- LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes [38.59630957057759]
We propose a novel model, named LuSh-NeRF, which can reconstruct a clean and sharp NeRF from a group of hand-held low-light images.
LuSh-NeRF includes a Scene-Noise Decomposition module for decoupling the noise from the scene representation.
Experiments show that LuSh-NeRF outperforms existing approaches.
arXiv Detail & Related papers (2024-11-11T07:22:31Z)
- Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions [56.84882059011291]
We propose Deblur e-NeRF, a novel method to reconstruct blur-minimal NeRFs from motion-blurred events.
We also introduce a novel threshold-normalized total variation loss to improve the regularization of large textureless patches.
arXiv Detail & Related papers (2024-09-26T15:57:20Z) - Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z)
- BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields [9.744593647024253]
We present BAD-NeRF, a novel bundle-adjusted deblur Neural Radiance Fields method.
BAD-NeRF is robust to severely motion-blurred images and inaccurate camera poses.
Our approach models the physical image formation process of a motion-blurred image (averaging sharp renders over the exposure time; see the sketch after this list) and jointly learns the parameters of the NeRF.
arXiv Detail & Related papers (2022-11-23T10:53:37Z)
- Robustifying the Multi-Scale Representation of Neural Radiance Fields [86.69338893753886]
We present a robust multi-scale neural radiance fields representation approach to overcome two real-world imaging issues: multi-scale imaging effects and inaccurate camera-pose estimates.
Our method handles both by building on NeRF-inspired approaches to multi-scale rendering and camera-pose refinement.
We demonstrate, with examples, that for an accurate neural representation of an object from day-to-day acquired multi-view images, it is crucial to have precise camera-pose estimates.
arXiv Detail & Related papers (2022-10-09T11:46:45Z)
- Ev-NeRF: Event Based Neural Radiance Field [8.78321125097048]
Ev-NeRF is a Neural Radiance Field derived from event data.
We show that Ev-NeRF achieves competitive performance for intensity image reconstruction under extreme noise conditions.
arXiv Detail & Related papers (2022-06-24T18:27:30Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks, collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
arXiv Detail & Related papers (2021-04-13T17:59:51Z)
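Several of the papers above pair an event term with frame-based photometric supervision, modeling motion blur by averaging sharp renders over the exposure time (the image formation model used by BAD-NeRF). Below is a hedged sketch of such a combined objective, reusing `event_loss` from the earlier sketch; `render_rgb`, the pose sampling, and the weight `lam` are illustrative assumptions, not any paper's reference implementation.

```python
# Hedged sketch of a combined event + frame objective, in the spirit of
# E-NeRF's events-plus-frames setup and BAD-NeRF's blur model. All names
# below are illustrative assumptions.
import torch

def combined_loss(render_rgb, render_log_intensity, frame_rgb, exposure_poses,
                  pixels, event_batch, lam=0.5):
    """frame_rgb      : (N, 3) observed (possibly blurred) pixel colors
    exposure_poses : camera poses sampled within the frame's exposure time
    event_batch    : arguments for event_loss() from the sketch above
    """
    # Blur model: average sharp renders along the intra-exposure trajectory,
    # approximating how a motion-blurred frame is formed (cf. BAD-NeRF).
    renders = torch.stack([render_rgb(p, pixels) for p in exposure_poses])
    blurred = renders.mean(dim=0)
    photometric = torch.mean((blurred - frame_rgb) ** 2)

    # Event term: rendered log-intensity changes must match the
    # accumulated event polarities.
    ev = event_loss(render_log_intensity, *event_batch)
    return photometric + lam * ev
```

Averaging renders along the intra-exposure trajectory lets a sharp underlying scene representation explain a blurred observation, which is why methods of this family can recover sharp NeRFs from severely blurred inputs.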
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above (including all generated summaries) and is not responsible for any consequences of its use.