Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion
- URL: http://arxiv.org/abs/2309.08596v1
- Date: Fri, 15 Sep 2023 17:52:08 GMT
- Title: Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion
- Authors: Weng Fei Low and Gim Hee Lee
- Abstract summary: Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
- Score: 67.15935067326662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras offer many advantages over standard cameras due to their
distinctive principle of operation: low power, low latency, high temporal
resolution and high dynamic range. Nonetheless, the success of many downstream
visual applications also hinges on an efficient and effective scene
representation, where Neural Radiance Field (NeRF) is seen as the leading
candidate. Such promise and potential of event cameras and NeRF inspired recent
works to investigate the reconstruction of NeRF from moving event cameras.
However, these works are mainly limited in terms of the dependence on dense and
low-noise event streams, as well as generalization to arbitrary contrast
threshold values and camera speed profiles. In this work, we propose Robust
e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving
event cameras under various real-world conditions, especially from sparse and
noisy events generated under non-uniform motion. It consists of two key
components: a realistic event generation model that accounts for various
intrinsic parameters (e.g. time-independent, asymmetric threshold and
refractory period) and non-idealities (e.g. pixel-to-pixel threshold
variation), as well as a complementary pair of normalized reconstruction losses
that can effectively generalize to arbitrary speed profiles and intrinsic
parameter values without such prior knowledge. Experiments on real and novel,
realistically simulated sequences verify the effectiveness of our method. Our code, synthetic
dataset and improved event simulator are public.
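For concreteness, below is a minimal sketch of the kind of event generation model the abstract describes: per-pixel, time-independent but asymmetric contrast thresholds, a refractory period, and Gaussian pixel-to-pixel threshold variation as a non-ideality. This is my illustration under stated assumptions, not the authors' released simulator; all names, default values, and the one-event-per-frame-step simplification are mine.

```python
import numpy as np

def generate_events(log_intensity, timestamps, c_pos=0.25, c_neg=0.25,
                    threshold_std=0.03, refractory_period=1e-3, seed=0):
    """Idealized per-pixel event generation (illustrative only; not the
    authors' simulator). `log_intensity` is a (T, H, W) stack of
    log-irradiance frames sampled at `timestamps` (seconds). Emits at most
    one event per pixel per frame step, a deliberate simplification."""
    rng = np.random.default_rng(seed)
    T, H, W = log_intensity.shape
    # Non-ideality: Gaussian pixel-to-pixel contrast threshold variation.
    cp = np.clip(rng.normal(c_pos, threshold_std, (H, W)), 1e-3, None)
    cn = np.clip(rng.normal(c_neg, threshold_std, (H, W)), 1e-3, None)
    ref = log_intensity[0].copy()        # last logged level per pixel
    last_t = np.full((H, W), -np.inf)    # last event time per pixel
    events = []                          # (t, x, y, polarity) tuples
    for k in range(1, T):
        t = timestamps[k]
        diff = log_intensity[k] - ref
        # Asymmetric, time-independent thresholds: positive and negative
        # polarities fire independently; the refractory period suppresses
        # retriggering within `refractory_period` seconds of the last event.
        for pol, c in ((+1, cp), (-1, cn)):
            fire = (pol * diff >= c) & (t - last_t >= refractory_period)
            ys, xs = np.nonzero(fire)
            events.extend((t, x, y, pol) for x, y in zip(xs, ys))
            ref[fire] += pol * c[fire]   # step the reference by one threshold
            last_t[fire] = t
    return sorted(events)
```

Per the abstract, the paper's complementary pair of normalized reconstruction losses is designed so that training does not require knowing such threshold values or the camera speed profile in advance.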
Related papers
- Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions [56.84882059011291]
We propose Deblur e-NeRF, a novel method to reconstruct blur-minimal NeRFs from motion-blurred events.
We also introduce a novel threshold-normalized total variation loss to improve the regularization of large textureless patches (see the sketch after this entry).
arXiv Detail & Related papers (2024-09-26T15:57:20Z) - NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild [55.154625718222995]
- NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild [55.154625718222995]
We introduce NeRF On-the-go, a simple yet effective approach that enables the robust synthesis of novel views in complex, in-the-wild scenes.
Our method demonstrates a significant improvement over state-of-the-art techniques.
arXiv Detail & Related papers (2024-05-29T02:53:40Z) - Mitigating Motion Blur in Neural Radiance Fields with Events and Frames [21.052912896866953]
We propose a novel approach to enhance NeRF reconstructions under camera motion by fusing frames and events.
We explicitly model the blur formation process, exploiting the event double integral as an additional model-based prior (see the sketch after this entry).
We show, on synthetic and real data, that the proposed approach outperforms existing deblur NeRFs that use only frames.
arXiv Detail & Related papers (2024-03-28T19:06:37Z) - CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental
- CF-NeRF: Camera Parameter Free Neural Radiance Fields with Incremental Learning [23.080474939586654]
We propose a novel camera-parameter-free neural radiance field (CF-NeRF).
CF-NeRF incrementally reconstructs 3D representations and recovers the camera parameters, inspired by incremental structure from motion.
Results demonstrate that CF-NeRF is robust to camera rotation and achieves state-of-the-art results without prior information or constraints.
arXiv Detail & Related papers (2023-12-14T09:09:31Z) - EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields [80.94515892378053]
EvDNeRF is a pipeline for generating event data and training an event-based dynamic NeRF.
NeRFs offer geometry-based learnable rendering, but prior work with events has only considered the reconstruction of static scenes.
We show that by training on varied batch sizes of events, we can improve test-time predictions of events at fine time resolutions.
arXiv Detail & Related papers (2023-10-03T21:08:41Z) - E-NeRF: Neural Radiance Fields from a Moving Event Camera [83.91656576631031]
Estimating neural radiance fields (NeRFs) from ideal images has been extensively studied in the computer vision community.
We present E-NeRF, the first method which estimates a volumetric scene representation in the form of a NeRF from a fast-moving event camera.
arXiv Detail & Related papers (2022-08-24T04:53:32Z) - Ev-NeRF: Event Based Neural Radiance Field [8.78321125097048]
Ev-NeRF is a Neural Radiance Field derived from event data.
We show that Ev-NeRF achieves competitive performance for intensity image reconstruction under extreme noise conditions.
arXiv Detail & Related papers (2022-06-24T18:27:30Z) - EventSR: From Asynchronous Events to Image Reconstruction, Restoration,
and Super-Resolution via End-to-End Adversarial Learning [75.17497166510083]
Event cameras sense intensity changes and have many advantages over conventional cameras.
Some methods have been proposed to reconstruct intensity images from event streams.
However, the outputs are still low-resolution (LR), noisy, and unrealistic.
We propose EventSR, a novel end-to-end pipeline that reconstructs LR images from event streams, enhances their quality, and upsamples the enhanced images.
arXiv Detail & Related papers (2020-03-17T10:58:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.