Event-based Camera Tracker by $\nabla$t NeRF
- URL: http://arxiv.org/abs/2304.04559v1
- Date: Fri, 7 Apr 2023 16:03:21 GMT
- Title: Event-based Camera Tracker by $\nabla$t NeRF
- Authors: Mana Masuda, Yusuke Sekikawa, Hideo Saito
- Abstract summary: We show that we can recover the camera pose by minimizing the error between sparse events and the temporal gradient of the scene represented as a neural radiance field (NeRF).
We propose an event-based camera pose tracking framework called TeGRA, which realizes the pose update using the sparse events' observations.
- Score: 11.572930535988325
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: When a camera travels through a 3D world, only a fraction of the pixel values change; an event-based camera observes these changes as sparse events. How can we utilize sparse events to recover the camera pose efficiently? We show that we can recover the camera pose by minimizing the error between the sparse events and the temporal gradient of the scene represented as a neural radiance field (NeRF). To enable the computation of the temporal gradient of the scene, we augment the NeRF's camera pose to be a function of time. When the input pose to the NeRF coincides with the actual pose, the temporal gradient of the NeRF's output equals the observed intensity changes at the event locations. Using this principle, we propose an event-based camera pose tracking framework called TeGRA, which realizes the pose update using the sparse events' observations. To the best of our knowledge, this is the first camera pose estimation algorithm using the scene's implicit representation and the sparse intensity changes from events.
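To make the principle concrete, here is a minimal PyTorch sketch (our own illustration, not the authors' code): parameterize the pose as a function of time, render intensities at the event pixels, and fit the pose velocity so that the temporal gradient of the rendering matches the measured per-event intensity changes. `render_intensity` is a toy stand-in for a trained NeRF, the se(3)-style pose vector is a hypothetical parameterization, and the paper differentiates analytically where this sketch uses finite differences.

```python
import torch

def render_intensity(pose, pixels):
    """Stand-in for a trained NeRF: differentiable intensity per pixel.

    pose:   (6,) se(3)-like pose vector (hypothetical parameterization)
    pixels: (N, 2) pixel coordinates of observed events
    """
    # A toy smooth function of pose and pixel so the sketch runs end to end.
    return torch.sin(pixels @ pose[:2]) + pixels @ pose[2:4] + pose[4:].sum()

def tegra_style_update(pose0, velocity, events, dt=1e-2, lr=1e-2, iters=50):
    """One tracking step: fit the pose path p(t) = pose0 + t * velocity so the
    rendered temporal gradient matches the per-event intensity changes."""
    velocity = velocity.clone().requires_grad_(True)
    opt = torch.optim.Adam([velocity], lr=lr)
    pixels, d_intensity = events  # (N, 2) coords, (N,) measured changes
    for _ in range(iters):
        opt.zero_grad()
        # Temporal gradient of the scene via finite differences over the
        # time-parameterized pose.
        i0 = render_intensity(pose0, pixels)
        i1 = render_intensity(pose0 + dt * velocity, pixels)
        pred_dI = (i1 - i0) / dt
        loss = torch.mean((pred_dI - d_intensity) ** 2)  # sparse-event residual
        loss.backward()
        opt.step()
    return pose0 + dt * velocity.detach()  # updated pose estimate

# Toy usage: 128 random event pixels with zero measured change.
pixels = torch.rand(128, 2)
events = (pixels, torch.zeros(128))
new_pose = tegra_style_update(torch.zeros(6), torch.zeros(6), events)
```

Because the residual is evaluated only at the sparse event locations, the update is cheap compared to rendering and comparing full frames.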
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- IncEventGS: Pose-Free Gaussian Splatting from a Single Event Camera [7.515256982860307]
IncEventGS is an incremental 3D Gaussian splatting reconstruction algorithm that uses a single event camera.
We exploit the tracking and mapping paradigm of conventional SLAM pipelines for IncEventGS.
arXiv Detail & Related papers (2024-10-10T16:54:23Z)
- Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions [56.84882059011291]
We propose Deblur e-NeRF, a novel method to reconstruct blur-minimal NeRFs from motion-blurred events.
We also introduce a novel threshold-normalized total variation loss to improve the regularization of large textureless patches (one plausible form is sketched after this entry).
arXiv Detail & Related papers (2024-09-26T15:57:20Z)
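The exact form of Deblur e-NeRF's threshold-normalized total variation loss is not given in the snippet above; one plausible reading, sketched below in PyTorch, divides a standard TV penalty on a rendered log-intensity patch by the event contrast threshold, so the regularizer is expressed in units of event steps. The function name and the default threshold are our own assumptions.

```python
import torch

def tv_loss_threshold_normalized(img, contrast_threshold=0.25):
    """Total variation of an (H, W) log-intensity image, measured in units of
    the event contrast threshold so the penalty is comparable across sensors.
    This is a hypothetical sketch; the actual loss in the paper may differ."""
    dx = img[:, 1:] - img[:, :-1]   # horizontal finite differences
    dy = img[1:, :] - img[:-1, :]   # vertical finite differences
    tv = dx.abs().mean() + dy.abs().mean()
    return tv / contrast_threshold  # normalize by per-event log-intensity step
```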
- Continuous Pose for Monocular Cameras in Neural Implicit Representation [65.40527279809474]
In this paper, we showcase the effectiveness of optimizing monocular camera poses as a continuous function of time.
We exploit the proposed method in four diverse experimental settings.
Under the assumption of continuous motion, changes in pose may actually live on a manifold with fewer than 6 degrees of freedom (DOF).
We call this low-DOF motion representation the intrinsic motion and use the approach in vSLAM settings, showing impressive camera tracking performance (a sketch of such a low-DOF pose parameterization follows this entry).
arXiv Detail & Related papers (2023-11-28T13:14:58Z)
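As an illustration of the low-DOF idea (our sketch, not the paper's parameterization), a pose trajectory can be written as a polynomial curve in a learned k-dimensional subspace of se(3) with k < 6; all names below are hypothetical.

```python
import torch

class LowDofPoseCurve(torch.nn.Module):
    """A continuous-time pose curve p(t) constrained to a k-dim subspace of se(3)."""

    def __init__(self, k=2, degree=3):
        super().__init__()
        # Learned se(3) directions spanning the "intrinsic motion" subspace.
        self.basis = torch.nn.Parameter(torch.randn(6, k) * 0.01)
        # Polynomial coefficients of the trajectory within that subspace.
        self.coeffs = torch.nn.Parameter(torch.zeros(k, degree + 1))

    def forward(self, t):
        """t: (N,) timestamps -> (N, 6) se(3) pose vectors along the curve."""
        powers = torch.stack([t ** d for d in range(self.coeffs.shape[1])], -1)
        intrinsic = powers @ self.coeffs.T  # (N, k) low-DOF trajectory
        return intrinsic @ self.basis.T     # lift back to 6-DOF poses

# Toy usage: query ten poses along the curve.
poses = LowDofPoseCurve()(torch.linspace(0.0, 1.0, 10))
```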
- Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- A Preliminary Research on Space Situational Awareness Based on Event Cameras [8.27218838055049]
An event camera is a new type of sensor that differs from traditional cameras.
An event is triggered by a change in the brightness incident on a pixel (an idealized version of this model is sketched after this entry).
Compared with traditional cameras, event cameras have the advantages of high temporal resolution, low latency, high dynamic range, low bandwidth and low power consumption.
arXiv Detail & Related papers (2022-03-24T14:36:18Z)
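The trigger rule described above is commonly idealized as follows: a pixel emits an event each time its log-brightness moves a fixed contrast threshold C away from a per-pixel reference level. Below is a frame-based NumPy sketch of that model (our own; real sensors operate asynchronously per pixel, and C = 0.25 is just a typical placeholder).

```python
import numpy as np

def generate_events(log_frames, timestamps, C=0.25):
    """log_frames: (T, H, W) log-brightness; returns (t, y, x, polarity) events."""
    ref = log_frames[0].copy()  # per-pixel reference level
    events = []
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - ref
        fired = np.abs(diff) >= C  # threshold crossing triggers an event
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            pol = int(np.sign(diff[y, x]))
            events.append((timestamps[t], y, x, pol))
            ref[y, x] += C * pol  # move the reference by one threshold step
        # (A real sensor may emit several events per large change; this
        # frame-based sketch emits at most one per pixel per frame.)
    return events
```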
- Event-Based Dense Reconstruction Pipeline [5.341354397748495]
Event cameras are a new type of sensor, different from traditional cameras.
Deep learning is used to reconstruct intensity images from events.
Structure from motion (SfM) is then used to estimate the camera intrinsics, extrinsics, and a sparse point cloud (the two-stage pipeline is sketched after this entry).
arXiv Detail & Related papers (2022-03-23T08:37:04Z)
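A high-level sketch of that two-stage pipeline (our assumption of how the stages chain together, not the authors' code): a learned model such as E2VID turns event windows into intensity frames, and COLMAP's standard CLI then recovers intrinsics, extrinsics, and a sparse point cloud. `reconstruct_frame` is a hypothetical callable returning a PIL-style image, and COLMAP flags may vary across versions.

```python
import subprocess
from pathlib import Path

def run_pipeline(event_windows, reconstruct_frame, workdir="recon"):
    """Stage 1: learned intensity reconstruction; Stage 2: COLMAP SfM."""
    img_dir = Path(workdir) / "images"
    sparse_dir = Path(workdir) / "sparse"
    img_dir.mkdir(parents=True, exist_ok=True)
    sparse_dir.mkdir(parents=True, exist_ok=True)
    for i, window in enumerate(event_windows):
        frame = reconstruct_frame(window)     # hypothetical E2VID-style model
        frame.save(img_dir / f"{i:06d}.png")  # expects a PIL-style image
    db = str(Path(workdir) / "colmap.db")
    for cmd in (
        ["colmap", "feature_extractor", "--database_path", db,
         "--image_path", str(img_dir)],
        ["colmap", "exhaustive_matcher", "--database_path", db],
        ["colmap", "mapper", "--database_path", db,
         "--image_path", str(img_dir), "--output_path", str(sparse_dir)],
    ):
        subprocess.run(cmd, check=True)  # standard COLMAP CLI steps
```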
- EventHands: Real-Time Neural 3D Hand Reconstruction from an Event Stream [80.15360180192175]
3D hand pose estimation from monocular videos is a long-standing and challenging problem.
We address it for the first time using a single event camera, i.e., an asynchronous vision sensor reacting to brightness changes.
Our approach has characteristics previously not demonstrated with a single RGB or depth camera.
arXiv Detail & Related papers (2020-12-11T16:45:34Z)