Event-Based Dense Reconstruction Pipeline
- URL: http://arxiv.org/abs/2203.12270v1
- Date: Wed, 23 Mar 2022 08:37:04 GMT
- Title: Event-Based Dense Reconstruction Pipeline
- Authors: Kun Xiao, Guohui Wang, Yi Chen, Jinghong Nan, Yongfeng Xie
- Abstract summary: Event cameras are a new type of sensor that differs from traditional cameras.
Deep learning is used to reconstruct intensity images from events.
Structure from motion (SfM) is used to estimate camera intrinsics, extrinsics, and a sparse point cloud.
- Score: 5.341354397748495
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Event cameras are a new type of sensor, different from traditional
cameras. Each pixel is triggered asynchronously by events. The triggering event
is a change in the brightness irradiating the pixel: if the increase or
decrease in brightness exceeds a certain threshold, an event is output.
Compared with traditional cameras, event cameras have the advantages of high
dynamic range and no motion blur. Since events are caused by the apparent
motion of intensity edges, the majority of 3D reconstructed maps consist only
of scene edges, i.e., semi-dense maps, which is not enough for some
applications. In this paper, we propose a pipeline to realize event-based dense
reconstruction. First, deep learning is used to reconstruct intensity images
from events. Then, structure from motion (SfM) is used to estimate camera
intrinsics, extrinsics, and a sparse point cloud. Finally, multi-view stereo (MVS)
is used to complete the dense reconstruction.
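The trigger rule described in the abstract can be made concrete with a small simulation. The following is a minimal sketch, assuming a log-intensity response (typical of DVS pixels, though the abstract says only "brightness") and an illustrative contrast threshold of 0.2; neither value comes from the paper. It replays video frames and emits an event wherever the per-pixel change since the last event at that pixel exceeds the threshold.

```python
import numpy as np

def simulate_events(frames, timestamps, threshold=0.2):
    """Emit (t, x, y, polarity) events from a (T, H, W) video array.

    Sketch of the trigger rule in the abstract: a pixel fires whenever its
    brightness has changed by more than a contrast threshold since the last
    event at that pixel. The log-intensity response and the threshold value
    are illustrative assumptions, not values from the paper.
    """
    log_frames = np.log(frames.astype(np.float64) + 1e-6)
    ref = log_frames[0].copy()  # per-pixel brightness level at the last event
    events = []
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - ref
        ys, xs = np.nonzero(diff >= threshold)   # brightness increments
        events += [(timestamps[t], x, y, +1) for x, y in zip(xs, ys)]
        ys, xs = np.nonzero(diff <= -threshold)  # brightness decrements
        events += [(timestamps[t], x, y, -1) for x, y in zip(xs, ys)]
        fired = np.abs(diff) >= threshold
        ref[fired] = log_frames[t][fired]        # reset reference where events fired
    return events
```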
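The three stages of the proposed pipeline could be chained as below. The paper names only the techniques (learned intensity reconstruction, SfM, MVS); the COLMAP command-line calls and the `reconstruct_intensity` helper are assumptions made purely for illustration, with an E2VID-style network being one plausible choice for the learning stage.

```python
import subprocess
from pathlib import Path

def reconstruct_intensity(event_file: Path, out_dir: Path) -> None:
    """Hypothetical stand-in for the deep-learning stage: an events-to-video
    network (e.g. E2VID-style) that writes reconstructed frames to out_dir."""
    raise NotImplementedError("plug in a learned events-to-video model here")

def run_pipeline(event_file: Path, work: Path) -> None:
    """Events -> intensity images -> SfM (intrinsics, extrinsics, sparse
    cloud) -> MVS (dense cloud), with COLMAP assumed for the last two stages."""
    images, sparse, dense = work / "images", work / "sparse", work / "dense"
    for d in (images, sparse, dense):
        d.mkdir(parents=True, exist_ok=True)

    reconstruct_intensity(event_file, images)            # stage 1: deep learning

    db = str(work / "database.db")
    subprocess.run(["colmap", "feature_extractor",       # stage 2: SfM
                    "--database_path", db, "--image_path", str(images)], check=True)
    subprocess.run(["colmap", "exhaustive_matcher", "--database_path", db], check=True)
    subprocess.run(["colmap", "mapper", "--database_path", db,
                    "--image_path", str(images), "--output_path", str(sparse)], check=True)

    subprocess.run(["colmap", "image_undistorter",       # stage 3: MVS
                    "--image_path", str(images), "--input_path", str(sparse / "0"),
                    "--output_path", str(dense)], check=True)
    subprocess.run(["colmap", "patch_match_stereo",
                    "--workspace_path", str(dense)], check=True)
    subprocess.run(["colmap", "stereo_fusion", "--workspace_path", str(dense),
                    "--output_path", str(dense / "fused.ply")], check=True)
```

Note that SfM can only succeed if the reconstructed frames carry enough texture for feature matching, so the quality of stage 1 bounds the whole pipeline.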
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- IncEventGS: Pose-Free Gaussian Splatting from a Single Event Camera [7.515256982860307]
IncEventGS is an incremental 3D Gaussian splatting reconstruction algorithm with a single event camera.
We exploit the tracking and mapping paradigm of conventional SLAM pipelines for IncEventGS.
arXiv Detail & Related papers (2024-10-10T16:54:23Z)
- Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions [56.84882059011291]
We propose Deblur e-NeRF, a novel method to reconstruct blur-minimal NeRFs from motion-blurred events.
We also introduce a novel threshold-normalized total variation loss to improve the regularization of large textureless patches.
arXiv Detail & Related papers (2024-09-26T15:57:20Z)
- Temporal-Mapping Photography for Event Cameras [5.344756442054121]
Event cameras, or Dynamic Vision Sensors (DVS), capture brightness changes as a continuous stream of "events".
Converting sparse events to dense intensity frames faithfully has long been an ill-posed problem.
In this paper, for the first time, we realize the conversion of events to dense intensity images using a stationary event camera in static scenes.
arXiv Detail & Related papers (2024-03-11T05:29:46Z)
- Dense Voxel 3D Reconstruction Using a Monocular Event Camera [5.599072208069752]
Event cameras offer many advantages over conventional frame-based cameras.
Their application in 3D reconstruction for VR applications is underexplored.
We propose a novel approach for solving dense 3D reconstruction using only a single event camera.
arXiv Detail & Related papers (2023-09-01T10:46:57Z)
- Event-based Camera Tracker by $\nabla$t NeRF [11.572930535988325]
We show that we can recover the camera pose by minimizing the error between sparse events and the temporal gradient of the scene represented as a neural radiance field (NeRF); a schematic form of this objective is sketched after this list.
We propose an event-based camera pose tracking framework called TeGRA which realizes the pose update by using sparse event observations.
arXiv Detail & Related papers (2023-04-07T16:03:21Z)
- Multi-Event-Camera Depth Estimation and Outlier Rejection by Refocused Events Fusion [14.15744053080529]
Event cameras are bio-inspired sensors that offer advantages over traditional cameras.
We tackle the problem of event-based stereo 3D reconstruction for SLAM.
We develop fusion theory and apply it to design multi-camera 3D reconstruction algorithms.
arXiv Detail & Related papers (2022-07-21T14:19:39Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
- Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction [51.072733683919246]
We introduce Recurrent Asynchronous Multimodal (RAM) networks to handle asynchronous and irregular data from multiple sensors.
Inspired by traditional RNNs, RAM networks maintain a hidden state that is updated asynchronously and can be queried at any time to generate a prediction; see the sketch after this list.
We show an improvement over state-of-the-art methods by up to 30% in terms of mean depth absolute error.
arXiv Detail & Related papers (2021-02-18T13:24:35Z)
- EventHands: Real-Time Neural 3D Hand Reconstruction from an Event Stream [80.15360180192175]
3D hand pose estimation from monocular videos is a long-standing and challenging problem.
We address it for the first time using a single event camera, i.e., an asynchronous vision sensor reacting to brightness changes.
Our approach has characteristics previously not demonstrated with a single RGB or depth camera.
arXiv Detail & Related papers (2020-12-11T16:45:34Z)
- EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning [75.17497166510083]
Event cameras sense intensity changes and have many advantages over conventional cameras.
Some methods have been proposed to reconstruct intensity images from event streams.
However, the outputs are still low-resolution (LR), noisy, and unrealistic.
We propose EventSR, a novel end-to-end pipeline that reconstructs LR images from event streams, enhances their quality, and upsamples the enhanced images.
arXiv Detail & Related papers (2020-03-17T10:58:10Z)
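For the $\nabla$t NeRF entry above, the pose objective can be written schematically. This is our paraphrase, not the paper's notation: $\hat{L}$ is the (log-)intensity rendered from the radiance field at pixel $\mathbf{u}_k$ under pose $T(t_k)$, $p_k \in \{+1,-1\}$ is the event polarity, $C$ the contrast threshold, and $\Delta t_k$ the time since the last event at that pixel.

$$
\min_{T(\cdot)} \; \sum_{k} \left\| \, p_k\,C \;-\; \frac{\partial \hat{L}\big(\mathbf{u}_k;\, T(t_k)\big)}{\partial t}\,\Delta t_k \, \right\|^2
$$

The sum runs over observed events; once the NeRF is trained, the pose trajectory $T(\cdot)$ is the only unknown.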
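For the Recurrent Asynchronous Multimodal (RAM) networks entry, the update-and-query pattern can be illustrated in a few lines. This is a minimal sketch, assuming one GRU cell per modality and precomputed per-modality feature vectors; the actual RAM architecture differs.

```python
import torch
import torch.nn as nn

class AsyncStateCell(nn.Module):
    """Minimal sketch of the RAM idea: keep a shared hidden state, update it
    whenever any sensor delivers a measurement (at irregular times), and
    decode a prediction from the state on demand. The per-modality GRU cells
    are an illustrative choice, not the paper's architecture."""

    def __init__(self, feat_dim: int, state_dim: int, out_dim: int):
        super().__init__()
        self.cells = nn.ModuleDict({
            "events": nn.GRUCell(feat_dim, state_dim),
            "frames": nn.GRUCell(feat_dim, state_dim),
        })
        self.head = nn.Linear(state_dim, out_dim)
        self.state = None

    def update(self, modality: str, features: torch.Tensor) -> None:
        # Asynchronous update: only the cell of the arriving modality runs.
        if self.state is None:
            self.state = torch.zeros(features.shape[0], self.head.in_features)
        self.state = self.cells[modality](features, self.state)

    def query(self) -> torch.Tensor:
        # The state can be queried at any time to produce a prediction
        # (per-pixel depth, in the paper's monocular depth setting).
        return self.head(self.state)
```

Interleaved event and frame features would call `update` in arrival order, and `query` would be invoked whenever a depth estimate is needed, decoupling prediction times from sensor timestamps.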
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.