Deep Event Visual Odometry
- URL: http://arxiv.org/abs/2312.09800v1
- Date: Fri, 15 Dec 2023 14:00:00 GMT
- Title: Deep Event Visual Odometry
- Authors: Simon Klenk, Marvin Motzet, Lukas Koestler, Daniel Cremers
- Abstract summary: Event cameras offer the exciting possibility of tracking the camera's pose during high-speed motion.
Existing event-based monocular visual odometry approaches demonstrate limited performance on recent benchmarks.
We present Deep Event VO (DEVO), the first monocular event-only system with strong performance on a large number of real-world benchmarks.
- Score: 40.57142632274148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras offer the exciting possibility of tracking the camera's pose
during high-speed motion and in adverse lighting conditions. Despite this
promise, existing event-based monocular visual odometry (VO) approaches
demonstrate limited performance on recent benchmarks. To address this
limitation, some methods resort to additional sensors such as IMUs, stereo
event cameras, or frame-based cameras. Nonetheless, these additional sensors
limit the application of event cameras in real-world devices since they
increase cost and complicate system requirements. Moreover, relying on a
frame-based camera makes the system susceptible to motion blur and to failure in high-dynamic-range (HDR) scenes. To
remove the dependency on additional sensors and to push the limits of using
only a single event camera, we present Deep Event VO (DEVO), the first
monocular event-only system with strong performance on a large number of
real-world benchmarks. DEVO sparsely tracks selected event patches over time. A
key component of DEVO is a novel deep patch selection mechanism tailored to
event data. We significantly decrease the pose tracking error on seven
real-world benchmarks by up to 97% compared to event-only methods and often
surpass or are close to stereo or inertial methods. Code is available at
https://github.com/tum-vision/DEVO
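As a rough illustration of the patch-based idea described in the abstract (DEVO sparsely tracks selected event patches, and its key component is a learned patch selection mechanism), the sketch below converts raw events into a time-binned voxel grid, scores it with a small CNN, and keeps the top-k locations as patch centers. The voxel-grid representation, network shape, and all helper names are illustrative assumptions, not the authors' implementation; the actual code is in the linked repository.

```python
import torch
import torch.nn as nn

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate (t, x, y, polarity) events into a time-binned voxel grid.

    `events` is a float tensor of shape (N, 4) with timestamps normalized to
    [0, 1). This is a common event representation, assumed here for illustration.
    """
    voxel = torch.zeros(num_bins, height, width)
    t, x, y, p = events[:, 0], events[:, 1].long(), events[:, 2].long(), events[:, 3]
    bins = (t * num_bins).clamp(max=num_bins - 1).long()
    voxel.index_put_((bins, y, x), p, accumulate=True)
    return voxel

class PatchScorer(nn.Module):
    """Tiny CNN producing a per-pixel score map from an event voxel grid (illustrative)."""

    def __init__(self, num_bins=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_bins, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, voxel):
        # (num_bins, H, W) -> (H, W) score map
        return self.net(voxel.unsqueeze(0)).squeeze()

def select_patch_centers(score_map, k=96):
    """Keep the k highest-scoring pixel locations as patch centers."""
    width = score_map.shape[1]
    idx = torch.topk(score_map.flatten(), k).indices
    return torch.stack([idx % width, idx // width], dim=1)  # (k, 2) as (x, y)
```

In a complete VO pipeline, the selected patch centers would then be tracked across subsequent event representations and fed to a pose optimizer; this sketch only covers the selection step.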
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions [56.84882059011291]
We propose Deblur e-NeRF, a novel method to reconstruct blur-minimal NeRFs from motion-blurred events.
We also introduce a novel threshold-normalized total variation loss to improve the regularization of large textureless patches.
arXiv Detail & Related papers (2024-09-26T15:57:20Z)
- A Preliminary Research on Space Situational Awareness Based on Event Cameras [8.27218838055049]
An event camera is a new type of sensor that differs from traditional frame-based cameras.
An event is triggered by a change in the brightness incident on a pixel (a minimal simulation of this trigger rule is sketched after this related-papers list).
Compared with traditional cameras, event cameras offer high temporal resolution, low latency, high dynamic range, low bandwidth, and low power consumption.
arXiv Detail & Related papers (2022-03-24T14:36:18Z)
- E$^2$(GO)MOTION: Motion Augmented Event Stream for Egocentric Action Recognition [21.199869051111367]
Event cameras capture pixel-level intensity changes in the form of "events".
N-EPIC-Kitchens is the first event-based camera extension of the large-scale EPIC-Kitchens dataset.
We show that event data provides a comparable performance to RGB and optical flow, yet without any additional flow computation at deploy time.
arXiv Detail & Related papers (2021-12-07T09:43:08Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
- Moving Object Detection for Event-based vision using Graph Spectral Clustering [6.354824287948164]
Moving object detection has been a central topic of discussion in computer vision for its wide range of applications.
We present an unsupervised Graph Spectral Clustering technique for Moving Object Detection in Event-based data.
We additionally show how the optimum number of moving objects can be automatically determined.
arXiv Detail & Related papers (2021-09-30T10:19:22Z)
- EVReflex: Dense Time-to-Impact Prediction for Event-based Obstacle Avoidance [28.88113725832339]
We show that the fusion of events and depth overcomes the failure cases of each individual modality when performing obstacle avoidance.
Our proposed approach unifies event camera and lidar streams to estimate metric time-to-impact without prior knowledge of the scene geometry or obstacles.
arXiv Detail & Related papers (2021-09-01T14:34:20Z)
- EventHands: Real-Time Neural 3D Hand Reconstruction from an Event Stream [80.15360180192175]
3D hand pose estimation from monocular videos is a long-standing and challenging problem.
We address it for the first time using a single event camera, i.e., an asynchronous vision sensor reacting to brightness changes.
Our approach has characteristics previously not demonstrated with a single RGB or depth camera.
arXiv Detail & Related papers (2020-12-11T16:45:34Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
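The per-pixel trigger rule mentioned in the space-situational-awareness entry above (an event fires when the brightness incident on a pixel changes) can be made concrete with a small simulation. The sketch below uses the common idealized model in which an event is emitted whenever the log brightness moves by a fixed contrast threshold since the last event; the threshold value and function names are assumptions for illustration, not taken from that paper.

```python
import numpy as np

def generate_events(log_intensity, timestamps, contrast_threshold=0.2):
    """Idealized single-pixel event camera: emit (timestamp, polarity) pairs
    whenever the log brightness moves by `contrast_threshold` away from the
    last reference level."""
    events = []
    reference = log_intensity[0]
    for t, level in zip(timestamps[1:], log_intensity[1:]):
        while level - reference >= contrast_threshold:   # brightness increased -> positive event
            reference += contrast_threshold
            events.append((t, +1))
        while reference - level >= contrast_threshold:   # brightness decreased -> negative event
            reference -= contrast_threshold
            events.append((t, -1))
    return events

# Example: a pixel whose log brightness ramps up, then partially back down.
ts = np.linspace(0.0, 1.0, 200)
log_I = np.concatenate([np.linspace(0.0, 1.0, 100), np.linspace(1.0, 0.3, 100)])
print(generate_events(log_I, ts))  # a few +1 events followed by a few -1 events
```

Applying the same rule independently at every pixel yields the sparse, asynchronous event streams described throughout the papers above, which is what gives event cameras their high temporal resolution and low bandwidth.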
This list is automatically generated from the titles and abstracts of the papers on this site.