DEVO: Depth-Event Camera Visual Odometry in Challenging Conditions
- URL: http://arxiv.org/abs/2202.02556v1
- Date: Sat, 5 Feb 2022 13:46:47 GMT
- Title: DEVO: Depth-Event Camera Visual Odometry in Challenging Conditions
- Authors: Yi-Fan Zuo, Jiaqi Yang, Jiaben Chen, Xia Wang, Yifu Wang, Laurent
Kneip
- Abstract summary: We present a novel real-time visual odometry framework for a stereo setup of a depth and high-resolution event camera.
Our framework balances accuracy and robustness against computational efficiency to achieve strong performance in challenging scenarios.
- Score: 30.892930944644853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel real-time visual odometry framework for a stereo setup of
a depth and high-resolution event camera. Our framework balances accuracy and
robustness against computational efficiency to achieve strong performance in
challenging scenarios. We extend conventional edge-based semi-dense visual
odometry towards time-surface maps obtained from event streams. Semi-dense
depth maps are generated by warping the corresponding depth values of the
extrinsically calibrated depth camera. The tracking module updates the camera
pose through efficient, geometric semi-dense 3D-2D edge alignment. Our approach
is validated on both public and self-collected datasets captured under various
conditions. We show that the proposed method performs comparably to
state-of-the-art RGB-D camera-based alternatives in regular conditions, and
outperforms them in challenging conditions such as high dynamics or low
illumination.
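
The abstract's pipeline starts from time-surface maps built from the event stream. The sketch below is a minimal illustration of that representation, not the paper's code: the function name, the (x, y, t, polarity) event layout, and the decay constant tau are assumptions chosen for clarity. Each pixel keeps the timestamp of its most recent event and is decayed exponentially toward a reference time, so recently active (typically edge) pixels stand out.

```python
# Hypothetical sketch of a time-surface map; names and tau are illustrative assumptions.
import numpy as np

def time_surface(events, height, width, t_ref, tau=0.03):
    """Exponentially decayed time-surface map evaluated at time t_ref.

    events : iterable of (x, y, t, polarity) tuples; x, y integer pixel coords, t in seconds
    tau    : decay constant in seconds (assumed value, tuned per sensor and scene)
    """
    last_t = np.full((height, width), -np.inf)   # most recent event time per pixel
    for x, y, t, _pol in events:
        if t <= t_ref and t > last_t[y, x]:
            last_t[y, x] = t
    # Recently active pixels decay toward 1, stale or untouched pixels toward 0.
    return np.exp(-(t_ref - last_t) / tau)
```

Thresholding or non-maximum suppression on such a map gives a set of semi-dense edge pixels; the depth values warped over from the extrinsically calibrated depth camera then attach 3D geometry to those pixels.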
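The tracking module's geometric semi-dense 3D-2D edge alignment can likewise be sketched under stated assumptions: 3D edge points (e.g. back-projected from the warped depth map) are projected into the event camera, and the 6-DoF pose is refined so that the projections fall onto nearby time-surface edges. The edge threshold, the bilinear lookup, and the use of SciPy's generic least-squares solver below are placeholders; a real tracker would typically use analytic Jacobians, robust weighting, and a coarse-to-fine scheme.

```python
# Hypothetical sketch of 3D-2D edge alignment against a time-surface edge map.
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def edge_distance_field(ts_map, edge_thresh=0.5):
    """Distance (in pixels) from every pixel to the nearest time-surface edge."""
    edges = ts_map > edge_thresh                 # crude edge mask (assumed threshold)
    return distance_transform_edt(~edges)        # zero on edges, grows away from them

def align_pose(points_3d, dist_field, K, pose0=None):
    """Refine a 6-DoF pose (rotation vector + translation) by edge alignment.

    points_3d  : (N, 3) semi-dense points, e.g. back-projected from the depth
                 map warped from the extrinsically calibrated depth camera
    dist_field : distance transform of the current time-surface edge map
    K          : (3, 3) event-camera intrinsic matrix
    """
    h, w = dist_field.shape
    pose0 = np.zeros(6) if pose0 is None else pose0

    def bilinear(u, v):
        # Interpolate the distance field so the residuals vary smoothly with pose.
        u0 = np.clip(np.floor(u).astype(int), 0, w - 2)
        v0 = np.clip(np.floor(v).astype(int), 0, h - 2)
        du = np.clip(u - u0, 0.0, 1.0)
        dv = np.clip(v - v0, 0.0, 1.0)
        return ((1 - du) * (1 - dv) * dist_field[v0, u0]
                + du * (1 - dv) * dist_field[v0, u0 + 1]
                + (1 - du) * dv * dist_field[v0 + 1, u0]
                + du * dv * dist_field[v0 + 1, u0 + 1])

    def residuals(pose):
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        cam = points_3d @ R.T + pose[3:]         # points in the event-camera frame
        z = np.clip(cam[:, 2:3], 1e-6, None)     # assumes points lie in front of the camera
        uv = (cam / z) @ K.T                     # pinhole projection to pixel coordinates
        u = np.clip(uv[:, 0], 0, w - 1)
        v = np.clip(uv[:, 1], 0, h - 1)
        return bilinear(u, v)                    # pixel distance to the nearest edge

    return least_squares(residuals, pose0, loss="huber").x
```

Running align_pose once per new time-surface, seeded with the previous pose, yields an incremental pose update in the spirit of the tracking step described above.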
Related papers
- Cross-Modal Semi-Dense 6-DoF Tracking of an Event Camera in Challenging
Conditions [29.608665442108727]
Event-based cameras are bio-inspired visual sensors that perform well in HDR conditions and have high temporal resolution.
The present work demonstrates the feasibility of purely event-based tracking if an alternative sensor is permitted for mapping.
The method relies on geometric 3D-2D registration of semi-dense maps and events, and achieves highly reliable and accurate cross-modal tracking results.
arXiv Detail & Related papers (2024-01-16T01:48:45Z)
- EVI-SAM: Robust, Real-time, Tightly-coupled Event-Visual-Inertial State Estimation and 3D Dense Mapping [5.154689086578339]
We propose EVI-SAM to tackle the problem of 6-DoF pose tracking and 3D reconstruction using a monocular event camera.
A novel event-based hybrid tracking framework is designed to estimate the pose, leveraging the robustness of feature matching and the precision of direct alignment.
To the best of our knowledge, this is the first non-learning work to realize event-based dense mapping.
arXiv Detail & Related papers (2023-12-19T07:39:45Z)
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
- Edge-aware Consistent Stereo Video Depth Estimation [3.611754783778107]
We propose a consistent method for dense video depth estimation.
Unlike existing monocular methods, ours operates on stereo videos.
We show that our edge-aware stereo video model can accurately estimate the dense depth maps.
arXiv Detail & Related papers (2023-05-04T08:30:04Z)
- Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography [54.36608424943729]
We show that in a "long-burst", forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
arXiv Detail & Related papers (2022-12-22T18:54:34Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- Robust Consistent Video Depth Estimation [65.53308117778361]
We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video.
Our algorithm combines two complementary techniques: (1) flexible deformation-splines for low-frequency large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details.
In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures containing a significant amount of noise, shake, motion blur, and rolling shutter deformations.
arXiv Detail & Related papers (2020-12-10T18:59:48Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes as a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2d detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.