CodedVO: Coded Visual Odometry
- URL: http://arxiv.org/abs/2407.18240v1
- Date: Thu, 25 Jul 2024 17:54:58 GMT
- Title: CodedVO: Coded Visual Odometry
- Authors: Sachin Shah, Naitri Rajyaguru, Chahat Deep Singh, Christopher Metzler, Yiannis Aloimonos
- Abstract summary: We present CodedVO, a novel monocular visual odometry method that overcomes the scale ambiguity problem.
We evaluate our method in diverse indoor environments and demonstrate its robustness and adaptability.
- Score: 11.33375308762075
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous robots often rely on monocular cameras for odometry estimation and navigation. However, the scale ambiguity problem presents a critical barrier to effective monocular visual odometry. In this paper, we present CodedVO, a novel monocular visual odometry method that overcomes the scale ambiguity problem by employing custom optics to physically encode metric depth information into imagery. By incorporating this information into our odometry pipeline, we achieve state-of-the-art performance in monocular visual odometry with a known scale. We evaluate our method in diverse indoor environments and demonstrate its robustness and adaptability. We achieve a 0.08m average trajectory error in odometry evaluation on the ICL-NUIM indoor odometry dataset.
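The 0.08 m figure reported above is an average trajectory error. As a hedged illustration of how such a metric is commonly computed (this is the standard absolute-trajectory-error recipe with rigid alignment, not the paper's own evaluation code; the function name is ours), the estimated trajectory is aligned to ground truth with Horn's/Kabsch's method and the RMSE of the position residuals is reported:

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """RMSE of position error after rigid (SE(3)) alignment.

    gt, est: (N, 3) arrays of corresponding camera positions.
    A generic ATE sketch, not CodedVO's exact evaluation script.
    """
    gt = np.asarray(gt, float)
    est = np.asarray(est, float)
    mu_g, mu_e = gt.mean(0), est.mean(0)
    # Cross-covariance between the centered trajectories
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections in the optimal rotation
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T            # rotation aligning est to gt
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

Because the alignment removes any global rigid offset, this metric measures only the trajectory's shape error, which is why a known metric scale (as CodedVO provides) matters for reporting it in metres.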
Related papers
- Inertial Guided Uncertainty Estimation of Feature Correspondence in Visual-Inertial Odometry/SLAM [8.136426395547893]
We propose a method to estimate the uncertainty of feature correspondence using inertial guidance.
We also demonstrate the feasibility of our approach by incorporating it into one of recent visual-inertial odometry/SLAM algorithms.
arXiv Detail & Related papers (2023-11-07T04:56:29Z)
- The Drunkard's Odometry: Estimating Camera Motion in Deforming Scenes [79.00228778543553]
This dataset is the first large set of exploratory camera trajectories with ground truth inside 3D scenes.
Simulations in realistic 3D buildings lets us obtain a vast amount of data and ground truth labels.
We present a novel deformable odometry method, dubbed the Drunkard's Odometry, which decomposes optical flow estimates into rigid-body camera motion.
arXiv Detail & Related papers (2023-06-29T13:09:31Z)
- Transformer-based model for monocular visual odometry: a video understanding approach [0.9790236766474201]
We treat monocular visual odometry as a video understanding task to estimate the 6-DoF camera pose.
We contribute by presenting the TSformer-VO model based on spatio-temporal self-attention mechanisms to extract features from clips and estimate the motions in an end-to-end manner.
Our approach achieved competitive state-of-the-art performance compared with geometry-based and deep learning-based methods on the KITTI visual odometry dataset.
arXiv Detail & Related papers (2023-05-10T13:11:23Z)
- Dense Prediction Transformer for Scale Estimation in Monocular Visual Odometry [0.0]
This paper contributes by showing an application of the dense prediction transformer model for scale estimation in monocular visual odometry systems.
Experimental results show that the scale drift problem of monocular systems can be reduced through the accurate estimation of the depth map.
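A common way to turn an accurate depth map into a scale correction, sketched here under our own assumptions (the function and its median-ratio strategy are illustrative, not the paper's exact method), is to robustly align the VO front end's up-to-scale depths to the network's metric depths:

```python
import numpy as np

def recover_scale(metric_depth, relative_depth):
    """Per-frame scale factor via a robust median ratio.

    metric_depth: depths predicted in metres (e.g. by a dense
        prediction transformer).
    relative_depth: up-to-scale depths triangulated by a monocular
        VO front end.
    Illustrative sketch of median-ratio scale alignment only.
    """
    m = np.asarray(metric_depth, float)
    r = np.asarray(relative_depth, float)
    valid = (m > 0) & (r > 0)          # ignore holes and invalid depths
    return float(np.median(m[valid] / r[valid]))
```

Multiplying the estimated translation by this factor frame-by-frame is one simple way such methods counteract monocular scale drift; the median makes the estimate tolerant of depth outliers.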
arXiv Detail & Related papers (2022-10-04T16:29:21Z)
- Information-Theoretic Odometry Learning [83.36195426897768]
We propose a unified information-theoretic framework for learning-based methods aimed at odometry estimation.
The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language.
arXiv Detail & Related papers (2022-03-11T02:37:35Z)
- Probabilistic and Geometric Depth: Detecting Objects in Perspective [78.00922683083776]
3D object detection is an important capability needed in various practical applications such as driver assistance systems.
Monocular 3D detection, as an economical solution compared to conventional settings relying on binocular vision or LiDAR, has drawn increasing attention recently but still yields unsatisfactory results.
This paper first presents a systematic study on this problem and observes that the current monocular 3D detection problem can be simplified as an instance depth estimation problem.
arXiv Detail & Related papers (2021-07-29T16:30:33Z)
- MBA-VO: Motion Blur Aware Visual Odometry [99.56896875807635]
Motion blur is one of the major challenges remaining for visual odometry methods.
In low-light conditions where longer exposure times are necessary, motion blur can appear even for relatively slow camera motions.
We present a novel hybrid visual odometry pipeline with a direct approach that explicitly models and estimates the camera's local trajectory within the exposure time.
arXiv Detail & Related papers (2021-03-25T09:02:56Z)
- A Review of Visual Odometry Methods and Its Applications for Autonomous Driving [0.0]
This paper presents a review of methods that are pertinent to visual odometry with an emphasis on autonomous driving.
Discussions are drawn to outline the problems faced in the current state of research, and to summarise the works reviewed.
arXiv Detail & Related papers (2020-09-19T09:13:27Z)
- Beyond Photometric Consistency: Gradient-based Dissimilarity for Improving Visual Odometry and Stereo Matching [46.27086269084186]
In this paper, we investigate a new metric for registering images that builds upon the idea of the photometric error.
We integrate both into stereo estimation as well as visual odometry systems and show clear benefits for typical disparity and direct image registration tasks.
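To make the idea concrete, here is a minimal sketch of a gradient-based dissimilarity, assuming simple finite-difference gradients (the function is ours and the paper's actual metric may differ): comparing gradients rather than raw intensities makes the measure invariant to additive brightness changes that break plain photometric error.

```python
import numpy as np

def gradient_dissimilarity(ref, tgt):
    """Mean absolute difference of image gradients.

    Unlike raw photometric error, this is unaffected by a constant
    brightness offset between the two images.
    Illustrative sketch only, not the paper's exact formulation.
    """
    ref = np.asarray(ref, float)
    tgt = np.asarray(tgt, float)
    gy_r, gx_r = np.gradient(ref)   # finite-difference gradients
    gy_t, gx_t = np.gradient(tgt)
    return float(np.mean(np.abs(gx_r - gx_t) + np.abs(gy_r - gy_t)))
```

For two identical patches differing only by a global brightness shift, this dissimilarity is (numerically) zero while the photometric error equals the shift, which is the failure mode such metrics are designed to avoid.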
arXiv Detail & Related papers (2020-04-08T16:13:25Z)
- D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry [57.5549733585324]
D3VO is a novel framework for monocular visual odometry that exploits deep networks on three levels -- deep depth, pose and uncertainty estimation.
We first propose a novel self-supervised monocular depth estimation network trained on stereo videos without any external supervision.
We model the photometric uncertainties of pixels on the input images, which improves the depth estimation accuracy.
arXiv Detail & Related papers (2020-03-02T17:47:13Z)
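Pixel-wise photometric uncertainty of the kind D3VO describes is commonly realized as a heteroscedastic (aleatoric) loss; the sketch below shows the generic Laplacian form under our own assumptions and is not D3VO's exact formulation:

```python
import numpy as np

def uncertainty_weighted_photometric_loss(residual, sigma):
    """Heteroscedastic photometric loss: mean of |r|/sigma + log(sigma).

    Each pixel residual r is modeled as Laplacian with a learned scale
    sigma; pixels the network marks as uncertain (reflections, moving
    objects) are down-weighted, while the log(sigma) term stops the
    network from inflating sigma everywhere.
    Generic aleatoric-uncertainty sketch, not D3VO's exact loss.
    """
    r = np.asarray(residual, float)
    s = np.asarray(sigma, float)
    return float(np.mean(np.abs(r) / s + np.log(s)))
```

With sigma fixed at 1 this reduces to the ordinary mean absolute photometric error, so the uncertainty branch can only help where it learns sigma != 1.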
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.