Inertial Guided Uncertainty Estimation of Feature Correspondence in
Visual-Inertial Odometry/SLAM
- URL: http://arxiv.org/abs/2311.03722v1
- Date: Tue, 7 Nov 2023 04:56:29 GMT
- Title: Inertial Guided Uncertainty Estimation of Feature Correspondence in
Visual-Inertial Odometry/SLAM
- Authors: Seongwook Yoon, Jaehyun Kim, and Sanghoon Sull
- Abstract summary: We propose a method to estimate the uncertainty of feature correspondence using inertial guidance.
We also demonstrate the feasibility of our approach by incorporating it into a recent visual-inertial odometry/SLAM algorithm.
- Score: 8.136426395547893
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual odometry and Simultaneous Localization And Mapping (SLAM) have
been studied as some of the most important tasks in computer vision and
robotics, contributing to autonomous navigation and augmented reality systems.
In feature-based odometry/SLAM, a moving visual sensor observes a set of 3D
points from different viewpoints, and correspondences between the projected 2D
points in each image are usually established by feature tracking and matching.
However, since the corresponding points can be erroneous and noisy, reliable
uncertainty estimation can improve the accuracy of odometry/SLAM methods. In
addition, an inertial measurement unit is often utilized to aid the visual
sensor through Visual-Inertial fusion. In this paper, we propose a method to
estimate the uncertainty of feature correspondence using inertial guidance that
is robust to image degradation caused by motion blur, illumination change, and
occlusion. Modeling a guidance distribution to sample possible correspondences,
we fit the distribution to an energy function based on image error, yielding
more robust uncertainty than conventional methods. We also demonstrate the
feasibility of our approach by incorporating it into a recent visual-inertial
odometry/SLAM algorithm and evaluating it on public datasets.
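The abstract's core idea can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: it assumes an isotropic Gaussian guidance distribution centered on the inertially predicted 2D location, a caller-supplied image-error function `patch_error`, and a Boltzmann-style weighting of the energy; all names, parameters, and the specific weighting scheme are hypothetical choices for illustration.

```python
import numpy as np

def estimate_correspondence_uncertainty(patch_error, predicted_pt,
                                        sigma_guidance=2.0,
                                        n_samples=200, seed=0):
    """Sketch: sample candidate correspondences around the IMU-predicted
    point, weight each sample by an image-error energy, and fit a 2D
    Gaussian (mean + covariance) as the correspondence uncertainty.
    `patch_error` maps a 2D point to a nonnegative image error."""
    rng = np.random.default_rng(seed)
    # Guidance distribution: isotropic Gaussian centered on the
    # inertially predicted 2D location (hypothetical modeling choice).
    samples = predicted_pt + sigma_guidance * rng.standard_normal((n_samples, 2))
    # Energy from the image error at each sampled location; lower error
    # receives higher weight (Boltzmann-style weighting, illustrative).
    energies = np.array([patch_error(p) for p in samples])
    weights = np.exp(-(energies - energies.min()))
    weights /= weights.sum()
    # Weighted mean and covariance of the fitted 2D Gaussian.
    mean = weights @ samples
    diff = samples - mean
    cov = (weights[:, None] * diff).T @ diff
    return mean, cov
```

The returned covariance can then be fed to an odometry/SLAM back end as the per-correspondence measurement noise, which is how reliable uncertainty would improve accuracy in the setting the abstract describes.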
Related papers
- MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior rendering methods by enabling faster scale awareness and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z) - GUPNet++: Geometry Uncertainty Propagation Network for Monocular 3D
Object Detection [95.8940731298518]
We propose a novel Geometry Uncertainty Propagation Network (GUPNet++).
It models the uncertainty propagation relationship of the geometry projection during training, improving the stability and efficiency of end-to-end model learning.
Experiments show that the proposed approach not only obtains state-of-the-art (SOTA) performance in image-based monocular 3D detection but also demonstrates superior efficacy with a simplified framework.
arXiv Detail & Related papers (2023-10-24T08:45:15Z) - EDI: ESKF-based Disjoint Initialization for Visual-Inertial SLAM Systems [9.937997167972743]
We propose a novel approach for fast, accurate, and robust visual-inertial initialization.
Our method achieves an average scale error of 5.8% in less than 3 seconds.
arXiv Detail & Related papers (2023-08-04T19:06:58Z) - On the Importance of Accurate Geometry Data for Dense 3D Vision Tasks [61.74608497496841]
Training on inaccurate or corrupt data induces model bias and hampers generalisation capabilities.
This paper investigates the effect of sensor errors for the dense 3D vision tasks of depth estimation and reconstruction.
arXiv Detail & Related papers (2023-03-26T22:32:44Z) - Differentiable Uncalibrated Imaging [25.67247660827913]
We propose a differentiable imaging framework to address uncertainty in measurement coordinates such as sensor locations and projection angles.
We apply implicit neural networks, also known as neural fields, which are naturally differentiable with respect to the input coordinates.
Differentiability is key as it allows us to jointly fit a measurement representation, optimize over the uncertain measurement coordinates, and perform image reconstruction which in turn ensures consistent calibration.
arXiv Detail & Related papers (2022-11-18T22:48:09Z) - A Model for Multi-View Residual Covariances based on Perspective
Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z) - Probabilistic and Geometric Depth: Detecting Objects in Perspective [78.00922683083776]
3D object detection is an important capability needed in various practical applications such as driver assistance systems.
Monocular 3D detection, as an economical solution compared to conventional settings relying on binocular vision or LiDAR, has drawn increasing attention recently but still yields unsatisfactory results.
This paper first presents a systematic study on this problem and observes that the current monocular 3D detection problem can be simplified as an instance depth estimation problem.
arXiv Detail & Related papers (2021-07-29T16:30:33Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian
Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - Beyond Photometric Consistency: Gradient-based Dissimilarity for
Improving Visual Odometry and Stereo Matching [46.27086269084186]
In this paper, we investigate a new metric for registering images that builds upon the idea of the photometric error.
We integrate both into stereo estimation as well as visual odometry systems and show clear benefits for typical disparity and direct image registration tasks.
arXiv Detail & Related papers (2020-04-08T16:13:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.