IDLS: Inverse Depth Line based Visual-Inertial SLAM
- URL: http://arxiv.org/abs/2304.11748v2
- Date: Sun, 30 Jun 2024 07:50:27 GMT
- Title: IDLS: Inverse Depth Line based Visual-Inertial SLAM
- Authors: Wanting Li, Shuo Wang, Yongcai Wang, Yu Shao, Xuewei Bai, Deying Li
- Abstract summary: Inverse Depth Line SLAM (IDLS) is proposed to track line features in SLAM accurately and efficiently.
IDLS is extensively evaluated on multiple perceptually-challenging datasets.
- Score: 9.38589798999922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For robust visual-inertial SLAM in perceptually-challenging indoor environments, recent studies exploit line features to extract descriptive information about scene structure and to deal with the degeneracy of point features. But existing point-line-based SLAM methods mainly use the Plücker matrix or the orthogonal representation for a line, both of which require at least four variables to determine a line. Given the numerous line features to determine in each frame, this overly flexible line representation increases the computation burden and compromises the accuracy of the results. In this paper, we propose an inverse depth representation for a line, which models each extracted line feature using only two variables, i.e., the inverse depths of its two endpoints. It exploits the fact that the projected line's pixel coordinates on the image plane are rather accurate, which partially restricts the line. Using this compact line representation, Inverse Depth Line SLAM (IDLS) is proposed to track line features in SLAM accurately and efficiently. A robust line triangulation method and a novel line re-projection error model are introduced, and a two-step optimization method is proposed that first determines the lines and then estimates the camera poses in each frame. IDLS is extensively evaluated on multiple perceptually-challenging datasets. The results show that it is more accurate and robust, and needs lower computational overhead, than current state-of-the-art point-line-based SLAM methods.
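The abstract gives no pseudocode, so the following is a minimal numpy sketch of the stated idea, under our own assumptions: each line is parameterized only by the inverse depths of its two endpoints, the detected pixel coordinates are treated as fixed, and the residual is a standard endpoint-to-line distance (the paper's exact re-projection error model may differ). All names are illustrative.

```python
import numpy as np

def backproject(K_inv, uv, inv_depth):
    """Back-project pixel (u, v) with inverse depth rho to a 3D point
    in the camera frame: X = (1 / rho) * K^-1 [u, v, 1]^T."""
    ray = K_inv @ np.array([uv[0], uv[1], 1.0])
    return ray / inv_depth

def line_reprojection_error(K, T_21, uv1a, uv1b, rho_a, rho_b, uv2a, uv2b):
    """Two residuals for one line: the only line unknowns are rho_a, rho_b.
    T_21 is the 4x4 pose taking frame-1 points into frame 2."""
    K_inv = np.linalg.inv(K)
    R, t = T_21[:3, :3], T_21[:3, 3]
    # 3D endpoints in frame 1: fixed pixel coords plus two inverse depths.
    Pa = backproject(K_inv, uv1a, rho_a)
    Pb = backproject(K_inv, uv1b, rho_b)
    # Transform into frame 2 and project.
    pa = K @ (R @ Pa + t); pa = pa[:2] / pa[2]
    pb = K @ (R @ Pb + t); pb = pb[:2] / pb[2]
    # Observed infinite line in view 2, l ~ x_a cross x_b, normalized so
    # that l . [u, v, 1] is a distance in pixels.
    xa = np.array([uv2a[0], uv2a[1], 1.0])
    xb = np.array([uv2b[0], uv2b[1], 1.0])
    l = np.cross(xa, xb)
    l = l / np.linalg.norm(l[:2])
    return np.array([abs(l @ np.append(pa, 1.0)),
                     abs(l @ np.append(pb, 1.0))])
```

Two residuals per line, with two unknowns per line, is what keeps the optimization compact compared with four-parameter line representations.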
Related papers
- PAPL-SLAM: Principal Axis-Anchored Monocular Point-Line SLAM [20.228993972678595]
In this paper, we address the utilization of line structural information and the optimization of lines in point-line SLAM systems.
We anchor lines with similar directions to a principal axis and optimize them with $n+2$ parameters for $n$ lines, solving both problems together.
Our method considers scene structural information and can be easily extended to different world hypotheses.
arXiv Detail & Related papers (2024-10-16T07:44:56Z)
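The abstract alone does not pin down the $n+2$ parameterization; one reading consistent with the count is sketched below: two angles for the shared principal axis, plus one offset scalar per line once the line's direction is anchored to the axis and its position is confined to the back-projection plane of its image observation. This is an assumption for illustration, not the paper's definitive formulation.

```python
import numpy as np

def axis_from_angles(theta, phi):
    """Shared principal axis as a unit vector: 2 parameters for all lines."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def line_from_offset(axis, plane_normal, offset):
    """One scalar per line: the line's direction is anchored to `axis`, and
    its image observation confines it to the back-projection plane through
    the camera center (origin here) with normal `plane_normal`. The remaining
    freedom is a single offset along u = plane_normal x axis."""
    u = np.cross(plane_normal, axis)
    u = u / np.linalg.norm(u)          # degenerate if axis is parallel to plane_normal
    return offset * u, axis            # line: point + s * axis

# n = 3 lines cost n + 2 = 5 parameters in total.
axis = axis_from_angles(0.3, 1.1)                       # 2 shared parameters
offsets = [0.5, 1.2, 2.0]                               # 1 parameter per line
normals = [np.array([0.0, 1.0, -0.2]),
           np.array([0.2, 0.9, 0.1]),
           np.array([-0.1, 1.0, 0.0])]
lines = [line_from_offset(axis, n / np.linalg.norm(n), o)
         for n, o in zip(normals, offsets)]
```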
- LDL: Line Distance Functions for Panoramic Localization [22.46846444866008]
We introduce LDL, an algorithm that localizes a panorama to a 3D map using line segments.
Our method effectively observes the holistic distribution of lines within panoramic images and 3D maps.
arXiv Detail & Related papers (2023-08-27T02:57:07Z)
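The central object in LDL is a distance function to a set of line segments. A minimal 2D version (illustrative, not the paper's code) shows the idea; LDL compares such distance representations computed from the panorama and from lines rendered out of the 3D map.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)   # clamp projection to the segment
    return np.linalg.norm(p - (a + t * ab))

def line_distance_map(segments, height, width):
    """Per-pixel distance to the nearest segment (brute force for clarity)."""
    dist = np.full((height, width), np.inf)
    for y in range(height):
        for x in range(width):
            p = np.array([x, y], dtype=float)
            for a, b in segments:
                dist[y, x] = min(dist[y, x], point_segment_distance(p, a, b))
    return dist

segs = [(np.array([2.0, 3.0]), np.array([20.0, 3.0])),
        (np.array([10.0, 0.0]), np.array([10.0, 15.0]))]
d = line_distance_map(segs, 16, 24)   # compare maps from panorama vs. 3D map
```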
- 3D Line Mapping Revisited [86.13455066577657]
LIMAP is a library for 3D line mapping that robustly and efficiently creates 3D line maps from multi-view imagery.
Our code integrates seamlessly with existing point-based Structure-from-Motion methods.
Our robust 3D line maps also open up new research directions.
arXiv Detail & Related papers (2023-03-30T16:14:48Z)
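Line mapping builds on line triangulation: an image line back-projects to a plane through the camera center, and the 3D line is the intersection of two such planes. Below is a sketch of this standard multi-view geometry construction, not LIMAP's actual API.

```python
import numpy as np

def backprojected_plane(P, l):
    """An image line l (homogeneous 3-vector) back-projects through the
    3x4 projection matrix P to the plane pi = P^T l."""
    return P.T @ l

def triangulate_line(P1, l1, P2, l2):
    """3D line as the intersection of two back-projected planes.
    Returns (point_on_line, unit_direction); degenerate if the planes
    are (near-)parallel, which robust pipelines must detect."""
    pi1, pi2 = backprojected_plane(P1, l1), backprojected_plane(P2, l2)
    n1, d1 = pi1[:3], pi1[3]
    n2, d2 = pi2[:3], pi2[3]
    direction = np.cross(n1, n2)
    direction = direction / np.linalg.norm(direction)
    # Point on both planes; the third row picks the point closest to origin.
    A = np.vstack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    return np.linalg.lstsq(A, b, rcond=None)[0], direction
```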
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers and achieved performance comparable to pure data-driven networks while using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
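The unfolding pattern itself is compact: each iteration of a model-based solver becomes a layer whose constants become learnable weights. The generic sketch below unrolls gradient descent on a least-squares objective; it illustrates unfolding in general, not the paper's GDPA-based network.

```python
import numpy as np

def unrolled_solver(A, b, x0, step_sizes):
    """Unfold K iterations of gradient descent on f(x) = 0.5 * ||Ax - b||^2.
    Each entry of step_sizes plays the role of a learnable per-layer weight;
    training would tune these end-to-end instead of fixing them by hand."""
    x = x0
    for alpha in step_sizes:              # one "neural layer" per iteration
        x = x - alpha * (A.T @ (A @ x - b))
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
x = unrolled_solver(A, b, np.zeros(2), step_sizes=[0.3, 0.3, 0.3, 0.3])
```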
- SOLD2: Self-supervised Occlusion-aware Line Description and Detection [95.8719432775724]
We introduce the first joint detection and description of line segments in a single deep network.
Our method does not require any annotated line labels and can therefore generalize to any dataset.
We evaluate our approach against previous line detection and description methods on several multi-view datasets.
arXiv Detail & Related papers (2021-04-07T19:27:17Z)
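One common way to attach descriptors to segments given a dense per-pixel descriptor map, in the spirit of joint detection-and-description networks such as SOLD2, is to sample the map along the segment and pool. The sketch below is illustrative, not the released implementation.

```python
import numpy as np

def sample_bilinear(desc_map, x, y):
    """Bilinearly sample an (H, W, D) descriptor map at subpixel (x, y);
    assumes 0 <= x <= W - 1 and 0 <= y <= H - 1."""
    h, w, _ = desc_map.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    ax, ay = x - x0, y - y0
    return ((1 - ax) * (1 - ay) * desc_map[y0, x0] +
            ax * (1 - ay) * desc_map[y0, x1] +
            (1 - ax) * ay * desc_map[y1, x0] +
            ax * ay * desc_map[y1, x1])

def line_descriptor(desc_map, p_start, p_end, num_samples=8):
    """Describe a segment by average-pooling descriptors sampled along it."""
    ts = np.linspace(0.0, 1.0, num_samples)
    pts = [(1 - t) * np.asarray(p_start) + t * np.asarray(p_end) for t in ts]
    d = np.mean([sample_bilinear(desc_map, x, y) for x, y in pts], axis=0)
    return d / np.linalg.norm(d)
```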
- PlueckerNet: Learn to Register 3D Line Reconstructions [57.20244406275875]
This paper proposes a neural network based method to solve the problem of aligning two partially-overlapped 3D line reconstructions in Euclidean space.
Experiments on both indoor and outdoor datasets show that our method significantly outperforms baselines in registration (rotation and translation) precision.
arXiv Detail & Related papers (2020-12-02T11:31:56Z)
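PlueckerNet operates on Plücker line coordinates. The representation itself, and how a rigid transform acts on it, are standard multi-view geometry and are sketched below with illustrative names.

```python
import numpy as np

def pluecker_from_points(p, q):
    """Plücker coordinates (direction d, moment m) of the line through p, q.
    The moment m = p x d is invariant to the choice of p on the line."""
    d = q - p
    d = d / np.linalg.norm(d)
    return d, np.cross(p, d)

def transform_line(R, t, d, m):
    """Rigid transform of a Plücker line: with p' = R p + t,
    d' = R d and m' = p' x d' = R m + t x (R d)."""
    d_new = R @ d
    return d_new, R @ m + np.cross(t, d_new)
```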
- Line Flow based SLAM [36.10943109853581]
We propose a visual SLAM method that predicts and updates line flows, which represent sequential 2D projections of 3D line segments.
The proposed method achieves state-of-the-art results thanks to the use of line flows.
arXiv Detail & Related papers (2020-09-21T15:55:45Z)
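A line flow, as defined above, is the sequence of 2D projections of one 3D segment across frames. A minimal sketch of generating such a sequence from known poses (illustrative names, not the paper's code):

```python
import numpy as np

def project_point(K, R, t, X):
    """Pinhole projection of 3D point X into a camera with pose (R, t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def line_flow(K, poses, A, B):
    """One 2D segment per frame for the 3D segment (A, B). A line-flow
    front end would predict the next entry of this sequence from motion
    and then update it from fresh detections."""
    return [(project_point(K, R, t, A), project_point(K, R, t, B))
            for R, t in poses]
```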
- Deep Hough Transform for Semantic Line Detection [70.28969017874587]
We focus on a fundamental task of detecting meaningful line structures, a.k.a. semantic lines, in natural scenes.
Previous methods neglect the inherent characteristics of lines, leading to sub-optimal performance.
We propose a one-shot end-to-end learning framework for line detection.
arXiv Detail & Related papers (2020-03-10T13:08:42Z)
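The deep Hough transform builds on classical Hough voting, in which every edge pixel votes for all (theta, rho) lines passing through it; the paper accumulates learned CNN features instead of binary edges. A minimal classical version for reference:

```python
import numpy as np

def hough_lines(edge_map, num_thetas=180, num_rhos=200):
    """Accumulate votes in (theta, rho) space: each edge pixel votes for
    every line x*cos(theta) + y*sin(theta) = rho passing through it."""
    h, w = edge_map.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, num_thetas, endpoint=False)
    acc = np.zeros((num_thetas, num_rhos))
    ys, xs = np.nonzero(edge_map)
    for x, y in zip(xs, ys):
        for i, th in enumerate(thetas):
            rho = x * np.cos(th) + y * np.sin(th)      # rho lies in [-diag, diag]
            j = int((rho + diag) / (2 * diag) * (num_rhos - 1))
            acc[i, j] += 1
    return acc, thetas

edges = np.zeros((32, 32)); edges[10, 5:25] = 1        # a horizontal segment
acc, thetas = hough_lines(edges)
peak = np.unravel_index(np.argmax(acc), acc.shape)     # peaks at theta = pi/2
```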
- Holistically-Attracted Wireframe Parsing [123.58263152571952]
This paper presents a fast and parsimonious parsing method to detect a vectorized wireframe in an input image with a single forward pass.
The proposed method is end-to-end trainable, consisting of three components: (i) line segment and junction proposal generation, (ii) line segment and junction matching, and (iii) line segment and junction verification.
arXiv Detail & Related papers (2020-03-03T17:43:57Z)
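The abstract names the three stages explicitly; the skeleton below traces that propose-match-verify flow in a single forward pass. Every attribute of the `model` object here is a hypothetical placeholder, not the HAWP API.

```python
def parse_wireframe(image, model, threshold=0.5):
    """Single-forward-pass wireframe parsing tracing the abstract's three
    stages; all model.* helpers are hypothetical placeholders."""
    feats = model.backbone(image)                  # shared features, one pass
    # (i) proposal generation: candidate line segments and junctions
    segments = model.propose_segments(feats)
    junctions = model.propose_junctions(feats)
    # (ii) matching: snap segment endpoints to nearby junction proposals
    matched = model.match(segments, junctions)
    # (iii) verification: score matched segments, keep confident ones
    scores = model.verify(feats, matched)
    return [seg for seg, s in zip(matched, scores) if s >= threshold]
```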