Avoiding Degeneracy for Monocular Visual SLAM with Point and Line
Features
- URL: http://arxiv.org/abs/2103.01501v1
- Date: Tue, 2 Mar 2021 06:41:44 GMT
- Title: Avoiding Degeneracy for Monocular Visual SLAM with Point and Line
Features
- Authors: Hyunjun Lim, Yeeun Kim, Kwangik Jung, Sumin Hu, and Hyun Myung
- Abstract summary: This paper presents a degeneracy avoidance method for a point and line based visual SLAM algorithm.
A novel structural constraint is proposed to avoid the degeneracy problem.
It is shown that our method yields more accurate localization and mapping results.
- Score: 1.5938324336156293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a degeneracy avoidance method for a point and line
based visual SLAM algorithm is proposed. Visual SLAM predominantly uses point
features. However, point features lack robustness in low-texture environments
and under illumination changes. Therefore, line features are used to
compensate for the weaknesses of point features. In addition, point features
poorly represent structures discernible to the naked eye, so mapped point
features are hard to recognize. To overcome these limitations, line features
were actively employed in previous studies. However, degeneracy arises in the
process of using line features, and this paper attempts to solve that problem.
First, a simple method to identify degenerate lines is presented. In addition,
a novel structural constraint is proposed to avoid the degeneracy problem.
Finally, a point and line based monocular SLAM system using a robust
optical-flow-based line tracking method is implemented. The results are
verified in experiments on the EuRoC dataset and compared with other
state-of-the-art algorithms. It is shown that our method yields more accurate
localization as well as mapping results.
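The abstract does not spell out how degenerate lines are identified, so the
following is only a plausible illustration: in two-view line triangulation, a
3D line is recovered as the intersection of the two back-projected planes, and
the problem becomes ill-conditioned when those planes are near-parallel (e.g.,
when the line lies close to the epipolar plane). All function names and the
threshold below are assumptions for the sketch, not the paper's API.

```python
# Minimal sketch of a two-view line-degeneracy check (illustrative only;
# not the paper's actual method or code).
import numpy as np

def backprojected_plane_normal(K, R_wc, line_2d):
    """World-frame normal of the plane through the camera center and a
    2D image line l = (a, b, c), where a*u + b*v + c = 0 in pixels."""
    n_c = K.T @ np.asarray(line_2d, dtype=float)  # normal in camera frame
    return R_wc @ n_c                             # rotate to world frame

def is_degenerate_line(K, R1_wc, R2_wc, l1, l2, angle_thresh_deg=1.0):
    """The 3D line is the intersection of two back-projected planes; if
    the planes are near-parallel, the intersection is ill-conditioned."""
    n1 = backprojected_plane_normal(K, R1_wc, l1)
    n2 = backprojected_plane_normal(K, R2_wc, l2)
    n1 /= np.linalg.norm(n1)
    n2 /= np.linalg.norm(n2)
    # Angle between the plane normals (0 or 180 degrees both mean parallel).
    sin_angle = np.clip(np.linalg.norm(np.cross(n1, n2)), 0.0, 1.0)
    return np.degrees(np.arcsin(sin_angle)) < angle_thresh_deg
```

The same check fires under (near-)pure rotation, where the baseline vanishes
and line triangulation is impossible regardless of the line's direction.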
Related papers
- Don't Miss Out on Novelty: Importance of Novel Features for Deep Anomaly Detection [64.21963650519312]
Anomaly Detection (AD) is a critical task that involves identifying observations that do not conform to a learned model of normality.
We propose a novel approach to AD using explainability to capture such novel features as unexplained observations in the input space.
Our approach establishes a new state-of-the-art across multiple benchmarks, handling diverse anomaly types.
arXiv Detail & Related papers (2023-10-01T21:24:05Z)
- IDLS: Inverse Depth Line based Visual-Inertial SLAM [9.38589798999922]
Inverse Depth Line SLAM (IDLS) is proposed to track line features in SLAM accurately and efficiently.
IDLS is extensively evaluated on multiple perceptually challenging datasets (an illustrative inverse-depth sketch appears after this list).
arXiv Detail & Related papers (2023-04-23T20:53:05Z)
- Point-SLAM: Dense Neural Point Cloud-based SLAM [61.96492935210654]
We propose a dense neural simultaneous localization and mapping (SLAM) approach for monocular RGBD input.
We demonstrate that both tracking and mapping can be performed with the same point-based neural scene representation.
arXiv Detail & Related papers (2023-04-09T16:48:26Z)
- Sparse Optical Flow-Based Line Feature Tracking [7.166068174681434]
We propose a novel sparse optical flow (SOF)-based line feature tracking method for the camera pose estimation problem.
The method is inspired by point-based SOF algorithms and builds on the observation that two adjacent images satisfy brightness constancy.
Experiments on several public benchmark datasets show that our method yields highly competitive accuracy with a clear advantage in speed (see the tracking sketch after this list).
arXiv Detail & Related papers (2022-04-07T10:00:02Z)
- SOLD2: Self-supervised Occlusion-aware Line Description and Detection [95.8719432775724]
We introduce the first joint detection and description of line segments in a single deep network.
Our method does not require any annotated line labels and can therefore generalize to any dataset.
We evaluate our approach against previous line detection and description methods on several multi-view datasets.
arXiv Detail & Related papers (2021-04-07T19:27:17Z)
- PL-VINS: Real-Time Monocular Visual-Inertial SLAM with Point and Line Features [11.990163046319974]
This paper presents PL-VINS, a real-time optimization-based monocular VINS method with point and line features.
Experiments in a public benchmark dataset show that the localization error of our method is 12-16% less than that of VINS-Mono at the same pose update frequency.
arXiv Detail & Related papers (2020-09-16T04:27:33Z)
- LiPo-LCD: Combining Lines and Points for Appearance-based Loop Closure Detection [1.6758573326215689]
LiPo-LCD is a novel appearance-based loop closure detection method.
It retrieves previously seen images using a late fusion strategy.
A simple but effective mechanism based on the concept of islands groups similar images that are close in time, reducing the candidate search effort.
arXiv Detail & Related papers (2020-09-03T10:43:16Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- Offline detection of change-points in the mean for stationary graph signals [55.98760097296213]
We propose an offline method that relies on the concept of graph signal stationarity.
Our detector comes with a proven non-asymptotic oracle inequality.
arXiv Detail & Related papers (2020-06-18T15:51:38Z)
- Deep Hough Transform for Semantic Line Detection [70.28969017874587]
We focus on a fundamental task of detecting meaningful line structures, a.k.a. semantic lines, in natural scenes.
Previous methods neglect the inherent characteristics of lines, leading to sub-optimal performance.
We propose a one-shot end-to-end learning framework for line detection (a classical Hough baseline is sketched after this list).
arXiv Detail & Related papers (2020-03-10T13:08:42Z)
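The IDLS entry above gives no equations, so the following is a generic
inverse-depth endpoint parameterization in the same spirit; the function name,
the intrinsics, and the sample values are illustrative assumptions, not taken
from the paper.

```python
# Sketch: representing a line endpoint by (u, v, rho), with rho = 1/depth.
import numpy as np

def endpoint_from_inverse_depth(K, u, v, rho):
    """Recover a 3D endpoint in the camera frame from pixel (u, v) and
    inverse depth rho = 1/z; small rho (distant points) stays numerically
    well-behaved, which is the usual motivation for inverse depth."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray at unit depth
    return ray / rho                                # scale to depth 1/rho

# A tracked line segment can then be stored as two (u, v, rho) triplets:
K = np.array([[458.0,   0.0, 367.0],
              [  0.0, 457.0, 248.0],
              [  0.0,   0.0,   1.0]])               # EuRoC-like intrinsics
p1 = endpoint_from_inverse_depth(K, 100.0, 120.0, rho=0.25)  # depth 4 m
p2 = endpoint_from_inverse_depth(K, 180.0, 125.0, rho=0.20)  # depth 5 m
```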
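For the sparse optical-flow line tracking entry, one plausible reading (not
the paper's code) is to track points sampled along the segment with pyramidal
Lucas-Kanade, relying on brightness constancy, and then refit a line to the
surviving tracks; `track_line` and all parameters below are illustrative.

```python
# Sketch: line tracking via point tracks + robust line refit (illustrative).
import cv2
import numpy as np

def track_line(prev_gray, cur_gray, p0, p1, n_samples=10):
    """p0, p1: (x, y) endpoints of the line in the previous frame."""
    # Sample points uniformly along the segment.
    ts = np.linspace(0.0, 1.0, n_samples, dtype=np.float32)
    pts = (1 - ts)[:, None] * np.float32(p0) + ts[:, None] * np.float32(p1)
    pts = pts.reshape(-1, 1, 2)

    # Brightness constancy lets pyramidal LK find each sample again.
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = nxt[status.ravel() == 1].reshape(-1, 2)
    if len(good) < 2:
        return None  # track lost

    # Robustly refit a line to the surviving samples.
    vx, vy, x0, y0 = cv2.fitLine(good, cv2.DIST_HUBER, 0, 0.01, 0.01).ravel()
    return (vx, vy), (x0, y0)  # direction vector and a point on the line
```

Refitting after tracking keeps the representation a line even when individual
point tracks drift slightly off the true segment.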
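As background for the Deep Hough Transform entry, this is the classical
(non-learned) Hough pipeline that the learned version builds on, using
standard OpenCV calls; the Canny and Hough thresholds are illustrative, not
values from the paper.

```python
# Sketch: classical probabilistic Hough line detection (baseline only).
import cv2
import numpy as np

def detect_lines(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # edge map feeds the Hough transform
    # Each edge pixel votes in (rho, theta) space; peaks become segments.
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                           minLineLength=30, maxLineGap=10)
```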