LOF: Structure-Aware Line Tracking based on Optical Flow
- URL: http://arxiv.org/abs/2109.08466v1
- Date: Fri, 17 Sep 2021 11:09:11 GMT
- Title: LOF: Structure-Aware Line Tracking based on Optical Flow
- Authors: Meixiang Quan, Zheng Chai, Xiao Liu
- Abstract summary: We propose a structure-aware Line tracking algorithm based entirely on Optical Flow (LOF).
The proposed LOF outperforms state-of-the-art methods in line tracking accuracy, robustness, and efficiency.
- Score: 8.856222186351445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lines provide significantly richer geometric structural information about
the environment than points, so lines are widely used in recent Visual Odometry
(VO) works. Since VO with lines uses line tracking results for localization and
mapping, line tracking is a crucial component of VO. Although state-of-the-art
line tracking methods have made great progress, they still depend heavily on
line detection or on predicted line segments. To relieve these dependencies and
track line segments completely, accurately, and robustly at higher computational
efficiency, we propose a structure-aware Line tracking algorithm based entirely
on Optical Flow (LOF). First, we propose a gradient-based strategy to sample
pixels on lines that are suitable for line optical flow calculation. Then, to
align each line by fully exploiting the structural relationship between the
points sampled on it, while effectively removing the influence of sampled points
occluded by other objects, we propose a two-step structure-aware line segment
alignment method. Furthermore, we propose a line refinement method to refine the
orientation, position, and endpoints of the aligned line segments. Extensive
experimental results demonstrate that the proposed LOF outperforms
state-of-the-art methods in line tracking accuracy, robustness, and efficiency,
and also improves the localization accuracy and robustness of VO systems with
lines.
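For readers who want a concrete picture of the sample-and-track idea, the sketch below samples gradient-rich points on a line segment, tracks them with OpenCV's pyramidal Lucas-Kanade optical flow, and re-fits the segment in the next frame. This is a minimal illustration under stated assumptions, not the authors' LOF implementation: the paper's gradient-based sampling is simplified to a gradient-magnitude threshold, and the two-step structure-aware alignment and refinement are replaced by a single robust line fit; all function names here are illustrative.

```python
# Minimal sketch of a sample-and-track line tracker (NOT the paper's LOF:
# the structure-aware alignment and refinement steps are approximated by a
# robust line fit). Assumes OpenCV and NumPy.
import cv2
import numpy as np

def sample_line_points(gray, p0, p1, n_samples=16):
    """Sample points along the segment p0->p1, keeping those with strong
    image gradient (a crude stand-in for gradient-based sampling)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = (1.0 - ts)[:, None] * p0 + ts[:, None] * p1            # (N, 2), xy order
    xi = np.clip(pts[:, 0].round().astype(int), 0, gray.shape[1] - 1)
    yi = np.clip(pts[:, 1].round().astype(int), 0, gray.shape[0] - 1)
    keep = mag[yi, xi] > np.median(mag)                           # keep well-textured samples
    return pts[keep].astype(np.float32)

def track_line(prev_gray, next_gray, p0, p1):
    """Track one line segment from prev_gray to next_gray; returns new endpoints."""
    pts = sample_line_points(prev_gray, np.asarray(p0, float), np.asarray(p1, float))
    if len(pts) < 2:
        return None
    # Per-point pyramidal Lucas-Kanade optical flow.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts.reshape(-1, 1, 2), None,
        winSize=(21, 21), maxLevel=3)
    good = nxt[status.ravel() == 1].reshape(-1, 2)
    if len(good) < 2:
        return None
    # Robust line fit over the tracked points (substitute for the paper's
    # structure-aware alignment and refinement).
    vx, vy, x0, y0 = cv2.fitLine(good, cv2.DIST_HUBER, 0, 0.01, 0.01).ravel()
    d = np.array([vx, vy])
    c = np.array([x0, y0])
    t = (good - c) @ d                                            # project onto the fitted line
    return c + t.min() * d, c + t.max() * d                       # trimmed endpoints
```

A full tracker along these lines would additionally propagate line identities across frames, check forward-backward flow consistency, and reason about occluded sample points, which is what the paper's two-step structure-aware alignment is designed to handle.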
Related papers
- PAPL-SLAM: Principal Axis-Anchored Monocular Point-Line SLAM [20.228993972678595]
In this paper, we address the utilization of line structural information and the optimization of lines in point-line SLAM systems.
We anchor lines with similar directions to a principal axis and optimize them with $n+2$ parameters for $n$ lines, solving both problems together.
Our method considers scene structural information and can be easily extended to different world hypotheses.
arXiv Detail & Related papers (2024-10-16T07:44:56Z) - Neural Gradient Learning and Optimization for Oriented Point Normal Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability in local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z) - Level-line Guided Edge Drawing for Robust Line Segment Detection [38.21854942764346]
This paper proposes level-line guided edge drawing for robust line segment detection (GEDRLSD).
The level-line information provides potential directions for edge tracking, which can serve as a guideline for accurate edge drawing.
Numerical experiments show the superiority of the proposed GEDRLSD algorithm compared with state-of-the-art methods.
arXiv Detail & Related papers (2023-05-10T04:03:59Z) - IDLS: Inverse Depth Line based Visual-Inertial SLAM [9.38589798999922]
Inverse Depth Line SLAM (IDLS) is proposed to track line features in SLAM accurately and efficiently.
IDLS is extensively evaluated in multiple perceptually-challenging datasets.
arXiv Detail & Related papers (2023-04-23T20:53:05Z) - 3D Line Mapping Revisited [86.13455066577657]
LIMAP is a library for 3D line mapping that robustly and efficiently creates 3D line maps from multi-view imagery.
Our code integrates seamlessly with existing point-based Structure-from-Motion methods.
Our robust 3D line maps also open up new research directions.
arXiv Detail & Related papers (2023-03-30T16:14:48Z) - DeepLSD: Line Segment Detection and Refinement with Deep Image Gradients [105.25109274550607]
Line segments are increasingly used in vision tasks.
Traditional line detectors based on the image gradient are extremely fast and accurate, but lack robustness in noisy images and challenging conditions.
We propose to combine traditional and learned approaches to get the best of both worlds: an accurate and robust line detector.
arXiv Detail & Related papers (2022-12-15T12:36:49Z) - ELSD: Efficient Line Segment Detector and Descriptor [9.64386089593887]
We present the novel Efficient Line Segment Detector and Descriptor (ELSD) to simultaneously detect line segments and extract their descriptors in an image.
ELSD provides the essential line features to the higher-level tasks like SLAM and image matching in real time.
In the experiments, the proposed ELSD achieves state-of-the-art performance on the Wireframe and YorkUrban datasets.
arXiv Detail & Related papers (2021-04-29T08:53:03Z) - SOLD2: Self-supervised Occlusion-aware Line Description and Detection [95.8719432775724]
We introduce the first joint detection and description of line segments in a single deep network.
Our method does not require any annotated line labels and can therefore generalize to any dataset.
We evaluate our approach against previous line detection and description methods on several multi-view datasets.
arXiv Detail & Related papers (2021-04-07T19:27:17Z) - Deep Hough Transform for Semantic Line Detection [70.28969017874587]
We focus on a fundamental task of detecting meaningful line structures, a.k.a. semantic lines, in natural scenes.
Previous methods neglect the inherent characteristics of lines, leading to sub-optimal performance.
We propose a one-shot end-to-end learning framework for line detection.
arXiv Detail & Related papers (2020-03-10T13:08:42Z) - Holistically-Attracted Wireframe Parsing [123.58263152571952]
This paper presents a fast and parsimonious parsing method to detect a vectorized wireframe in an input image with a single forward pass.
The proposed method is end-to-end trainable, consisting of three components: (i) line segment and junction proposal generation, (ii) line segment and junction matching, and (iii) line segment and junction verification.
arXiv Detail & Related papers (2020-03-03T17:43:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.