Sparse Optical Flow-Based Line Feature Tracking
- URL: http://arxiv.org/abs/2204.03331v1
- Date: Thu, 7 Apr 2022 10:00:02 GMT
- Title: Sparse Optical Flow-Based Line Feature Tracking
- Authors: Qiang Fu, Hongshan Yu, Islam Ali, Hong Zhang
- Abstract summary: We propose a novel sparse optical flow (SOF)-based line feature tracking method for the camera pose estimation problem.
This method is inspired by the point-based SOF algorithm and developed based on the observation that two adjacent images satisfy the brightness constancy assumption.
Experiments on several public benchmark datasets show that our method yields highly competitive accuracy with a clear advantage in speed.
- Score: 7.166068174681434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we propose a novel sparse optical flow (SOF)-based line feature
tracking method for the camera pose estimation problem. This method is inspired
by the point-based SOF algorithm and developed based on the observation that two
adjacent images in time-varying image sequences satisfy the brightness constancy assumption.
Based on this observation, we re-define the goal of line feature tracking:
track two endpoints of a line feature instead of the entire line based on gray
value matching instead of descriptor matching. To achieve this goal, an
efficient two endpoint tracking (TET) method is presented: first, describe a
given line feature with its two endpoints; next, track the two endpoints based
on SOF to obtain two new tracked endpoints by minimizing a pixel-level
grayscale residual function; finally, connect the two tracked endpoints to
generate a new line feature. The correspondence is established between the
given and the new line feature. Compared with current descriptor-based methods,
our TET method does not need to compute descriptors or detect line features
repeatedly, and thus has a clear computational advantage.
Experiments on several public benchmark datasets show that our method yields
highly competitive accuracy with a clear advantage in speed.
Related papers
- Dense Optical Tracking: Connecting the Dots [82.79642869586587]
DOT is a novel, simple and efficient method for solving the problem of point tracking in a video.
We show that DOT is significantly more accurate than current optical flow techniques, outperforms sophisticated "universal trackers" like OmniMotion, and is on par with, or better than, the best point tracking algorithms like CoTracker.
arXiv Detail & Related papers (2023-12-01T18:59:59Z)
- IDLS: Inverse Depth Line based Visual-Inertial SLAM [9.38589798999922]
Inverse Depth Line SLAM (IDLS) is proposed to track the line features in SLAM in an accurate and efficient way.
IDLS is extensively evaluated in multiple perceptually-challenging datasets.
arXiv Detail & Related papers (2023-04-23T20:53:05Z)
- LOF: Structure-Aware Line Tracking based on Optical Flow [8.856222186351445]
We propose a structure-aware line tracking algorithm based entirely on Optical Flow (LOF).
The proposed LOF outperforms state-of-the-art methods in line tracking accuracy, robustness, and efficiency.
arXiv Detail & Related papers (2021-09-17T11:09:11Z)
- DFM: A Performance Baseline for Deep Feature Matching [10.014010310188821]
The proposed method uses a pre-trained VGG architecture as a feature extractor and does not require any additional training to improve matching.
Our algorithm achieves overall scores of 0.57 and 0.80 in terms of Mean Matching Accuracy (MMA) at 1-pixel and 2-pixel thresholds, respectively, on the HPatches dataset.
arXiv Detail & Related papers (2021-06-14T22:55:06Z)
- ABCNet v2: Adaptive Bezier-Curve Network for Real-time End-to-end Text Spotting [108.93803186429017]
End-to-end text-spotting aims to integrate detection and recognition in a unified framework.
Here, we tackle end-to-end text spotting by presenting Adaptive Bezier Curve Network v2 (ABCNet v2).
Our main contributions are four-fold: 1) For the first time, we adaptively fit arbitrarily-shaped text with a parameterized Bezier curve, which, compared with segmentation-based methods, provides not only structured output but also a controllable representation.
Comprehensive experiments conducted on various bilingual (English and Chinese) benchmark datasets demonstrate that ABCNet v2 can achieve state-of-the-art performance.
arXiv Detail & Related papers (2021-05-08T07:46:55Z)
- SOLD2: Self-supervised Occlusion-aware Line Description and Detection [95.8719432775724]
We introduce the first joint detection and description of line segments in a single deep network.
Our method does not require any annotated line labels and can therefore generalize to any dataset.
We evaluate our approach against previous line detection and description methods on several multi-view datasets.
arXiv Detail & Related papers (2021-04-07T19:27:17Z)
- Avoiding Degeneracy for Monocular Visual SLAM with Point and Line Features [1.5938324336156293]
This paper presents a degeneracy avoidance method for a point and line based visual SLAM algorithm.
A novel structural constraint is proposed to avoid the degeneracy problem.
It is proven that our method yields more accurate localization as well as mapping results.
arXiv Detail & Related papers (2021-03-02T06:41:44Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
- RANSAC-Flow: generic two-stage image alignment [53.11926395028508]
We show that a simple unsupervised approach performs surprisingly well across a range of tasks.
Despite its simplicity, our method shows competitive results on a range of tasks and datasets.
arXiv Detail & Related papers (2020-04-03T12:37:58Z)
- Holistically-Attracted Wireframe Parsing [123.58263152571952]
This paper presents a fast and parsimonious parsing method to detect a vectorized wireframe in an input image with a single forward pass.
The proposed method is end-to-end trainable, consisting of three components: (i) line segment and junction proposal generation, (ii) line segment and junction matching, and (iii) line segment and junction verification.
arXiv Detail & Related papers (2020-03-03T17:43:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.