A Feature Matching Method Based on Multi-Level Refinement Strategy
- URL: http://arxiv.org/abs/2402.13488v2
- Date: Sun, 25 Feb 2024 08:18:46 GMT
- Title: A Feature Matching Method Based on Multi-Level Refinement Strategy
- Authors: Shaojie Zhang, Yinghui Wang, Jiaxing Ma, Wei Li, Jinlong Yang, Tao
Yan, Yukai Wang, Liangyi Huang, Mingfeng Wang, and Ibragim R. Atadjanov
- Abstract summary: Experimental results demonstrate that the KTGP-ORB method reduces the error by an average of 29.92% compared to the ORB algorithm in complex scenes with illumination variations and blur.
- Score: 11.300618381337777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature matching is a fundamental and crucial process in visual SLAM, and
precision has always been a challenging issue in feature matching. In this
paper, based on a multi-level fine matching strategy, we propose a new feature
matching method called KTGP-ORB. This method utilizes the similarity of local
appearance in the Hamming space generated by feature descriptors to establish
initial correspondences. It combines the constraint of local image motion
smoothness, uses the GMS algorithm to enhance the accuracy of initial matches,
and finally employs the PROSAC algorithm to optimize matches, achieving precise
matching based on global grayscale information in Euclidean space. Experimental
results demonstrate that the KTGP-ORB method reduces the error by an average of
29.92% compared to the ORB algorithm in complex scenes with illumination
variations and blur.
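The three-stage pipeline the abstract describes (initial correspondences in Hamming space, a motion-smoothness filter, then robust refinement) can be illustrated with a minimal self-contained sketch. This is not the authors' KTGP-ORB implementation: the toy integer descriptors, the neighbourhood-support filter standing in for GMS, and all thresholds below are illustrative assumptions. A practical pipeline would use OpenCV's ORB detector, `matchGMS` from opencv-contrib, and a PROSAC-style robust estimator instead.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def initial_matches(desc1, desc2):
    """Stage 1: nearest-neighbour matching in Hamming space."""
    matches = []
    for i, d1 in enumerate(desc1):
        j = min(range(len(desc2)), key=lambda j: hamming(d1, desc2[j]))
        matches.append((i, j))
    return matches

def smoothness_filter(matches, kp1, kp2, radius=20.0, tol=3.0, min_support=2):
    """Stage 2 (GMS-like motion-smoothness constraint): keep a match only
    if enough nearby matches share a similar displacement vector."""
    kept = []
    for i, j in matches:
        dx = kp2[j][0] - kp1[i][0]
        dy = kp2[j][1] - kp1[i][1]
        support = 0
        for i2, j2 in matches:
            if i2 == i:
                continue
            # Is this a neighbouring match in the first image?
            if (abs(kp1[i2][0] - kp1[i][0]) <= radius
                    and abs(kp1[i2][1] - kp1[i][1]) <= radius):
                dx2 = kp2[j2][0] - kp1[i2][0]
                dy2 = kp2[j2][1] - kp1[i2][1]
                # Does its displacement agree with ours?
                if abs(dx2 - dx) <= tol and abs(dy2 - dy) <= tol:
                    support += 1
        if support >= min_support:
            kept.append((i, j))
    return kept
```

With three clustered keypoints translated by a common offset and one isolated outlier, `initial_matches` pairs descriptors by Hamming distance and `smoothness_filter` discards the match that no neighbouring displacement supports.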
Related papers
- Shuffled Linear Regression via Spectral Matching [6.24954299842136]
Shuffled linear regression seeks to estimate latent features through a linear transformation.
This problem extends traditional least-squares (LS) and Least Absolute Shrinkage and Selection Operator (LASSO) approaches.
We propose a spectral matching method that efficiently resolves permutations.
arXiv Detail & Related papers (2024-09-30T16:26:40Z)
- Stabilized Proximal-Point Methods for Federated Optimization [20.30761752651984]
Best-known communication complexity among non-accelerated algorithms is achieved by DANE, a distributed proximal-point algorithm.
Inspired by the hybrid-projection proximal-point method, we propose a novel distributed algorithm S-DANE.
We show that S-DANE achieves the best-known communication complexity while still enjoying good local computation efficiency as DANE.
arXiv Detail & Related papers (2024-07-09T17:56:29Z)
- Variable Substitution and Bilinear Programming for Aligning Partially Overlapping Point Sets [48.1015832267945]
This research presents a method that handles partial overlap by minimizing the objective function of the RPM algorithm.
A branch-and-bound (BnB) algorithm is devised, which solely branches over the parameters, thereby boosting convergence rate.
Empirical evaluations demonstrate better robustness of the proposed methodology against non-rigid deformation, positional noise, and outliers, when compared with prevailing state-of-the-art approaches.
arXiv Detail & Related papers (2024-05-14T13:28:57Z)
- An Error-Matching Exclusion Method for Accelerating Visual SLAM [11.300618381337777]
This paper proposes an accelerated method for Visual SLAM by integrating GMS with RANSAC for the removal of mismatched features.
Experimental results demonstrate that the proposed method achieves a comparable accuracy to the original GMS-RANSAC while reducing the average runtime by 24.13%.
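The RANSAC side of the GMS-RANSAC combination summarized above can be sketched minimally. The pure-translation motion model, iteration count, and inlier tolerance below are illustrative simplifications; the actual method (and OpenCV's robust estimators) fit homographies or fundamental matrices rather than a single displacement.

```python
import random

def ransac_translation(matches, kp1, kp2, iters=200, tol=3.0, seed=0):
    """Minimal RANSAC under a pure-translation model: repeatedly pick one
    match, hypothesise its displacement as the global motion, and keep the
    hypothesis with the most inliers."""
    rng = random.Random(seed)  # seeded for reproducibility
    best_inliers = []
    for _ in range(iters):
        i, j = matches[rng.randrange(len(matches))]
        dx = kp2[j][0] - kp1[i][0]
        dy = kp2[j][1] - kp1[i][1]
        inliers = [(a, b) for a, b in matches
                   if abs(kp2[b][0] - kp1[a][0] - dx) <= tol
                   and abs(kp2[b][1] - kp1[a][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

Given three matches sharing a common displacement and one mismatch, any hypothesis drawn from the consistent majority collects three inliers, so the mismatch is excluded.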
arXiv Detail & Related papers (2024-02-22T07:22:45Z)
- Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z)
- SIFT Matching by Context Exposed [7.99536002595393]
This paper investigates how to step up local image descriptor matching by exploiting matching context information.
A new matching strategy and a novel local spatial filter, named respectively blob matching and Delaunay Triangulation Matching (DTM) are devised.
DTM is comparable or better than the state-of-the-art in terms of matching accuracy and robustness, especially for non-planar scenes.
arXiv Detail & Related papers (2021-06-17T15:10:59Z)
- Learning Sampling Policy for Faster Derivative Free Optimization [100.27518340593284]
We propose a new reinforcement learning based ZO algorithm (ZO-RL) with learning the sampling policy for generating the perturbations in ZO optimization instead of using random sampling.
Our results show that our ZO-RL algorithm can effectively reduce the variances of ZO gradient by learning a sampling policy, and converge faster than existing ZO algorithms in different scenarios.
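The random-sampling baseline that ZO-RL improves upon can be sketched as a standard zeroth-order gradient estimator: average finite differences of the objective along random directions, then descend. The hyperparameters and the quadratic test function below are illustrative assumptions, not values from the paper.

```python
import random

def zo_gradient(f, x, rng, mu=1e-3, samples=20):
    """Zeroth-order gradient estimate: finite differences of f along random
    Gaussian directions (the random-sampling baseline ZO-RL replaces with a
    learned sampling policy)."""
    n = len(x)
    grad = [0.0] * n
    fx = f(x)
    for _ in range(samples):
        u = [rng.gauss(0.0, 1.0) for _ in range(n)]
        fd = (f([xi + mu * ui for xi, ui in zip(x, u)]) - fx) / mu
        for k in range(n):
            grad[k] += fd * u[k] / samples
    return grad

def zo_descent(f, x, lr=0.1, steps=100, seed=0):
    """Gradient descent driven purely by function evaluations."""
    rng = random.Random(seed)  # seeded for reproducibility
    for _ in range(steps):
        g = zo_gradient(f, x, rng)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

The variance of this estimator depends directly on how the perturbation directions are sampled, which is the quantity the learned policy in ZO-RL is designed to reduce.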
arXiv Detail & Related papers (2021-04-09T14:50:59Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Robust Learning Rate Selection for Stochastic Optimization via Splitting Diagnostic [5.395127324484869]
SplitSGD is a new dynamic learning schedule for optimization.
The method decreases the learning rate for better adaptation to the local geometry of the objective function.
It incurs essentially no additional computational cost over standard SGD.
arXiv Detail & Related papers (2019-10-18T19:38:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.