Effect of Parameter Optimization on Classical and Learning-based Image
Matching Methods
- URL: http://arxiv.org/abs/2108.08179v1
- Date: Wed, 18 Aug 2021 14:45:32 GMT
- Title: Effect of Parameter Optimization on Classical and Learning-based Image
Matching Methods
- Authors: Ufuk Efe, Kutalmis Gokalp Ince, A. Aydin Alatan
- Abstract summary: We compare classical and learning-based methods by employing mutual nearest neighbor search with ratio test and optimizing the ratio test threshold.
After a fair comparison, the experimental results on HPatches dataset reveal that the performance gap between classical and learning-based methods is not that significant.
A recent approach, DFM, which only uses pre-trained VGG features as descriptors and ratio test, is shown to outperform most of the well-trained learning-based methods.
- Score: 10.014010310188821
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based image matching methods have improved significantly in
recent years. Although these methods are reported to outperform the
classical techniques, the performance of the classical methods is not examined
in detail. In this study, we compare classical and learning-based methods by
employing mutual nearest neighbor search with ratio test and optimizing the
ratio test threshold to achieve the best performance on two different
performance metrics. After a fair comparison, the experimental results on
HPatches dataset reveal that the performance gap between classical and
learning-based methods is not that significant. Throughout the experiments, we
demonstrated that SuperGlue is the state-of-the-art technique for the image
matching problem on HPatches dataset. However, if a single parameter, namely
ratio test threshold, is carefully optimized, a well-known traditional method
SIFT performs quite close to SuperGlue and even outperforms it in terms of mean
matching accuracy (MMA) under 1 and 2 pixel thresholds. Moreover, a recent
approach, DFM, which only uses pre-trained VGG features as descriptors and
ratio test, is shown to outperform most of the well-trained learning-based
methods. Therefore, we conclude that the parameters of any classical method
should be analyzed carefully before comparing against a learning-based
technique.
Related papers
- Different Horses for Different Courses: Comparing Bias Mitigation Algorithms in ML [9.579645248339004]
We show significant variance in fairness achieved by several algorithms and the influence of the learning pipeline on fairness scores.
We highlight that most bias mitigation techniques can achieve comparable performance.
We hope our work encourages future research on how various choices in the lifecycle of developing an algorithm impact fairness.
arXiv Detail & Related papers (2024-11-17T15:17:08Z) - Deep Learning in Medical Image Registration: Magic or Mirage? [18.620739011646123]
We make an explicit correspondence between the distribution of per-pixel intensity and labels, and the performance of classical registration methods.
We show that learning-based methods with weak supervision can perform high-fidelity intensity and label registration, which is not possible with classical methods.
arXiv Detail & Related papers (2024-08-11T18:20:08Z) - Revisiting and Maximizing Temporal Knowledge in Semi-supervised Semantic Segmentation [7.005068872406135]
Mean Teacher- and co-training-based approaches are employed to mitigate confirmation bias and coupling problems.
These approaches frequently involve complex training pipelines and a substantial computational burden.
We propose a PrevMatch framework that effectively mitigates the limitations by maximizing the utilization of the temporal knowledge obtained during the training process.
arXiv Detail & Related papers (2024-05-31T03:54:59Z) - On the efficiency of Stochastic Quasi-Newton Methods for Deep Learning [0.0]
We study the behaviour of quasi-Newton training algorithms for deep neural networks.
We show that quasi-Newton methods are efficient and, in some instances, able to outperform the well-known first-order Adam optimizer.
arXiv Detail & Related papers (2022-05-18T20:53:58Z) - An Empirical Analysis of Recurrent Learning Algorithms In Neural Lossy
Image Compression Systems [73.48927855855219]
Recent advances in deep learning have resulted in image compression algorithms that outperform JPEG and JPEG 2000 on the standard Kodak benchmark.
In this paper, we perform the first large-scale comparison of recent state-of-the-art hybrid neural compression algorithms.
arXiv Detail & Related papers (2022-01-27T19:47:51Z) - Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework by a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - SIMPLE: SIngle-network with Mimicking and Point Learning for Bottom-up
Human Pose Estimation [81.03485688525133]
We propose a novel multi-person pose estimation framework, SIngle-network with Mimicking and Point Learning for Bottom-up Human Pose Estimation (SIMPLE).
Specifically, in the training process, we enable SIMPLE to mimic the pose knowledge from the high-performance top-down pipeline.
Besides, SIMPLE formulates human detection and pose estimation as a unified point learning framework so that they complement each other in a single network.
arXiv Detail & Related papers (2021-04-06T13:12:51Z) - Learning to Select Base Classes for Few-shot Classification [96.92372639495551]
We use the Similarity Ratio as an indicator for the generalization performance of a few-shot model.
We then formulate the base class selection problem as a submodular optimization problem over Similarity Ratio.
arXiv Detail & Related papers (2020-04-01T09:55:18Z) - Computed Tomography Reconstruction Using Deep Image Prior and Learned
Reconstruction Methods [0.8263596314702016]
In this work, we investigate the application of deep learning methods for computed tomography in the context of having a low-data regime.
We find that the learned primal-dual has an outstanding performance in terms of reconstruction quality and data efficiency.
The proposed methods improve the state-of-the-art results in the low-data regime.
arXiv Detail & Related papers (2020-03-10T21:03:34Z) - Clustering Binary Data by Application of Combinatorial Optimization
Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
From a set of 16 data tables generated by a quasi-Monte Carlo experiment, a comparison is performed for one of the aggregation criteria using L1 dissimilarity against hierarchical clustering and a version of k-means: partitioning around medoids (PAM).
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.