FFD: Fast Feature Detector
- URL: http://arxiv.org/abs/2012.00859v1
- Date: Tue, 1 Dec 2020 21:56:35 GMT
- Title: FFD: Fast Feature Detector
- Authors: Morteza Ghahremani and Yonghuai Liu and Bernard Tiddeman
- Abstract summary: We show that robust and accurate keypoints exist in the specific scale-space domain.
It is proved that setting the scale-space pyramid's blurring ratio and smoothness to 2 and 0.627, respectively, facilitates the detection of reliable keypoints.
- Score: 22.51804239092462
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scale-invariance, good localization and robustness to noise and distortions
are the main properties that a local feature detector should possess. Most
existing local feature detectors find excessive unstable feature points that
increase the number of keypoints to be matched and the computational time of
the matching step. In this paper, we show that robust and accurate keypoints
exist in the specific scale-space domain. To this end, we first formulate the
superimposition problem into a mathematical model and then derive a closed-form
solution for multiscale analysis. The model is formulated via
difference-of-Gaussian (DoG) kernels in the continuous scale-space domain, and
it is proved that setting the scale-space pyramid's blurring ratio and
smoothness to 2 and 0.627, respectively, facilitates the detection of reliable
keypoints. For the applicability of the proposed model to discrete images, we
discretize it using the undecimated wavelet transform and the cubic spline
function. Theoretically, the complexity of our method is less than 5% of that
of the popular baseline Scale Invariant Feature Transform (SIFT). Extensive
experimental results show the superiority of the proposed feature detector over
the existing representative hand-crafted and learning-based techniques in
accuracy and computational time. The code and supplementary materials can be
found at https://github.com/mogvision/FFD.
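The DoG construction at the heart of the abstract can be sketched as follows. This is a minimal illustration, not FFD's actual implementation: `base_sigma`, `ratio`, and the function names are assumed names that map the abstract's smoothness (0.627) and blurring ratio (2) onto a generic scale-space pyramid.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_pyramid(image, base_sigma=0.627, ratio=2.0, n_levels=4):
    """Build a difference-of-Gaussian (DoG) stack.

    Successive blur scales grow by `ratio` (the blurring ratio of 2 in
    the abstract); `base_sigma` plays the role of the smoothness
    parameter 0.627. Illustrative sketch, not FFD's API.
    """
    sigmas = [base_sigma * ratio ** i for i in range(n_levels + 1)]
    blurred = [gaussian_filter(image.astype(np.float64), s) for s in sigmas]
    # DoG level i is the difference of two adjacent blur levels.
    return [blurred[i + 1] - blurred[i] for i in range(n_levels)]

def local_extrema(dog, thresh=0.01):
    """Crude keypoint candidates: pixels that are local maxima of |DoG|
    in a 3x3 neighbourhood and exceed a magnitude threshold."""
    mag = np.abs(dog)
    peaks = (mag == maximum_filter(mag, size=3)) & (mag > thresh)
    return np.argwhere(peaks)
```

A single bright spot, for example, yields its strongest DoG response at the spot's location, which `local_extrema` then reports as a candidate keypoint.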
Related papers
- Learning to Make Keypoints Sub-Pixel Accurate [80.55676599677824]
This work addresses the challenge of sub-pixel accuracy in detecting 2D local features.
We propose a novel network that enhances any detector with sub-pixel precision by learning an offset vector for detected features.
arXiv Detail & Related papers (2024-07-16T12:39:56Z)
- Improving Transformer-based Image Matching by Cascaded Capturing Spatially Informative Keypoints [44.90917854990362]
We propose a transformer-based cascade matching model -- Cascade feature Matching TRansformer (CasMTR)
We use a simple yet effective Non-Maximum Suppression (NMS) post-process to filter keypoints through the confidence map.
CasMTR achieves state-of-the-art performance in indoor and outdoor pose estimation as well as visual localization.
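Confidence-map NMS of the kind mentioned above can be sketched generically; the window size, threshold, and function name below are illustrative assumptions, not CasMTR's actual post-process.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def nms_keypoints(conf, window=5, thresh=0.5):
    """Keep keypoints whose confidence is the local maximum in a
    window x window neighbourhood and exceeds `thresh`.

    Generic confidence-map non-maximum suppression; returns the
    (row, col) coordinates of the surviving keypoints.
    """
    is_peak = conf == maximum_filter(conf, size=window)
    return np.argwhere(is_peak & (conf > thresh))
```

On a map with two nearby responses, only the stronger one survives, since the weaker falls inside the stronger peak's suppression window.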
arXiv Detail & Related papers (2023-03-06T04:32:34Z)
- Hyperspherical Loss-Aware Ternary Quantization [12.90416661059601]
The experimental results show that our method significantly improves the accuracy of ternary quantization in both image classification and object detection tasks.
arXiv Detail & Related papers (2022-12-24T04:27:01Z)
- Detecting Rotated Objects as Gaussian Distributions and Its 3-D Generalization [81.29406957201458]
Existing detection methods commonly use a parameterized bounding box (BBox) to model and detect (horizontal) objects.
We argue that such a mechanism has fundamental limitations in building an effective regression loss for rotation detection.
We propose to model the rotated objects as Gaussian distributions.
We extend our approach from 2-D to 3-D with a tailored algorithm design to handle the heading estimation.
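A common way to realize the box-to-Gaussian modeling described above is the sketch below; the half-extent scaling and the function name are assumptions following the usual box-to-Gaussian convention, not necessarily the paper's exact parameterization.

```python
import numpy as np

def rbox_to_gaussian(cx, cy, w, h, theta):
    """Map a rotated box (centre, width, height, angle in radians) to a
    2-D Gaussian N(mu, Sigma): mu is the centre, Sigma rotates a
    diagonal covariance built from the half-extents (w/2, h/2)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.diag([w / 2.0, h / 2.0])
    mu = np.array([cx, cy])
    sigma = R @ S @ S @ R.T
    return mu, sigma
```

Under this mapping, an axis-aligned 4x2 box becomes a Gaussian with covariance diag(4, 1), and rotating the box by 90 degrees simply swaps the covariance's axes, so the loss varies smoothly with the angle.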
arXiv Detail & Related papers (2022-09-22T07:50:48Z)
- Rethinking Spatial Invariance of Convolutional Networks for Object Counting [119.83017534355842]
We try to use locally connected Gaussian kernels to replace the original convolution filter to estimate the spatial position in the density map.
Inspired by previous work, we propose a low-rank approximation accompanied with translation invariance to favorably implement the approximation of massive Gaussian convolution.
Our methods significantly outperform other state-of-the-art methods and achieve promising learning of the spatial position of objects.
arXiv Detail & Related papers (2022-06-10T17:51:25Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- Large-Scale Learning with Fourier Features and Tensor Decompositions [3.6930948691311007]
We exploit the tensor product structure of deterministic Fourier features, which enables us to represent the model parameters as a low-rank tensor decomposition.
We demonstrate by means of numerical experiments how our low-rank tensor approach obtains the same performance as the corresponding nonparametric model.
arXiv Detail & Related papers (2021-09-03T14:12:53Z)
- Learning High-Precision Bounding Box for Rotated Object Detection via Kullback-Leibler Divergence [100.6913091147422]
Existing rotated object detectors are mostly inherited from the horizontal detection paradigm.
In this paper, we are motivated to change the design of rotation regression loss from induction paradigm to deduction methodology.
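A deduction-style rotation regression loss is typically built from a closed-form divergence between the Gaussians of the predicted and ground-truth boxes. The sketch below uses the standard KL formula for multivariate Gaussians as a stand-in, not necessarily the paper's exact loss.

```python
import numpy as np

def gaussian_kl(mu0, sig0, mu1, sig1):
    """KL(N0 || N1) for k-dimensional Gaussians: zero when the two
    distributions coincide, growing smoothly as the predicted box
    drifts from the target. Textbook formula, used here as a loss."""
    inv1 = np.linalg.inv(sig1)
    d = mu1 - mu0
    k = mu0.shape[0]
    return 0.5 * (np.trace(inv1 @ sig0) + d @ inv1 @ d - k
                  + np.log(np.linalg.det(sig1) / np.linalg.det(sig0)))
```

The divergence is zero for identical box Gaussians and strictly positive otherwise, which is what makes it usable as a regression target.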
arXiv Detail & Related papers (2021-06-03T14:29:19Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- Entropic gradient descent algorithms and wide flat minima [6.485776570966397]
We show analytically that there exist Bayes optimal pointwise estimators which correspond to minimizers belonging to wide flat regions.
We extend the analysis to the deep learning scenario by extensive numerical validations.
An easy to compute flatness measure shows a clear correlation with test accuracy.
arXiv Detail & Related papers (2020-06-14T13:22:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.