BEBLID: Boosted efficient binary local image descriptor
- URL: http://arxiv.org/abs/2402.04482v1
- Date: Wed, 7 Feb 2024 00:14:32 GMT
- Title: BEBLID: Boosted efficient binary local image descriptor
- Authors: Iago Suárez, Ghesn Sfeir, José M. Buenaposada, Luis Baumela
- Abstract summary: We introduce BEBLID, an efficient learned binary image descriptor.
It improves our previous real-valued descriptor, BELID, making it both more efficient for matching and more accurate.
In experiments BEBLID achieves an accuracy close to SIFT and better computational efficiency than ORB, the fastest algorithm in the literature.
- Score: 2.8538628855541397
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Efficient matching of local image features is a fundamental task in many
computer vision applications. However, the real-time performance of top
matching algorithms is compromised in computationally limited devices, such as
mobile phones or drones, due to the simplicity of their hardware and their
finite energy supply. In this paper we introduce BEBLID, an efficient learned
binary image descriptor. It improves our previous real-valued descriptor,
BELID, making it both more efficient for matching and more accurate. To this
end we use AdaBoost with an improved weak-learner training scheme that produces
better local descriptions. Further, we binarize our descriptor by forcing all
weak-learners to have the same weight in the strong learner combination and
train it in an unbalanced data set to address the asymmetries arising in
matching and retrieval tasks. In our experiments BEBLID achieves an accuracy
close to SIFT and better computational efficiency than ORB, the fastest
algorithm in the literature.
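The abstract's core mechanism, AdaBoost over intensity-comparison weak learners, binarized by forcing equal weights so the strong descriptor collapses to one bit per learner, can be sketched as follows. This is a minimal NumPy illustration of the idea, not the paper's implementation: the box coordinates and thresholds below are made up for illustration, whereas BEBLID learns them with boosting.

```python
import numpy as np

def box_mean(img, x0, y0, x1, y1):
    """Mean gray value over an inclusive box region of the patch."""
    return img[y0:y1 + 1, x0:x1 + 1].mean()

def binary_descriptor(patch, weak_learners):
    """Each weak learner compares the mean intensity of two boxes against a
    threshold. With all weak learners forced to the same weight in the boosted
    combination, the strong descriptor reduces to the concatenated signs of
    their responses: one bit per learner."""
    bits = []
    for box_a, box_b, thr in weak_learners:
        response = box_mean(patch, *box_a) - box_mean(patch, *box_b)
        bits.append(1 if response > thr else 0)
    return np.array(bits, dtype=np.uint8)

def hamming(d1, d2):
    """Binary descriptors are compared with the Hamming distance, which is
    what makes matching cheap on limited hardware."""
    return int(np.count_nonzero(d1 != d2))

# Illustrative (not learned) weak learners on an 8x8 patch:
# ((x0, y0, x1, y1) for box A, the same for box B, threshold).
learners = [((0, 0, 3, 3), (4, 4, 7, 7), 0.0),
            ((0, 4, 3, 7), (4, 0, 7, 3), 0.0),
            ((0, 0, 7, 3), (0, 4, 7, 7), 0.0)]

rng = np.random.default_rng(0)
patch_a = rng.random((8, 8))
patch_b = patch_a + 0.01 * rng.random((8, 8))  # slightly perturbed view
da = binary_descriptor(patch_a, learners)
db = binary_descriptor(patch_b, learners)
print(hamming(da, db))
```

For practical use, OpenCV's contrib module ships an implementation of this descriptor (`cv2.xfeatures2d.BEBLID_create`).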
Related papers
- LeRF: Learning Resampling Function for Adaptive and Efficient Image Interpolation [64.34935748707673]
Recent deep neural networks (DNNs) have made impressive progress in performance by introducing learned data priors.
We propose a novel method of Learning Resampling (termed LeRF) which takes advantage of both the structural priors learned by DNNs and the locally continuous assumption.
LeRF assigns spatially varying resampling functions to input image pixels and learns to predict the shapes of these resampling functions with a neural network.
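The spatially varying resampling idea can be sketched in one dimension: each output sample is a weighted average of input samples under a resampling function whose shape changes per position. In LeRF those shapes are predicted by a neural network; in this hedged sketch they are simply given as Gaussian widths.

```python
import numpy as np

def resample_1d(signal, scale, sigmas):
    """Upsample a 1D signal by mapping each output position back to input
    coordinates and averaging neighbours with a Gaussian resampling function
    whose width sigma varies per output position (in LeRF these shapes are
    network-predicted; here they are supplied directly)."""
    n_in = len(signal)
    n_out = int(n_in * scale)
    xs = np.arange(n_in, dtype=float)
    out = np.empty(n_out)
    for i in range(n_out):
        center = i / scale                           # output -> input coordinate
        w = np.exp(-0.5 * ((xs - center) / sigmas[i]) ** 2)
        w /= w.sum()                                 # normalise: constants stay constant
        out[i] = (w * signal).sum()
    return out

signal = np.array([0.0, 1.0, 0.0, 1.0])
sigmas = np.full(8, 0.7)  # illustrative; spatially constant in this toy example
print(resample_1d(signal, 2.0, sigmas))
```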
arXiv Detail & Related papers (2024-07-13T16:09:45Z)
- Residual Learning for Image Point Descriptors [56.917951170421894]
We propose a very simple and effective approach to learning local image descriptors by using a hand-crafted detector and descriptor.
We optimize the final descriptor by leveraging the knowledge already present in the handcrafted descriptor.
Our approach has potential applications in ensemble learning and learning with non-differentiable functions.
arXiv Detail & Related papers (2023-12-24T12:51:30Z)
- Energy-based learning algorithms for analog computing: a comparative study [2.0937431058291933]
Energy-based learning algorithms have recently gained a surge of interest due to their compatibility with analog hardware.
We compare seven learning algorithms, including contrastive learning (CL), equilibrium propagation (EP), and coupled learning (CpL).
We find that negative perturbations are better than positive ones, and highlight the centered variant of EP as the best-performing algorithm.
arXiv Detail & Related papers (2023-12-22T22:49:58Z)
- A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show that simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z)
- Rapid Person Re-Identification via Sub-space Consistency Regularization [51.76876061721556]
Person Re-Identification (ReID) matches pedestrians across disjoint cameras.
Existing ReID methods adopting real-value feature descriptors have achieved high accuracy, but they are low in efficiency due to the slow Euclidean distance computation.
We propose a novel Sub-space Consistency Regularization (SCR) algorithm that can speed up the ReID procedure by 0.25 times.
arXiv Detail & Related papers (2022-07-13T02:44:05Z)
- FastHebb: Scaling Hebbian Training of Deep Neural Networks to ImageNet Level [7.410940271545853]
We present FastHebb, an efficient and scalable solution for Hebbian learning.
FastHebb outperforms previous solutions by up to 50 times in terms of training speed.
For the first time, we are able to bring Hebbian algorithms to ImageNet scale.
arXiv Detail & Related papers (2022-07-07T09:04:55Z)
- Federated Learning via Inexact ADMM [46.99210047518554]
In this paper, we develop an inexact alternating direction method of multipliers (ADMM)
It is both computation- and communication-efficient, capable of combating the stragglers' effect, and convergent under mild conditions.
It shows high numerical performance compared with several state-of-the-art federated learning algorithms.
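The general flavour of such splitting methods can be illustrated with standard consensus ADMM for a distributed least-squares problem, where each "client" keeps a local variable and a global consensus variable ties them together. This is a textbook exact-ADMM sketch, not the paper's inexact variant, and the problem and parameters below are made up for illustration.

```python
import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=100):
    """Exact consensus ADMM for min_x sum_i ||A_i x - b_i||^2.
    Each client i keeps a local x_i; a global z enforces consensus,
    and scaled dual variables u_i accumulate the disagreement."""
    d = As[0].shape[1]
    xs = [np.zeros(d) for _ in As]
    us = [np.zeros(d) for _ in As]
    z = np.zeros(d)
    for _ in range(iters):
        for i, (A, b) in enumerate(zip(As, bs)):
            # x_i-update: minimise ||A x - b||^2 + (rho/2)||x - z + u_i||^2
            lhs = 2.0 * A.T @ A + rho * np.eye(d)
            rhs = 2.0 * A.T @ b + rho * (z - us[i])
            xs[i] = np.linalg.solve(lhs, rhs)
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)  # z-update (averaging)
        for i in range(len(As)):
            us[i] += xs[i] - z                                # dual ascent step
    return z

rng = np.random.default_rng(0)
x_true = rng.standard_normal(3)
As = [rng.standard_normal((10, 3)) for _ in range(4)]
bs = [A @ x_true for A in As]           # noiseless, so x_true is the minimiser
print(np.round(consensus_admm(As, bs), 3))
```

An inexact variant would replace the exact `np.linalg.solve` in the x-update with a few cheap approximate steps, which is what makes the approach attractive when clients are computationally limited.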
arXiv Detail & Related papers (2022-04-22T09:55:33Z)
- Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no computational increment.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
arXiv Detail & Related papers (2022-03-23T06:24:31Z)
- AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization [159.75564904944707]
We propose an asynchronous quasi-Newton (AsySQN) framework for vertical federated learning (VFL)
The proposed algorithms take descent steps scaled by approximate Hessian information without calculating the inverse Hessian matrix explicitly.
We show that the adopted asynchronous computation can make better use of the computation resource.
arXiv Detail & Related papers (2021-09-26T07:56:10Z)
- Revisiting Binary Local Image Description for Resource Limited Devices [2.470815298095903]
We present new binary image descriptors that emerge from the application of triplet ranking loss, hard negative mining and anchor swapping.
BAD and HashSIFT establish new operating points in the state-of-the-art's accuracy vs. resources trade-off curve.
arXiv Detail & Related papers (2021-08-18T20:42:43Z)
- Boosted Locality Sensitive Hashing: Discriminative Binary Codes for Source Separation [19.72987718461291]
We propose an adaptive boosting approach to learning locality sensitive hash codes, which represent audio spectra efficiently.
We use the learned hash codes for single-channel speech denoising tasks as an alternative to a complex machine learning model.
arXiv Detail & Related papers (2020-02-14T20:10:00Z)
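The hashing-and-matching pipeline underlying the last entry can be sketched with the classic unlearned baseline that such boosted methods improve on: random-hyperplane locality-sensitive hashing, where each bit is the sign of a random projection and nearest neighbours are found by Hamming distance. The data below stand in for audio spectra and are synthetic; the boosting-learned projections of the paper are not reproduced here.

```python
import numpy as np

def random_hyperplane_hash(X, n_bits, seed=0):
    """Classic (unlearned) LSH: each bit is the sign of a random projection.
    The paper above learns discriminative projections with boosting instead;
    this shows only the baseline hash-then-match pipeline."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)

def hamming_nn(query_code, codes):
    """Index of the stored code nearest to the query in Hamming distance."""
    return int(np.argmin(np.count_nonzero(codes != query_code, axis=1)))

rng = np.random.default_rng(1)
spectra = rng.random((100, 64))                  # stand-in for spectral frames
codes = random_hyperplane_hash(spectra, 32)
query = spectra[7] + 0.01 * rng.standard_normal(64)   # noisy copy of frame 7
q_code = random_hyperplane_hash(query[None, :], 32)[0]
print(hamming_nn(q_code, codes))
```

Because nearby vectors rarely flip a projection's sign, the noisy query hashes to almost the same code as its clean frame, so a cheap Hamming scan recovers it without any floating-point distance computation.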
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.