A Survey on Deep Hashing Methods
- URL: http://arxiv.org/abs/2003.03369v5
- Date: Sat, 23 Apr 2022 11:43:07 GMT
- Title: A Survey on Deep Hashing Methods
- Authors: Xiao Luo, Haixin Wang, Daqing Wu, Chong Chen, Minghua Deng, Jianqiang
Huang, Xian-Sheng Hua
- Abstract summary: Nearest neighbor search aims to obtain the database samples with the smallest distances to the queries.
With the development of deep learning, deep hashing methods show more advantages than traditional methods.
Deep supervised hashing is categorized into pairwise methods, ranking-based methods, pointwise methods and quantization.
Deep unsupervised hashing is categorized into similarity reconstruction-based methods, pseudo-label-based methods and prediction-free self-supervised learning-based methods.
- Score: 52.326472103233854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nearest neighbor search aims to obtain the samples in the database with the
smallest distances to the queries, which is a basic task in a range
of fields, including computer vision and data mining. Hashing is one of the
most widely used methods for its computational and storage efficiency. With the
development of deep learning, deep hashing methods show more advantages than
traditional methods. In this survey, we investigate in detail current deep
hashing algorithms including deep supervised hashing and deep unsupervised
hashing. Specifically, we categorize deep supervised hashing methods into
pairwise methods, ranking-based methods, pointwise methods, and quantization,
according to how the similarities of the learned hash codes are measured.
Moreover, deep unsupervised hashing is categorized into similarity
reconstruction-based methods, pseudo-label-based methods and prediction-free
self-supervised learning-based methods based on their semantic learning
manners. We also introduce three related important topics including
semi-supervised deep hashing, domain adaptation deep hashing, and multi-modal deep
hashing. Meanwhile, we present some commonly used public datasets and the
evaluation scheme used to measure the performance of deep hashing algorithms. Finally, we
discuss some potential research directions in conclusion.
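Concretely, once a hash network is trained, each sample is mapped to a short binary code and retrieval reduces to Hamming-distance comparisons over packed bit strings, which is where the computational and storage efficiency comes from. A minimal sketch of that retrieval step (NumPy; random codes stand in for network outputs, and all names here are ours):

```python
import numpy as np

def pack_codes(bits: np.ndarray) -> np.ndarray:
    """Pack {0,1} hash bits of shape (n, L) into uint8 words for compact storage."""
    return np.packbits(bits.astype(np.uint8), axis=1)

def hamming_search(query: np.ndarray, database: np.ndarray, topk: int = 10):
    """Rank database items by Hamming distance to the query (smaller = more similar)."""
    xor = np.bitwise_xor(database, query[None, :])   # differing bits, still packed
    dist = np.unpackbits(xor, axis=1).sum(axis=1)    # popcount per database item
    order = np.argsort(dist, kind="stable")[:topk]
    return order, dist[order]

# Toy run with 48-bit codes; in practice the bits come from a trained hash network.
rng = np.random.default_rng(0)
db = pack_codes(rng.integers(0, 2, size=(1000, 48)))
q = pack_codes(rng.integers(0, 2, size=(1, 48)))[0]
print(hamming_search(q, db, topk=5))
```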
Related papers
- Deep Asymmetric Hashing with Dual Semantic Regression and Class
Structure Quantization [9.539842235137376]
We propose a dual semantic asymmetric hashing (DSAH) method, which generates discriminative hash codes under three-fold constraints.
With these three main components, high-quality hash codes can be generated through the network.
arXiv Detail & Related papers (2021-10-24T16:14:36Z) - Deep Self-Adaptive Hashing for Image Retrieval [16.768754022585057]
We propose a Deep Self-Adaptive Hashing (DSAH) model to adaptively capture the semantic information with two special designs.
First, we construct a neighborhood-based similarity matrix, and then refine this initial similarity matrix with a novel update strategy.
Second, we measure the priorities of data pairs with PIC and assign adaptive weights to them, which relies on the assumption that more dissimilar data pairs contain more discriminative information for hash learning.
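The abstract does not spell out how the neighborhood-based similarity matrix is built; one plausible construction (the cosine/k-NN choice and all names are our assumptions, not necessarily DSAH's) keeps cosine similarities only between each point and its k nearest neighbours:

```python
import numpy as np

def neighborhood_similarity(features: np.ndarray, k: int = 10) -> np.ndarray:
    """Initial similarity matrix from deep features: cosine similarity,
    kept only where one point is among the other's k nearest neighbours."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                                   # cosine similarity in [-1, 1]
    knn = np.argsort(-sim, axis=1)[:, 1:k + 1]      # k nearest neighbours, excluding self
    mask = np.zeros_like(sim, dtype=bool)
    mask[np.arange(len(sim))[:, None], knn] = True
    return np.where(mask | mask.T, sim, 0.0)        # symmetric, sparse similarity
```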
arXiv Detail & Related papers (2021-08-16T13:53:20Z) - Unsupervised Multi-Index Semantic Hashing [23.169142004594434]
We propose Multi-Index Semantic Hashing (MISH), an unsupervised hashing model that learns hash codes that are both effective and highly efficient by being optimized for multi-index hashing.
We experimentally compare MISH to state-of-the-art semantic hashing baselines in the task of document similarity search.
We find that even though multi-index hashing also improves the efficiency of the baselines compared to a linear scan, they are still upwards of 33% slower than MISH.
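Multi-index hashing itself is the classic substring trick: split each L-bit code into m disjoint substrings and index each in its own table, so that by the pigeonhole principle any code within Hamming radius r < m of the query matches it exactly on at least one substring. A rough sketch of that lookup (function names are ours, not from the paper):

```python
import numpy as np
from collections import defaultdict

def build_multi_index(codes: np.ndarray, m: int = 4):
    """Index n L-bit codes (shape (n, L), values in {0, 1}) in m hash tables,
    one table per disjoint substring of the code."""
    chunks = np.array_split(codes.astype(np.uint8), m, axis=1)
    tables = [defaultdict(list) for _ in range(m)]
    for table, chunk in zip(tables, chunks):
        for idx, sub in enumerate(chunk):
            table[sub.tobytes()].append(idx)
    return tables

def candidate_ids(query: np.ndarray, tables, m: int = 4):
    """Candidates whose code agrees with the query on at least one substring;
    only these need an exact Hamming-distance check."""
    parts = np.array_split(query.astype(np.uint8), m)
    found = set()
    for table, part in zip(tables, parts):
        found.update(table.get(part.tobytes(), []))
    return found
```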
arXiv Detail & Related papers (2021-03-26T13:33:48Z) - Improved Deep Classwise Hashing With Centers Similarity Learning for
Image Retrieval [19.052163348920512]
We propose an improved deep classwise hashing method, which performs hash learning and class-center learning simultaneously.
The proposed method effectively surpasses the original method and outperforms state-of-the-art baselines.
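The summary does not give the exact objective; as a hedged illustration of learning hash codes and class centers jointly, a common pattern pulls each relaxed code towards its class center while pushing distinct centers apart (all names and the hinge margin are our assumptions):

```python
import numpy as np

def classwise_center_loss(codes: np.ndarray, labels: np.ndarray,
                          centers: np.ndarray, margin: float = 8.0) -> float:
    """codes: (n, L) relaxed hash outputs; labels: (n,) class ids;
    centers: (c, L) learnable class centers. Pull each code to its center
    and keep distinct centers at least `margin` apart."""
    pull = np.mean(np.sum((codes - centers[labels]) ** 2, axis=1))
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.sum(diff ** 2, axis=2)
    off_diag = dist[~np.eye(len(centers), dtype=bool)]
    push = np.mean(np.maximum(0.0, margin - off_diag))
    return float(pull + push)
```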
arXiv Detail & Related papers (2021-03-17T05:01:13Z) - CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
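As a hedged sketch of the consistency idea (not CIMON's exact loss), one can require the relaxed codes of two augmented views of the same image to agree, which is what makes the final binary codes robust to disturbances:

```python
import numpy as np

def code_consistency_loss(h_view1: np.ndarray, h_view2: np.ndarray) -> float:
    """h_view1, h_view2: (n, L) network outputs for two augmentations of the
    same images; penalize disagreement between their relaxed (tanh) codes."""
    return float(np.mean((np.tanh(h_view1) - np.tanh(h_view2)) ** 2))
```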
arXiv Detail & Related papers (2020-10-15T14:47:14Z) - Deep Momentum Uncertainty Hashing [65.27971340060687]
We propose a novel Deep Momentum Uncertainty Hashing (DMUH) method.
It explicitly estimates the uncertainty during training and leverages the uncertainty information to guide the approximation process.
Our method achieves the best performance on all of the datasets and surpasses existing state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-09-17T01:57:45Z) - Deep Reinforcement Learning with Label Embedding Reward for Supervised
Image Hashing [85.84690941656528]
We introduce a novel decision-making approach for deep supervised hashing.
We learn a deep Q-network with a novel label embedding reward defined by Bose-Chaudhuri-Hocquenghem codes.
Our approach outperforms state-of-the-art supervised hashing methods under various code lengths.
arXiv Detail & Related papers (2020-08-10T09:17:20Z) - Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods perform poorly in retrieval when an extremely short hash code is used.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In this proposed RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
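The mutual reconstruction idea can be illustrated with a simple linear sketch (the matrices W and V are hypothetical parameters, not RSLH's exact formulation): labels should be recoverable from the hash representation and vice versa.

```python
import numpy as np

def mutual_reconstruction_loss(B: np.ndarray, Y: np.ndarray,
                               W: np.ndarray, V: np.ndarray) -> float:
    """B: (n, L) hash representations; Y: (n, c) label vectors.
    W: (c, L) maps labels to codes; V: (L, c) maps codes back to labels.
    Penalizing both residuals keeps semantic information in short codes."""
    labels_to_codes = np.linalg.norm(B - Y @ W) ** 2
    codes_to_labels = np.linalg.norm(Y - B @ V) ** 2
    return float(labels_to_codes + codes_to_labels)
```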
arXiv Detail & Related papers (2020-04-24T02:23:52Z) - Targeted Attack for Deep Hashing based Retrieval [57.582221494035856]
We propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.
We first formulate the targeted attack as a point-to-set optimization, which minimizes the average distance between the hash code of an adversarial example and those of a set of objects with the target label.
To balance the performance and perceptibility, we propose to minimize the Hamming distance between the hash code of the adversarial example and the anchor code under the $\ell_\infty$ restriction on the perturbation.
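For {-1, +1} codes, the minimizer of the average Hamming distance to a set of codes is their component-wise majority vote, so the point-to-set objective collapses to chasing a single anchor code (a sketch; variable names are ours):

```python
import numpy as np

def anchor_code(target_codes: np.ndarray) -> np.ndarray:
    """target_codes: (m, L) hash codes, in {-1, +1}, of database objects with
    the target label. The component-wise majority vote minimizes the average
    Hamming distance to this set and serves as the attack target."""
    votes = np.sign(target_codes.sum(axis=0))
    votes[votes == 0] = 1          # break ties towards +1
    return votes.astype(target_codes.dtype)
```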
arXiv Detail & Related papers (2020-04-15T08:36:58Z)