Deep Self-Adaptive Hashing for Image Retrieval
- URL: http://arxiv.org/abs/2108.07094v1
- Date: Mon, 16 Aug 2021 13:53:20 GMT
- Title: Deep Self-Adaptive Hashing for Image Retrieval
- Authors: Qinghong Lin, Xiaojun Chen, Qin Zhang, Shangxuan Tian, Yudong Chen
- Abstract summary: We propose a Deep Self-Adaptive Hashing (DSAH) model to adaptively capture the semantic information with two special designs.
First, we construct a neighborhood-based similarity matrix, and then refine this initial similarity matrix with a novel update strategy.
Second, we measure the priorities of data pairs with PIC and assign adaptive weights to them, which relies on the assumption that more dissimilar data pairs contain more discriminative information for hash learning.
- Score: 16.768754022585057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hashing technology has been widely used in image retrieval due to its
computational and storage efficiency. Recently, deep unsupervised hashing
methods have attracted increasing attention due to the high cost of human
annotations in the real world and the superiority of deep learning technology.
However, most deep unsupervised hashing methods usually pre-compute a
similarity matrix to model the pairwise relationship in the pre-trained feature
space. Then this similarity matrix would be used to guide hash learning, in
which most of the data pairs are treated equivalently. The above process is
confronted with the following defects: 1) The pre-computed similarity matrix is
inalterable and disconnected from the hash learning process, which cannot
explore the underlying semantic information. 2) The informative data pairs may
be buried by the large number of less-informative data pairs. To solve the
aforementioned problems, we propose a \textbf{Deep Self-Adaptive
Hashing~(DSAH)} model to adaptively capture the semantic information with two
special designs: \textbf{Adaptive Neighbor Discovery~(AND)} and
\textbf{Pairwise Information Content~(PIC)}. Firstly, we adopt the AND to
initially construct a neighborhood-based similarity matrix, and then refine
this initial similarity matrix with a novel update strategy to further
investigate the semantic structure behind the learned representation. Secondly,
we measure the priorities of data pairs with PIC and assign adaptive weights to
them, which relies on the assumption that more dissimilar data pairs contain
more discriminative information for hash learning. Extensive experiments on
several benchmark datasets demonstrate that the above two technologies
facilitate the deep hashing model to achieve superior performance in a
self-adaptive manner.
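For intuition, the sketch below illustrates the kind of neighborhood-based similarity matrix that AND starts from, plus a simple blending-style refinement as training proceeds. The neighborhood size `k`, the mixing weight `alpha`, and the update rule are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of a neighborhood-based similarity matrix in the spirit of
# Adaptive Neighbor Discovery (AND). Hyper-parameters k and alpha are
# illustrative assumptions, not values from the paper.
import numpy as np

def build_similarity(features: np.ndarray, k: int = 10) -> np.ndarray:
    """Mark a pair as similar (+1) if either point lies in the other's k-NN set,
    dissimilar (-1) otherwise, using cosine similarity of pre-trained features."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    cos = feats @ feats.T                        # pairwise cosine similarity
    knn = np.argsort(-cos, axis=1)[:, 1:k + 1]   # indices of the k nearest neighbors (excluding self)
    S = -np.ones_like(cos)
    rows = np.repeat(np.arange(len(feats)), k)
    S[rows, knn.ravel()] = 1.0
    S = np.maximum(S, S.T)                       # symmetrize: neighbor in either direction counts
    np.fill_diagonal(S, 1.0)
    return S

def refine_similarity(S: np.ndarray, learned_feats: np.ndarray,
                      alpha: float = 0.9, k: int = 10) -> np.ndarray:
    """Blend the current matrix with one rebuilt from the representations learned
    so far, so the pairwise supervision adapts instead of staying fixed."""
    return alpha * S + (1.0 - alpha) * build_similarity(learned_feats, k)
```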
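Similarly, a minimal sketch of PIC-style pair weighting follows, assuming relaxed hash codes in [-1, 1]. Mapping the rescaled code similarity to a self-information weight, so that more dissimilar pairs receive larger weights, is an illustrative choice rather than the paper's exact definition of pairwise information content.

```python
# Hedged sketch of Pairwise Information Content (PIC)-style weighting:
# dissimilar pairs are assumed to carry more discriminative information,
# so they get larger weights in the pairwise hashing loss.
import torch

def pic_weights(codes: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """codes: (n, bits) relaxed hash codes in [-1, 1].
    Returns an (n, n) weight matrix, larger for more dissimilar pairs."""
    bits = codes.shape[1]
    sim = codes @ codes.t() / bits               # cosine-like code similarity in [-1, 1]
    p = (sim + 1.0) / 2.0                        # rescale to a pseudo-probability in [0, 1]
    w = -torch.log(p.clamp(min=eps))             # self-information: dissimilar pairs weigh more
    return w / w.mean()                          # normalize so the average weight is 1

def weighted_pairwise_loss(codes: torch.Tensor, S: torch.Tensor) -> torch.Tensor:
    """Weighted squared error between code similarity and the target matrix S."""
    bits = codes.shape[1]
    sim = codes @ codes.t() / bits
    w = pic_weights(codes).detach()              # weights act as fixed per-pair priorities
    return (w * (sim - S) ** 2).mean()
```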
Related papers
- RREH: Reconstruction Relations Embedded Hashing for Semi-Paired Cross-Modal Retrieval [32.06421737874828]
Reconstruction Relations Embedded Hashing (RREH) is designed for semi-paired cross-modal retrieval tasks.
RREH assumes that multi-modal data share a common subspace.
Anchors are sampled from paired data, which improves the efficiency of hash learning.
arXiv Detail & Related papers (2024-05-28T03:12:54Z) - Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods with much fewer computational resources.
arXiv Detail & Related papers (2023-07-19T04:07:33Z) - Asymmetric Scalable Cross-modal Hashing [51.309905690367835]
Cross-modal hashing is a successful method for large-scale multimedia retrieval.
We propose a novel Asymmetric Scalable Cross-Modal Hashing (ASCMH) to address these issues.
Our ASCMH outperforms the state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
arXiv Detail & Related papers (2022-07-26T04:38:47Z) - Deep Asymmetric Hashing with Dual Semantic Regression and Class Structure Quantization [9.539842235137376]
We propose a dual semantic asymmetric hashing (DSAH) method, which generates discriminative hash codes under three-fold constraints.
With these three main components, high-quality hash codes can be generated through the network.
arXiv Detail & Related papers (2021-10-24T16:14:36Z) - Unsupervised Hashing with Contrastive Information Bottleneck [39.607741586731336]
We propose to adapt a framework to learn binary hashing codes.
Specifically, we first propose to modify the objective function to meet the specific requirement of hashing.
We then introduce a probabilistic binary representation layer into the model to facilitate end-to-end training.
arXiv Detail & Related papers (2021-05-13T08:30:16Z) - CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z) - Unsupervised Deep Cross-modality Spectral Hashing [65.3842441716661]
The framework is a two-step hashing approach which decouples the optimization into binary optimization and hashing function learning.
We propose a novel spectral embedding-based algorithm to simultaneously learn single-modality and binary cross-modality representations.
We leverage the powerful CNN for images and propose a CNN-based deep architecture to learn the text modality.
arXiv Detail & Related papers (2020-08-01T09:20:11Z) - Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art performance in terms of precision for document retrieval.
We propose a pairwise loss function with discrete latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to the state of the art.
arXiv Detail & Related papers (2020-05-21T06:11:33Z) - Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph.
It is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z) - A Novel Incremental Cross-Modal Hashing Approach [21.99741793652628]
We propose a novel incremental cross-modal hashing algorithm termed "iCMH".
The proposed approach consists of two sequential stages, namely, learning the hash codes and training the hash functions.
Experiments across a variety of cross-modal datasets and comparisons with state-of-the-art cross-modal algorithms show the usefulness of our approach.
arXiv Detail & Related papers (2020-02-03T12:34:56Z)