FDDH: Fast Discriminative Discrete Hashing for Large-Scale Cross-Modal
Retrieval
- URL: http://arxiv.org/abs/2105.07128v1
- Date: Sat, 15 May 2021 03:53:48 GMT
- Title: FDDH: Fast Discriminative Discrete Hashing for Large-Scale Cross-Modal
Retrieval
- Authors: Xin Liu, Xingzhi Wang and Yiu-ming Cheung
- Abstract summary: Cross-modal hashing is favored for its effectiveness and efficiency.
Most existing methods do not sufficiently exploit the discriminative power of semantic information when learning the hash codes.
We propose a Fast Discriminative Discrete Hashing (FDDH) approach for large-scale cross-modal retrieval.
- Score: 41.125141897096874
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Cross-modal hashing, favored for its effectiveness and efficiency, has
received wide attention for facilitating efficient retrieval across different
modalities. Nevertheless, most existing methods do not sufficiently exploit the
discriminative power of semantic information when learning the hash codes, and
often involve a time-consuming training procedure when handling large-scale
datasets. To tackle these issues, we formulate the learning of
similarity-preserving hash codes in terms of orthogonally rotating the semantic
data so as to minimize the quantization loss of mapping such data to Hamming
space, and propose an efficient Fast Discriminative Discrete Hashing (FDDH)
approach for large-scale cross-modal retrieval. More specifically, FDDH
introduces an orthogonal basis to regress the targeted hash codes of training
examples to their corresponding semantic labels, and utilizes the ε-dragging
technique to provide provably large semantic margins. Accordingly, the
discriminative power of semantic information can be explicitly captured and
maximized. Moreover, an orthogonal transformation scheme is further proposed to
map the nonlinear embedding data into the semantic subspace, which can well
guarantee the semantic consistency between the data feature and its semantic
representation. Consequently, a closed-form solution is derived for
discriminative hash code learning, which is highly computationally efficient. In
addition, an effective and stable online learning strategy is presented for
optimizing modality-specific projection functions, featuring adaptivity to
different training sizes and streaming data. The proposed FDDH approach
theoretically approximates the bi-Lipschitz continuity, runs sufficiently fast,
and also significantly improves the retrieval performance over the
state-of-the-art methods. The source code is released at:
https://github.com/starxliu/FDDH.
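
To make the recipe above concrete, the following Python sketch illustrates the three ingredients the abstract describes: ε-dragging of the label matrix to enlarge semantic margins, an ITQ-style alternation between binary codes and an orthogonal rotation that minimizes the quantization loss, and a closed-form ridge-regression projection per modality that can be updated from streaming chunks. It is a minimal illustration under assumed shapes and names, not the authors' released implementation (see the repository above for that); in particular, the random "semantic basis", the toy dimensions, and the OnlineProjection class are hypothetical stand-ins.

```python
import numpy as np

def epsilon_dragging_targets(Y, M):
    """Relax a 0/1 label matrix Y (n x c) with a non-negative slack M so that
    true-class scores are pushed above 1 and wrong-class scores below 0,
    enlarging the semantic margins (the epsilon-dragging idea)."""
    E = np.where(Y > 0, 1.0, -1.0)            # dragging directions
    return Y + E * np.maximum(M, 0.0)         # margin-enlarged regression targets

def learn_codes_by_rotation(V, n_iter=50, seed=0):
    """ITQ-style alternation: binary codes B in {-1,+1}^(n x k) and an orthogonal
    rotation R chosen to minimize the quantization loss ||B - V R||_F."""
    rng = np.random.default_rng(seed)
    k = V.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((k, k)))   # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)                             # update codes with R fixed
        B[B == 0] = 1.0
        U, _, Wt = np.linalg.svd(V.T @ B)              # orthogonal Procrustes step
        R = U @ Wt                                     # update rotation with B fixed
    return np.sign(V @ R), R

class OnlineProjection:
    """Ridge-regression projection from one modality's features X to the codes B,
    maintained from streaming chunks via accumulated sufficient statistics, so the
    solve below stays closed form regardless of how much data has arrived."""
    def __init__(self, dim, bits, lam=1e-2):
        self.C_xx = lam * np.eye(dim)          # running X^T X + lam * I
        self.C_xb = np.zeros((dim, bits))      # running X^T B
    def partial_fit(self, X, B):
        self.C_xx += X.T @ X
        self.C_xb += X.T @ B
        return self
    @property
    def W(self):
        return np.linalg.solve(self.C_xx, self.C_xb)   # closed-form projection

# Toy run: 200 samples, 64-d features for one modality, 10 classes, 16-bit codes.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))
Y = np.eye(10)[rng.integers(0, 10, 200)]                    # one-hot labels
T = epsilon_dragging_targets(Y, 0.1 * rng.random(Y.shape))  # enlarged margins
V = T @ rng.standard_normal((10, 16))   # stand-in for the learned semantic basis
B, R = learn_codes_by_rotation(V)
proj = OnlineProjection(dim=64, bits=16).partial_fit(X, B)
query_codes = np.sign(X @ proj.W)
```

In this sketch, each modality would get its own OnlineProjection; a query is projected with its modality's W, binarized, and then ranked against the database codes by Hamming distance.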
Related papers
- Asymmetric Scalable Cross-modal Hashing [51.309905690367835]
Cross-modal hashing is a successful method for solving the large-scale multimedia retrieval problem.
We propose a novel Asymmetric Scalable Cross-Modal Hashing (ASCMH) to address these issues.
Our ASCMH outperforms the state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
arXiv Detail & Related papers (2022-07-26T04:38:47Z) - Deep Asymmetric Hashing with Dual Semantic Regression and Class
Structure Quantization [9.539842235137376]
We propose a dual semantic asymmetric hashing (DSAH) method, which generates discriminative hash codes under three-fold constraints.
With these three main components, high-quality hash codes can be generated through the network.
arXiv Detail & Related papers (2021-10-24T16:14:36Z) - Self-supervised asymmetric deep hashing with margin-scalable constraint
for image retrieval [3.611160663701664]
We propose a novel self-supervised asymmetric deep hashing method with a margin-scalable constraint (SADH) for image retrieval.
SADH implements a self-supervised network that preserves the semantics of the given dataset in a semantic feature map and a semantic code map.
For the feature learning part, a new margin-scalable constraint is employed both for highly accurate construction of pairwise correlations in the Hamming space and for a more discriminative hash code representation.
arXiv Detail & Related papers (2020-12-07T16:09:37Z) - CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z) - Making Online Sketching Hashing Even Faster [63.16042585506435]
We present a FasteR Online Sketching Hashing (FROSH) algorithm to sketch the data in a more compact form via an independent transformation.
We provide theoretical justification to guarantee that our proposed FROSH consumes less time and achieves a comparable sketching precision.
We also extend FROSH to its distributed implementation, namely DFROSH, to further reduce the training time cost of FROSH.
arXiv Detail & Related papers (2020-10-10T08:50:53Z) - Deep Hashing with Hash-Consistent Large Margin Proxy Embeddings [65.36757931982469]
Image hash codes are produced by binarizing the embeddings of convolutional neural networks (CNNs) trained for either classification or retrieval.
The use of a fixed set of proxies (weights of the CNN classification layer) is proposed to eliminate this ambiguity.
The resulting hash-consistent large margin (HCLM) proxies are shown to encourage saturation of hashing units, thus guaranteeing a small binarization error.
arXiv Detail & Related papers (2020-07-27T23:47:43Z) - Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods perform poorly in retrieval with extremely short hash codes.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In the proposed RSLH, mutual reconstruction between the hash representation and the semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z) - Task-adaptive Asymmetric Deep Cross-modal Hashing [20.399984971442]
Cross-modal hashing aims to embed semantic correlations of heterogeneous modality data into the binary hash codes with discriminative semantic labels.
We present a Task-adaptive Asymmetric Deep Cross-modal Hashing (TA-ADCMH) method in this paper.
It can learn task-adaptive hash functions for two sub-retrieval tasks via simultaneous modality representation and asymmetric hash learning.
arXiv Detail & Related papers (2020-04-01T02:09:20Z)