Deep Asymmetric Hashing with Dual Semantic Regression and Class
Structure Quantization
- URL: http://arxiv.org/abs/2110.12478v1
- Date: Sun, 24 Oct 2021 16:14:36 GMT
- Title: Deep Asymmetric Hashing with Dual Semantic Regression and Class
Structure Quantization
- Authors: Jianglin Lu, Hailing Wang, Jie Zhou, Mengfan Yan, Jiajun Wen
- Abstract summary: We propose a dual semantic asymmetric hashing (DSAH) method, which generates discriminative hash codes under three-fold constraints.
With these three main components, high-quality hash codes can be generated through the network.
- Score: 9.539842235137376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep hashing methods have been widely used in image retrieval task.
Most existing deep hashing approaches adopt one-to-one quantization to reduce
information loss. However, such class-unrelated quantization cannot give
discriminative feedback for network training. In addition, these methods
utilize only a single label to integrate the supervision information of the
data for hash function learning, which may result in inferior network
generalization and relatively low-quality hash codes, since the inter-class
information of the data is entirely ignored. In this paper, we propose a dual
semantic asymmetric hashing (DSAH) method, which generates discriminative hash
codes under three-fold constraints. Firstly, DSAH utilizes class priors to
conduct class structure quantization so as to transmit class information during
the quantization process. Secondly, a simple yet effective label mechanism is
designed to characterize both the intra-class compactness and inter-class
separability of data, thereby achieving semantic-sensitive binary code
learning. Finally, a meaningful pairwise similarity preserving loss is devised
to minimize the distances between class-related network outputs based on an
affinity graph. With these three main components, high-quality hash codes can
be generated through the network. Extensive experiments conducted on various
datasets demonstrate the superiority of DSAH in comparison with
state-of-the-art deep hashing methods.
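To make the three components concrete, here is a minimal PyTorch sketch of a combined training loss. This is our own illustration, not the paper's code: the use of fixed per-class binary codes as the class prior, the loss weights `lam` and `mu`, and all tensor shapes are assumptions.
```python
import torch
import torch.nn.functional as F

def dsah_style_loss(outputs, labels, class_codes, affinity, lam=1.0, mu=1.0):
    """Hedged sketch of a three-part loss in the spirit of DSAH.

    outputs:     (n, k) real-valued network outputs (relaxed hash codes)
    labels:      (n, c) one-hot class labels (float)
    class_codes: (c, k) fixed binary code per class in {-1, +1} (class prior)
    affinity:    (n, n) affinity graph, 1.0 for related pairs, else 0.0
    """
    # 1) Class structure quantization: quantize each output toward the
    #    binary code of its class rather than element-wise rounding.
    targets = labels @ class_codes                      # (n, k)
    quant_loss = F.mse_loss(outputs, targets)

    # 2) Dual semantic term: intra-class compactness (pull toward the own
    #    class code) and inter-class separability (push from the others).
    logits = outputs @ class_codes.t()                  # (n, c)
    semantic_loss = F.cross_entropy(logits, labels.argmax(dim=1))

    # 3) Pairwise similarity preservation: minimize distances between
    #    class-related outputs on the affinity graph.
    dists = torch.cdist(outputs, outputs) ** 2          # (n, n)
    pair_loss = (affinity * dists).sum() / affinity.sum().clamp(min=1.0)

    return quant_loss + lam * semantic_loss + mu * pair_loss
```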
Related papers
- Prototypical Hash Encoding for On-the-Fly Fine-Grained Category Discovery [65.16724941038052]
Category-aware Prototype Generation (CPG) and Discriminative Category Encoding (DCE) are proposed.
CPG enables the model to fully capture the intra-category diversity by representing each category with multiple prototypes.
DCE boosts the discrimination ability of hash code with the guidance of the generated category prototypes.
arXiv Detail & Related papers (2024-10-24T23:51:40Z)
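As a rough illustration of the multi-prototype idea above, the sketch below keeps K prototypes per category (CPG) and trains codes with a multi-positive contrastive objective standing in for DCE; the shapes, the temperature `tau`, and the loss form are our assumptions, not the paper's.
```python
import torch
import torch.nn.functional as F

def prototype_hash_loss(codes, labels, prototypes, tau=0.1):
    # codes:      (n, k) relaxed hash codes
    # labels:     (n,)   category indices (int64)
    # prototypes: (c, K, k) K prototypes per category (intra-category diversity)
    c, K, k = prototypes.shape
    flat = F.normalize(prototypes.reshape(c * K, k), dim=1)
    sims = F.normalize(codes, dim=1) @ flat.t() / tau    # (n, c*K)
    # A sample may match any prototype of its own category, so the positive
    # mass is a log-sum-exp over that category's K prototypes.
    per_cat = sims.reshape(-1, c, K)
    pos = torch.logsumexp(per_cat[torch.arange(len(codes)), labels], dim=1)
    return (torch.logsumexp(sims, dim=1) - pos).mean()
```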
- Deep Self-Adaptive Hashing for Image Retrieval [16.768754022585057]
We propose a Deep Self-Adaptive Hashing (DSAH) model to adaptively capture the semantic information with two special designs.
First, we construct a neighborhood-based similarity matrix, and then refine this initial similarity matrix with a novel update strategy.
Secondly, we measure the priorities of data pairs with PIC and assign adaptive weights to them, which relies on the assumption that more dissimilar data pairs contain more discriminative information for hash learning.
arXiv Detail & Related papers (2021-08-16T13:53:20Z)
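A minimal sketch of the first design in the Deep Self-Adaptive Hashing entry above: a neighborhood-based similarity matrix plus one refinement pass. The use of cosine similarity, the value of `k`, and the shared-neighbor rule are our assumptions; the paper's update strategy and its PIC-based weighting are not reproduced here.
```python
import torch
import torch.nn.functional as F

def build_similarity(features, k=10, min_shared=2):
    # features: (n, d) deep features of the images
    feats = F.normalize(features, dim=1)
    cos = feats @ feats.t()                              # (n, n) cosine
    topk = cos.topk(k + 1, dim=1).indices[:, 1:]         # drop self-match
    S = torch.zeros_like(cos)
    S.scatter_(1, topk, 1.0)
    S = ((S + S.t()) > 0).float()                        # symmetrize
    # Refinement pass: keep a positive pair only when the two points also
    # share enough neighbors (second-order evidence).
    shared = S @ S
    return S * (shared >= min_shared).float()
```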
- Self-supervised asymmetric deep hashing with margin-scalable constraint for image retrieval [3.611160663701664]
We propose a novel self-supervised asymmetric deep hashing (SADH) method with a margin-scalable constraint for image retrieval.
SADH implements a self-supervised network to preserve the semantics of the given dataset in a semantic feature map and a semantic code map.
For the feature learning part, a new margin-scalable constraint is employed both for highly accurate construction of pairwise correlations in the Hamming space and for a more discriminative hash code representation.
arXiv Detail & Related papers (2020-12-07T16:09:37Z)
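One way the margin-scalable constraint above could look is the pairwise loss below, where the margin grows for semantically more dissimilar pairs; the soft Hamming distance and the exact scaling rule are our assumptions.
```python
import torch

def margin_scalable_loss(codes_a, codes_b, sem_sim, base=1.0):
    # codes_*: (n, k) relaxed codes in [-1, 1]
    # sem_sim: (n,) semantic similarity in [0, 1] from the semantic maps
    k = codes_a.size(1)
    ham = 0.5 * (k - (codes_a * codes_b).sum(dim=1))     # soft Hamming dist
    margin = base * (1.0 - sem_sim) * k                  # scalable margin
    pull = sem_sim * ham                                 # similar: contract
    push = (1.0 - sem_sim) * torch.clamp(margin - ham, min=0.0)
    return (pull + push).mean()
```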
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and the statistical distribution of similarities to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
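The consistency-learning half of CIMON can be sketched as follows: two augmented views of each image should produce agreeing codes, while a refined pseudo-similarity matrix `S` supervises pairwise semantics. How `S` is mined from similarity statistics is abstracted away, and the weighting `alpha` is an assumption.
```python
import torch
import torch.nn.functional as F

def cimon_style_loss(h1, h2, S, alpha=1.0):
    # h1, h2: (n, k) hash outputs for two augmented views of the batch
    # S:      (n, n) refined pseudo-similarity in {-1, +1} (float)
    b1, b2 = torch.tanh(h1), torch.tanh(h2)
    consistency = F.mse_loss(b1, b2)                 # disturb-invariance
    sim = b1 @ b2.t() / h1.size(1)                   # (n, n) in [-1, 1]
    semantic = F.mse_loss(sim, S)                    # pairwise semantics
    return semantic + alpha * consistency
```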
- Deep Hashing with Hash-Consistent Large Margin Proxy Embeddings [65.36757931982469]
Image hash codes are produced by binarizing the embeddings of convolutional neural networks (CNNs) trained for either classification or retrieval.
The use of a fixed set of proxies (the weights of the CNN classification layer) is proposed to eliminate the ambiguity of this binarization.
The resulting hash-consistent large margin (HCLM) proxies are shown to encourage saturation of hashing units, thus guaranteeing a small binarization error.
arXiv Detail & Related papers (2020-07-27T23:47:43Z)
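A minimal sketch of the fixed-proxy idea above: classifier weights are frozen to well-separated binary vectors so embeddings are pulled toward sign-consistent targets. Using Hadamard rows to obtain the proxies and the `scale` value are our assumptions, not necessarily the paper's construction.
```python
import torch
import torch.nn.functional as F
from scipy.linalg import hadamard

def make_proxies(num_classes, code_len):
    # Rows of a Hadamard matrix are mutually orthogonal {-1, +1} vectors,
    # one cheap way to obtain well-separated binary proxies
    # (code_len must be a power of two and >= num_classes).
    H = torch.from_numpy(hadamard(code_len)).float()
    return H[:num_classes]                               # (c, k)

def hclm_style_loss(embeddings, labels, proxies, scale=8.0):
    # Fixed proxies act as the classification layer; pulling normalized
    # embeddings toward a binary proxy saturates the hashing units.
    logits = scale * F.normalize(embeddings, dim=1) @ F.normalize(proxies, dim=1).t()
    return F.cross_entropy(logits, labels)
```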
- Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art performance in terms of precision for document retrieval.
We propose a pairwise loss function with a discrete-latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to the state of the art.
arXiv Detail & Related papers (2020-05-21T06:11:33Z)
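A toy sketch of the entry above: a VAE-style hasher with Bernoulli latent bits plus a pairwise within-class/between-class term. We substitute a straight-through estimator for the paper's self-control gradient estimator, which is a simplification, and the architecture is an assumption.
```python
import torch
import torch.nn as nn

class BernoulliHasher(nn.Module):
    def __init__(self, dim, bits):
        super().__init__()
        self.enc = nn.Linear(dim, bits)
        self.dec = nn.Linear(bits, dim)

    def forward(self, x):
        p = torch.sigmoid(self.enc(x))           # per-bit probabilities
        b = torch.bernoulli(p)                   # sampled binary code
        b = b + p - p.detach()                   # straight-through gradient
        return self.dec(b), b                    # reconstruction, code

def pairwise_loss(b, same_class):
    # Reward within-class similarity, punish between-class similarity.
    # b: (n, bits) in {0, 1}; same_class: (n, n) float in {0, 1}
    agree = (2 * b - 1) @ (2 * b - 1).t() / b.size(1)    # in [-1, 1]
    return ((agree - (2 * same_class - 1)) ** 2).mean()
```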
- Learning to hash with semantic similarity metrics and empirical KL divergence [3.04585143845864]
Learning to hash is an efficient paradigm for exact and approximate nearest neighbor search from massive databases.
Binary hash codes are typically extracted from an image by rounding output features from a CNN trained on a supervised binary similar/dissimilar task.
This raises two issues: (i) binary supervision discards the richer structure of semantic similarity between classes, and (ii) rounding loses information.
We overcome (i) via a novel loss function encouraging the relative hash code distances of learned features to match those derived from their targets.
We address (ii) via a differentiable estimate of the KL divergence between network outputs and a binary target distribution, resulting in minimal information loss when the features are rounded to binary.
arXiv Detail & Related papers (2020-05-11T08:20:26Z)
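The two ingredients above can be sketched as two small losses. The rescaling by the mean distance and the per-bit activation-rate form of the KL term are our assumptions; the paper uses an empirical differentiable KL estimate whose exact form may differ.
```python
import torch

def distance_matching_loss(codes, target_dist):
    # (i) Encourage relative code distances to match distances derived
    #     from the semantic targets (both rescaled to comparable units).
    d = torch.cdist(codes, codes)
    d = d / d.mean().clamp(min=1e-8)
    t = target_dist / target_dist.mean().clamp(min=1e-8)
    return ((d - t) ** 2).mean()

def binary_kl_loss(codes):
    # (ii) Push each bit's empirical activation rate toward the ideal
    #      Bernoulli(0.5), so rounding to binary loses little information.
    p = torch.sigmoid(codes).mean(dim=0).clamp(1e-6, 1 - 1e-6)
    return (p * torch.log(2 * p) + (1 - p) * torch.log(2 * (1 - p))).sum()
```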
- Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods perform poorly in retrieval with extremely short hash codes.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In RSLH, mutual reconstruction between the hash representation and the semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z)
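The mutual-reconstruction idea above maps directly onto a small loss: codes should reconstruct labels and labels should reconstruct codes through a shared projection. The single shared matrix `W` and the squared-error form are our assumptions.
```python
import torch

def mutual_reconstruction_loss(B, Y, W):
    # B: (n, k) relaxed hash codes; Y: (n, c) label matrix
    # W: (k, c) shared projection between code space and label space
    code_to_label = ((B @ W - Y) ** 2).mean()      # codes explain labels
    label_to_code = ((Y @ W.t() - B) ** 2).mean()  # labels explain codes
    return code_to_label + label_to_code
```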
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high-quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
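A minimal sketch of jointly learning continuous and discrete codes on top of any graph encoder, as the entry above describes: the sign of the embedding gives the code, a straight-through estimator keeps training end-to-end, and a penalty keeps the two representations close. The head design and the STE are our assumptions, not the paper's exact framework.
```python
import torch
import torch.nn as nn

class JointCodeHead(nn.Module):
    def __init__(self, dim, bits):
        super().__init__()
        self.proj = nn.Linear(dim, bits)

    def forward(self, node_emb):
        # node_emb: (n, dim) output of any graph encoder (e.g., a GNN)
        z = torch.tanh(self.proj(node_emb))              # continuous code
        b = torch.sign(z).detach() + z - z.detach()      # discrete (STE)
        gap = ((z - torch.sign(z).detach()) ** 2).mean() # quantization gap
        return z, b, gap
```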
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.