Bit-mask Robust Contrastive Knowledge Distillation for Unsupervised
Semantic Hashing
- URL: http://arxiv.org/abs/2403.06071v1
- Date: Sun, 10 Mar 2024 03:33:59 GMT
- Title: Bit-mask Robust Contrastive Knowledge Distillation for Unsupervised
Semantic Hashing
- Authors: Liyang He, Zhenya Huang, Jiayu Liu, Enhong Chen, Fei Wang, Jing Sha,
Shijin Wang
- Abstract summary: We propose an innovative Bit-mask Robust Contrastive knowledge Distillation (BRCD) method for semantic hashing.
BRCD is specifically devised for the distillation of semantic hashing models.
- Score: 71.47723696190184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised semantic hashing has emerged as an indispensable technique for
fast image search, which aims to convert images into binary hash codes without
relying on labels. Recent advancements in the field demonstrate that employing
large-scale backbones (e.g., ViT) in unsupervised semantic hashing models can
yield substantial improvements. However, the inference delay has become
increasingly difficult to overlook. Knowledge distillation provides a means for
practical model compression to alleviate this delay. Nevertheless, the
prevailing knowledge distillation approaches are not explicitly designed for
semantic hashing. They ignore the unique search paradigm of semantic hashing,
the inherent necessities of the distillation process, and the property of hash
codes. In this paper, we propose an innovative Bit-mask Robust Contrastive
knowledge Distillation (BRCD) method, specifically devised for the distillation
of semantic hashing models. To ensure the effectiveness of two kinds of search
paradigms in the context of semantic hashing, BRCD first aligns the semantic
spaces between the teacher and student models through a contrastive knowledge
distillation objective. Additionally, to eliminate noisy augmentations and
ensure robust optimization, a cluster-based method within the knowledge
distillation process is introduced. Furthermore, through a bit-level analysis,
we uncover the presence of redundant bits resulting from the bit independence
property. To mitigate these effects, we introduce a bit mask mechanism in our
knowledge distillation objective. Finally, extensive experiments not only
showcase the noteworthy performance of our BRCD method in comparison to other
knowledge distillation methods but also substantiate the generality of our
methods across diverse semantic hashing models and backbones. The code for BRCD
is available at https://github.com/hly1998/BRCD.
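The sketch below illustrates the two ideas the abstract names, a contrastive distillation objective aligning teacher and student hash codes plus a bit mask over the distilled bits, as a minimal PyTorch example. It is a hedged reconstruction from the abstract alone: the encoder outputs, the random bit-selection rule, and all hyperparameters are assumptions, and the authoritative implementation is the linked repository.

```python
# Hedged sketch: contrastive KD between teacher and student hash codes with a
# bit mask. All names and the random masking rule are illustrative assumptions;
# see https://github.com/hly1998/BRCD for the actual BRCD implementation.
import torch
import torch.nn.functional as F

def bitmask_contrastive_kd_loss(student_codes, teacher_codes, mask_ratio=0.1, tau=0.3):
    """student_codes, teacher_codes: (batch, n_bits) tanh-relaxed hash outputs.
    mask_ratio: fraction of bits dropped from the objective, standing in for the
    paper's bit-mask mechanism against redundant bits (here chosen at random;
    the paper derives the masked bits from a bit-level analysis)."""
    n_bits = student_codes.size(1)
    keep = torch.rand(n_bits, device=student_codes.device) >= mask_ratio
    s = F.normalize(student_codes[:, keep], dim=1)
    t = F.normalize(teacher_codes[:, keep], dim=1)
    logits = s @ t.T / tau                     # student-to-teacher similarities
    labels = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, labels)     # InfoNCE: i-th student matches i-th teacher

# Toy usage with random stand-ins for encoder outputs:
student = torch.tanh(torch.randn(8, 64, requires_grad=True))
teacher = torch.tanh(torch.randn(8, 64))
bitmask_contrastive_kd_loss(student, teacher).backward()
```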
Related papers
- Semantic-Aware Adversarial Training for Reliable Deep Hashing Retrieval [26.17466361744519]
Adversarial examples pose a security threat to deep hashing models.
Adversarial examples are fabricated by maximizing the Hamming distance between the hash codes of adversarial samples and mainstay features.
For the first time, we formulate adversarial training for deep hashing as a unified minimax structure.
arXiv Detail & Related papers (2023-10-23T07:21:40Z)
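A minimal, hedged sketch of the attack idea summarized above: a PGD-style loop that maximizes a differentiable surrogate of the Hamming distance between an image's hash code and a fixed target code, the latter standing in for the paper's mainstay features. The model, budget, and step size are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: PGD-style attack that maximizes a surrogate of the Hamming
# distance between an image's hash code and a target code ("mainstay" stand-in).
import torch

def hamming_attack(model, image, target_code, eps=8/255, alpha=2/255, steps=10):
    """For codes b1, b2 in {-1, +1}^k, Hamming distance = (k - b1.b2) / 2, so
    maximizing the distance is equivalent to minimizing the inner product."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        code = torch.tanh(model(adv))          # continuous relaxation of the hash bits
        loss = -(code * target_code).sum()     # ascend => smaller dot => larger distance
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()
            adv = image + (adv - image).clamp(-eps, eps)  # stay in the eps-ball
            adv = adv.clamp(0, 1)                         # keep a valid image
    return adv.detach()

# Toy usage: a linear "hashing network" over flattened 8x8 inputs.
model = torch.nn.Linear(64, 32)
image = torch.rand(1, 64)
target = torch.randint(0, 2, (1, 32)).float() * 2 - 1
adv = hamming_attack(model, image, target)
```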
- Weighted Contrastive Hashing [11.14153532458873]
Unsupervised hashing has been hampered by insufficient data-similarity mining based on global-only image representations.
We introduce a novel mutual attention module to alleviate the problem of information asymmetry in network features caused by the missing image structure.
The aggregated weighted similarities, which reflect the deep image relations, are distilled to facilitate hash code learning with a distillation loss.
arXiv Detail & Related papers (2022-09-28T13:47:33Z)
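A hedged sketch of the distillation step in this summary: given a matrix of mined weighted similarities (here a supplied stand-in for the paper's mutual-attention-based mining), relaxed hash codes are trained so that their pairwise similarities match it.

```python
# Hedged sketch: distill precomputed weighted similarities into hash codes.
import torch
import torch.nn.functional as F

def weighted_similarity_distillation(codes, w):
    """codes: (batch, n_bits) tanh-relaxed hash codes.
    w: (batch, batch) mined weighted similarities in [0, 1] (assumed given)."""
    c = F.normalize(codes, dim=1)
    code_sim = (c @ c.T + 1) / 2       # map cosine similarity to [0, 1]
    return F.mse_loss(code_sim, w)     # pull code similarities toward the mined ones

codes = torch.tanh(torch.randn(16, 32, requires_grad=True))
w = torch.rand(16, 16)
w = (w + w.T) / 2                      # symmetric stand-in similarity matrix
weighted_similarity_distillation(codes, w).backward()
```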
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
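A hedged sketch of the contrastive-consistency half of CIMON: codes of two augmented views of the same image are pulled together and pushed away from those of other images, yielding disturb-invariant yet discriminative codes. The semantic-consistency term built on the refined guidance is omitted, and all names are illustrative.

```python
# Hedged sketch: contrastive consistency between two augmented views' codes.
import torch
import torch.nn.functional as F

def contrastive_consistency_loss(codes_v1, codes_v2, tau=0.5):
    """codes_v1, codes_v2: (batch, n_bits) relaxed codes for two views of one batch."""
    z1 = F.normalize(codes_v1, dim=1)
    z2 = F.normalize(codes_v2, dim=1)
    logits = z1 @ z2.T / tau
    labels = torch.arange(z1.size(0))          # the two views of image i must match
    return F.cross_entropy(logits, labels)

v1 = torch.tanh(torch.randn(8, 32, requires_grad=True))
v2 = torch.tanh(torch.randn(8, 32))
contrastive_consistency_loss(v1, v2).backward()
```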
- Deep Reinforcement Learning with Label Embedding Reward for Supervised Image Hashing [85.84690941656528]
We introduce a novel decision-making approach for deep supervised hashing.
We learn a deep Q-network with a novel label embedding reward defined by Bose-Chaudhuri-Hocquenghem codes.
Our approach outperforms state-of-the-art supervised hashing methods under various code lengths.
arXiv Detail & Related papers (2020-08-10T09:17:20Z)
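A hedged sketch of the reward design in this summary: each class label maps to a fixed binary codeword, and a predicted code is rewarded by its closeness to the label's codeword. Random codewords stand in for real Bose-Chaudhuri-Hocquenghem (BCH) codes here; a faithful version would generate them with an actual BCH encoder.

```python
# Hedged sketch: label-embedding reward with stand-in codewords (not real BCH).
import numpy as np

rng = np.random.default_rng(0)
N_CLASSES, N_BITS = 10, 32
codebook = rng.integers(0, 2, size=(N_CLASSES, N_BITS))  # placeholder codewords

def label_embedding_reward(pred_bits, label):
    """Reward = negative Hamming distance between the predicted code and the
    label's codeword, steering the Q-network toward the class embedding."""
    return -int(np.sum(pred_bits != codebook[label]))

print(label_embedding_reward(rng.integers(0, 2, size=N_BITS), label=3))
```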
- Deep Hashing with Hash-Consistent Large Margin Proxy Embeddings [65.36757931982469]
Image hash codes are produced by binarizing embeddings of convolutional neural networks (CNNs) trained for either classification or retrieval.
The use of a fixed set of proxies (weights of the CNN classification layer) is proposed to eliminate this ambiguity.
The resulting hash-consistent large margin (HCLM) proxies are shown to encourage saturation of hashing units, thus guaranteeing a small binarization error.
arXiv Detail & Related papers (2020-07-27T23:47:43Z)
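A hedged sketch of the proxy idea in this summary: classifying against a fixed set of binary proxies with a margin drives the tanh hashing units toward saturation, keeping binarization error small. The random ±1 proxies and the additive margin are illustrative simplifications of the paper's construction.

```python
# Hedged sketch: fixed binary proxies with a large-margin classification loss.
import torch
import torch.nn.functional as F

N_CLASSES, N_BITS = 10, 48
proxies = torch.randint(0, 2, (N_CLASSES, N_BITS)).float() * 2 - 1  # fixed, not learned

def hclm_style_loss(embeddings, labels, margin=8.0):
    """embeddings: (batch, N_BITS) pre-binarization outputs; labels: (batch,) class ids."""
    codes = torch.tanh(embeddings)              # saturating units approximate +-1 bits
    logits = codes @ proxies.T                  # agreement with each class proxy
    logits = logits - margin * F.one_hot(labels, N_CLASSES).float()  # additive margin
    return F.cross_entropy(logits, labels)

emb = torch.randn(4, N_BITS, requires_grad=True)
labels = torch.randint(0, N_CLASSES, (4,))
hclm_style_loss(emb, labels).backward()
```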
- Dual-level Semantic Transfer Deep Hashing for Efficient Social Image Retrieval [35.78137004253608]
Social networks store and disseminate a tremendous amount of user-shared images.
Deep hashing is an efficient indexing technique to support large-scale social image retrieval.
Existing methods suffer from severe semantic shortage when optimizing a large number of deep neural network parameters.
We propose a Dual-level Semantic Transfer Deep Hashing (DSTDH) method to alleviate this problem.
arXiv Detail & Related papers (2020-06-10T01:03:09Z)
- Procrustean Orthogonal Sparse Hashing [3.302605292858623]
We show that insect olfaction is structurally and functionally analogous to sparse hashing.
We present a novel method, Procrustean Orthogonal Sparse Hashing (POSH), that unifies these findings.
We propose two new methods, Binary OSL and SphericalHash, to address these deficiencies.
arXiv Detail & Related papers (2020-06-08T18:09:33Z)
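A hedged sketch of the insect-olfaction analogy this summary builds on (the fly-inspired scheme of sparse random expansion followed by winner-take-all), not of POSH itself, which additionally learns an orthogonal transform via a Procrustes step.

```python
# Hedged sketch: fly-inspired sparse hashing (expand, then winner-take-all).
import numpy as np

rng = np.random.default_rng(0)
D, N_EXPANDED, K = 64, 256, 16
proj = (rng.random((N_EXPANDED, D)) < 0.1).astype(float)  # fixed sparse random projection

def fly_sparse_hash(x):
    """x: (D,) feature vector. Returns an N_EXPANDED-dim code with exactly K ones."""
    activations = proj @ x
    code = np.zeros(N_EXPANDED, dtype=np.uint8)
    code[np.argsort(activations)[-K:]] = 1     # winner-take-all: keep the top-K units
    return code

print(fly_sparse_hash(rng.random(D)).sum())    # always K active bits
```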
- Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art precision for document retrieval.
We propose a pairwise loss function with discrete latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to the state of the art.
arXiv Detail & Related papers (2020-05-21T06:11:33Z)
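A hedged sketch of the pairwise objective on Bernoulli latent codes. A straight-through estimator is substituted for the paper's self-control gradient estimator for brevity, and the encoder outputs and pair labels are stand-ins.

```python
# Hedged sketch: pairwise loss over sampled Bernoulli hash codes (PyTorch).
import torch

def sample_bernoulli_st(probs):
    """Sample hard binary codes but route gradients through the probabilities."""
    hard = torch.bernoulli(probs)
    return hard + probs - probs.detach()       # straight-through trick

def pairwise_hash_loss(probs_a, probs_b, same_class):
    """probs_*: (batch, n_bits) Bernoulli parameters; same_class: (batch,) in {0, 1}.
    Rewards within-class similarity and between-class dissimilarity."""
    za = sample_bernoulli_st(probs_a)
    zb = sample_bernoulli_st(probs_b)
    ham = (za - zb).abs().mean(dim=1)          # normalized Hamming distance
    return (same_class * ham + (1 - same_class) * (1 - ham)).mean()

pa = torch.sigmoid(torch.randn(8, 16, requires_grad=True))
pb = torch.sigmoid(torch.randn(8, 16))
y = torch.randint(0, 2, (8,)).float()
pairwise_hash_loss(pa, pb, y).backward()
```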
- Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods perform poorly when retrieving with extremely short hash codes.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z)
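A hedged sketch of the mutual-reconstruction idea in this summary: a short hash code should be recoverable from its semantic label and vice versa. The two linear maps are illustrative stand-ins for the paper's formulation.

```python
# Hedged sketch: mutual reconstruction between short hash codes and labels.
import torch
import torch.nn.functional as F

N_BITS, N_LABELS = 12, 10                       # deliberately short code length
code_to_label = torch.nn.Linear(N_BITS, N_LABELS)
label_to_code = torch.nn.Linear(N_LABELS, N_BITS)

def mutual_reconstruction_loss(codes, labels_onehot):
    """codes: (batch, N_BITS) relaxed hash codes; labels_onehot: (batch, N_LABELS)."""
    label_rec = code_to_label(codes)                      # code  -> label
    code_rec = torch.tanh(label_to_code(labels_onehot))   # label -> code
    return (F.cross_entropy(label_rec, labels_onehot.argmax(dim=1))
            + F.mse_loss(code_rec, codes))

codes = torch.tanh(torch.randn(4, N_BITS, requires_grad=True))
labels = F.one_hot(torch.randint(0, N_LABELS, (4,)), N_LABELS).float()
mutual_reconstruction_loss(codes, labels).backward()
```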
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the automatically generated content and is not responsible for any consequences arising from its use.