Deep Hashing with Hash-Consistent Large Margin Proxy Embeddings
- URL: http://arxiv.org/abs/2007.13912v1
- Date: Mon, 27 Jul 2020 23:47:43 GMT
- Title: Deep Hashing with Hash-Consistent Large Margin Proxy Embeddings
- Authors: Pedro Morgado, Yunsheng Li, Jose Costa Pereira, Mohammad Saberian and
Nuno Vasconcelos
- Abstract summary: Image hash codes are produced by binarizing embeddings of convolutional neural networks (CNN) trained for either classification or retrieval.
The use of a fixed set of proxies (the weights of the CNN classification layer) is proposed to eliminate the rotational ambiguity that makes these embeddings hard to binarize.
The resulting hash-consistent large margin (HCLM) proxies are shown to encourage saturation of hashing units, thus guaranteeing a small binarization error.
- Score: 65.36757931982469
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image hash codes are produced by binarizing the embeddings of convolutional
neural networks (CNN) trained for either classification or retrieval. While
proxy embeddings achieve good performance on both tasks, they are non-trivial
to binarize, due to a rotational ambiguity that encourages non-binary
embeddings. The use of a fixed set of proxies (weights of the CNN
classification layer) is proposed to eliminate this ambiguity, and a procedure
to design proxy sets that are nearly optimal for both classification and
hashing is introduced. The resulting hash-consistent large margin (HCLM)
proxies are shown to encourage saturation of hashing units, thus guaranteeing a
small binarization error, while producing highly discriminative hash codes. A
semantic extension (sHCLM), aimed at improving hashing performance in a transfer
scenario, is also proposed. Extensive experiments show that sHCLM embeddings
achieve significant improvements over state-of-the-art hashing procedures on
several small and large datasets, both within and beyond the set of training
classes.
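The recipe the abstract describes (a classification layer whose proxy weights are fixed binary vectors, followed by sign binarization of the embedding at retrieval time) can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the random sign proxies below stand in for the paper's carefully designed HCLM proxy sets, and all function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def make_fixed_proxies(num_classes: int, dim: int, seed: int = 0) -> torch.Tensor:
    # Stand-in for the HCLM proxy design: a fixed set of binary (+1/-1) class
    # proxies that is never updated during training. The paper constructs
    # near-optimal proxy sets; random sign vectors are used here purely for
    # illustration.
    g = torch.Generator().manual_seed(seed)
    signs = torch.randint(0, 2, (num_classes, dim), generator=g).float() * 2 - 1
    return signs / dim ** 0.5  # unit-norm rows

def proxy_classification_loss(embeddings, labels, proxies, scale=16.0):
    # Cross-entropy against the *fixed* proxies: each embedding is pulled
    # toward its class proxy, whose entries are all +/-1, encouraging the
    # hashing units to saturate toward binary values.
    logits = scale * F.normalize(embeddings, dim=1) @ proxies.t()
    return F.cross_entropy(logits, labels)

def to_hash_code(embeddings):
    # At retrieval time the hash code is simply the sign of the embedding;
    # saturated units make this binarization nearly lossless.
    return (embeddings > 0).to(torch.uint8)

def hamming_distance(query_code, db_codes):
    # Hamming distance between one query code and a database of codes.
    return (query_code.unsqueeze(0) ^ db_codes).sum(dim=1)
```

Because every proxy coordinate is +/-1, minimizing the loss pushes each embedding unit toward a saturated value, which is the mechanism behind the abstract's claim of a small binarization error.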
Related papers
- Semantic-Aware Adversarial Training for Reliable Deep Hashing Retrieval [26.17466361744519]
Adversarial examples pose a security threat to deep hashing models.
Adversarial examples are fabricated by maximizing the Hamming distance between the hash codes of adversarial samples and mainstay features (see the attack sketch below).
For the first time, we formulate the adversarial training of deep hashing as a unified minimax structure.
arXiv Detail & Related papers (2023-10-23T07:21:40Z)
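A hedged sketch of the attack idea in the entry above: perturb the input so that the (relaxed) hash code of the adversarial sample moves as far as possible, in Hamming distance, from a reference code. The model interface, the tanh relaxation, and the PGD-style loop are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def hamming_attack(model, x, ref_code, eps=8/255, alpha=2/255, steps=10):
    # `model` maps images to real-valued hashing activations; `ref_code` is
    # the reference ("mainstay") code in {-1, +1}. Minimizing the agreement
    # between the relaxed code and `ref_code` maximizes their (relaxed)
    # Hamming distance.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        relaxed = torch.tanh(model(x_adv))        # differentiable code in (-1, 1)
        agreement = (relaxed * ref_code).sum()    # high agreement = small Hamming distance
        grad = torch.autograd.grad(agreement, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()    # step down the agreement
        x_adv = x + (x_adv - x).clamp(-eps, eps)        # stay in the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0).detach()          # keep a valid image
    return x_adv
```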
- Cascading Hierarchical Networks with Multi-task Balanced Loss for Fine-grained hashing [1.6244541005112747]
Fine-grained hashing is more challenging than traditional hashing problems.
We propose a cascaded network to learn compact and highly semantic hash codes.
We also propose a novel approach to jointly balance the losses of multi-task learning.
arXiv Detail & Related papers (2023-03-20T17:08:48Z)
- Deep Asymmetric Hashing with Dual Semantic Regression and Class Structure Quantization [9.539842235137376]
We propose a dual semantic asymmetric hashing (DSAH) method, which generates discriminative hash codes under three-fold constraints.
With these three main components, high-quality hash codes can be generated by the network.
arXiv Detail & Related papers (2021-10-24T16:14:36Z)
- FDDH: Fast Discriminative Discrete Hashing for Large-Scale Cross-Modal Retrieval [41.125141897096874]
Cross-modal hashing is favored for its effectiveness and efficiency.
Most existing methods do not sufficiently exploit the discriminative power of semantic information when learning the hash codes.
We propose a Fast Discriminative Discrete Hashing (FDDH) approach for large-scale cross-modal retrieval.
arXiv Detail & Related papers (2021-05-15T03:53:48Z)
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
- Deep Reinforcement Learning with Label Embedding Reward for Supervised Image Hashing [85.84690941656528]
We introduce a novel decision-making approach for deep supervised hashing.
We learn a deep Q-network with a novel label embedding reward defined by Bose-Chaudhuri-Hocquenghem codes.
Our approach outperforms state-of-the-art supervised hashing methods under various code lengths.
arXiv Detail & Related papers (2020-08-10T09:17:20Z)
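As a hedged illustration of the reward in the entry above: each label is mapped to a fixed binary codeword (in the paper, a Bose-Chaudhuri-Hocquenghem codeword, whose large pairwise Hamming distance keeps class codes well separated), and the agent is rewarded for bit agreement. The codebook is assumed given; the function name and shapes are illustrative.

```python
import torch

def label_embedding_reward(pred_bits, labels, codebook):
    # `pred_bits`: (batch, code_len) 0/1 bits produced by the Q-network's actions.
    # `codebook`:  (num_classes, code_len) 0/1 tensor of precomputed BCH
    # codewords; their construction is omitted here.
    target = codebook[labels]                         # each sample's class codeword
    return (pred_bits == target).float().mean(dim=1)  # per-sample reward in [0, 1]
```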
- Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art performance in terms of precision for document retrieval.
We propose a pairwise loss function with discrete latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to the state of the art (see the pairwise-loss sketch below).
arXiv Detail & Related papers (2020-05-21T06:11:33Z)
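A hedged sketch of a pairwise loss in the spirit of the entry above, operating on relaxed codes in (-1, 1) (e.g., expectations of Bernoulli latent bits mapped to +/-1); the margin value and function name are assumptions rather than the paper's VAE objective.

```python
import torch

def pairwise_hash_loss(relaxed_codes, labels, margin=2.0):
    # Reward within-class similarity and between-class dissimilarity: pull
    # same-class code pairs together and push different-class pairs at least
    # `margin` apart. For +/-1 codes, squared Euclidean distance is 4x the
    # Hamming distance, so this is a relaxed Hamming-space objective.
    dist = torch.cdist(relaxed_codes, relaxed_codes) ** 2
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    pull = same * dist                                    # same class: shrink distance
    push = (1.0 - same) * (margin - dist).clamp(min=0.0)  # different class: enforce margin
    return (pull + push).mean()
```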
- Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods perform poorly when retrieval uses an extremely short hash code.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
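The efficiency argument in the last entry (scoring candidates by dot products in a continuous embedding space is costly, while ranking discrete codes reduces to XOR and popcount) can be made concrete with a small sketch. Function names and the 64-bit code length are illustrative assumptions.

```python
import numpy as np

def pack_codes(bits: np.ndarray) -> np.ndarray:
    # Pack an (n, 64) array of 0/1 bits into one uint64 word per item,
    # so an entire code is compared with a single XOR.
    return np.packbits(bits.astype(np.uint8), axis=1).view(np.uint64).ravel()

def hamming_topk(query: np.uint64, db: np.ndarray, k: int) -> np.ndarray:
    # Rank database items by Hamming distance to the query code: XOR the
    # packed words, then count the differing bits per item.
    diff = (query ^ db).view(np.uint8).reshape(-1, 8)  # bytes of each XOR result
    dist = np.unpackbits(diff, axis=1).sum(axis=1)     # popcount per item
    return np.argsort(dist)[:k]                        # k nearest candidates
```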
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.