HHF: Hashing-guided Hinge Function for Deep Hashing Retrieval
- URL: http://arxiv.org/abs/2112.02225v1
- Date: Sat, 4 Dec 2021 03:16:42 GMT
- Title: HHF: Hashing-guided Hinge Function for Deep Hashing Retrieval
- Authors: Chengyin Xu, Zhengzhuo Xu, Zenghao Chai, Hongjia Li, Qiruyi Zuo,
Lingyu Yang and Chun Yuan
- Abstract summary: Latent codes extracted by a Deep Neural Network (DNN) inevitably lose semantic information during the binarization process.
The Hashing-guided Hinge Function (HHF) is proposed to avoid this conflict.
In detail, we carefully design a specific inflection point, which depends on the hash bit length and the number of categories, to balance metric learning and quantization learning.
- Score: 14.35219963508551
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep hashing has shown promising performance in large-scale image retrieval.
However, latent codes extracted by \textbf{D}eep \textbf{N}eural
\textbf{N}etwork (DNN) will inevitably lose semantic information during the
binarization process, which damages retrieval efficiency and makes retrieval
challenging. Although many existing approaches apply regularization to
alleviate quantization errors, we identify an incompatible conflict between
the metric and quantization losses. The metric loss penalizes inter-class
distances, pushing different classes apart without constraint. Worse still, it
tends to make latent codes deviate from the ideal binarization points,
generating severe ambiguity in the binarization process. Based on the minimum
distance of
the binary linear code, \textbf{H}ashing-guided \textbf{H}inge
\textbf{F}unction (HHF) is proposed to avoid such conflict. In detail, we
carefully design a specific inflection point, which depends on the hash bit
length and the number of categories, to balance metric learning and
quantization learning. Such a modification prevents the network from falling
into local optima of the metric loss in deep hashing. Extensive experiments on
CIFAR-10, CIFAR-100, ImageNet, and MS-COCO show that HHF consistently
outperforms existing techniques and can be robustly and flexibly transplanted
into other methods.
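To make the mechanism concrete, the sketch below shows a hinge-shaped quantization penalty with a designed inflection point, in PyTorch. The threshold formula, function names, and parameters are illustrative assumptions for exposition, not the authors' exact formulation.

```python
import torch

def inflection_point(n_bits: int, n_classes: int) -> float:
    # Hypothetical threshold in the spirit of the minimum distance of a
    # binary linear code: packing more classes into a fixed-length code
    # leaves less slack before binarization becomes ambiguous.
    # The paper's exact formula may differ.
    return max(0.0, 1.0 - n_classes / (2.0 * n_bits))

def hinge_quantization_loss(codes: torch.Tensor, threshold: float) -> torch.Tensor:
    # Bits whose magnitude already exceeds the inflection point incur no
    # penalty, so metric learning is free to arrange them; bits inside the
    # margin are pushed toward +/-1.
    return torch.clamp(threshold - codes.abs(), min=0.0).mean()

# Usage on a batch of 32 relaxed (tanh-activated) 64-bit codes:
codes = torch.tanh(torch.randn(32, 64))
loss = hinge_quantization_loss(codes, inflection_point(n_bits=64, n_classes=100))
```

The hinge shape is the point of the illustration: past the threshold the penalty vanishes, leaving room for metric learning while still bounding binarization ambiguity.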
Related papers
- Sparse-Inductive Generative Adversarial Hashing for Nearest Neighbor
Search [8.020530603813416]
We propose a novel unsupervised hashing method, termed Sparsity-Induced Generative Adversarial Hashing (SiGAH).
SiGAH encodes large-scale, high-dimensional features into binary codes through a generative adversarial training framework.
Experimental results on four benchmarks, i.e., Tiny100K, GIST1M, Deep1M, and MNIST, show that SiGAH outperforms state-of-the-art approaches.
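As a rough, hypothetical sketch of the generative adversarial idea (the module sizes, the straight-through binarization, and all names are assumptions; SiGAH's actual architecture differs in detail): a generator reconstructs features from binary codes while a discriminator distinguishes reconstructions from real features.

```python
import torch
import torch.nn as nn

feat_dim, n_bits = 960, 64                     # illustrative sizes
encoder = nn.Sequential(nn.Linear(feat_dim, n_bits), nn.Tanh())
generator = nn.Linear(n_bits, feat_dim)        # decodes codes back to features
discriminator = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())

x = torch.randn(8, feat_dim)                   # a batch of real features
h = encoder(x)
b = h + (torch.sign(h) - h).detach()           # straight-through binarization

x_fake = generator(b)
bce = nn.BCELoss()
d_loss = bce(discriminator(x), torch.ones(8, 1)) \
       + bce(discriminator(x_fake.detach()), torch.zeros(8, 1))
g_loss = bce(discriminator(x_fake), torch.ones(8, 1))  # fool the discriminator
```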
arXiv Detail & Related papers (2023-06-12T08:07:23Z)
- Cascading Hierarchical Networks with Multi-task Balanced Loss for Fine-grained hashing [1.6244541005112747]
Fine-grained hashing is more challenging than traditional hashing problems.
We propose a cascaded network to learn compact and highly semantic hash codes.
We also propose a novel approach to coordinately balance the loss of multi-task learning.
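This summary does not spell out the balancing scheme, so the sketch below uses a common stand-in, homoscedastic-uncertainty weighting of task losses, purely as an illustration of balanced multi-task training; all names are hypothetical.

```python
import torch
import torch.nn as nn

class BalancedMultiTaskLoss(nn.Module):
    """Weights each task loss by a learned per-task uncertainty
    (a common balancing scheme; the paper's own method may differ)."""
    def __init__(self, n_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))  # one log-variance per task

    def forward(self, losses):
        total = torch.zeros(())
        for loss, log_var in zip(losses, self.log_vars):
            # High-uncertainty tasks are down-weighted; the log-var term
            # keeps the weights from collapsing to zero.
            total = total + torch.exp(-log_var) * loss + log_var
        return total

# Usage with, e.g., a classification loss and a hashing quantization loss:
balancer = BalancedMultiTaskLoss(n_tasks=2)
total = balancer([torch.tensor(0.7), torch.tensor(1.3)])
```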
arXiv Detail & Related papers (2023-03-20T17:08:48Z)
- FDDH: Fast Discriminative Discrete Hashing for Large-Scale Cross-Modal Retrieval [41.125141897096874]
Cross-modal hashing is favored for its effectiveness and efficiency.
Most existing methods do not sufficiently exploit the discriminative power of semantic information when learning the hash codes.
We propose the Fast Discriminative Discrete Hashing (FDDH) approach for large-scale cross-modal retrieval.
arXiv Detail & Related papers (2021-05-15T03:53:48Z)
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
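A minimal, hypothetical sketch of the contrastive-consistency idea: the relaxed hash codes of two augmented views of the same image are pulled together, which is one way to obtain disturb-invariant codes (the loss form and names are assumptions).

```python
import torch
import torch.nn.functional as F

def consistency_loss(codes_a: torch.Tensor, codes_b: torch.Tensor) -> torch.Tensor:
    # Encourage the relaxed hash codes of two augmentations of the same
    # image to agree, yielding disturb-invariant codes.
    return 1.0 - F.cosine_similarity(codes_a, codes_b, dim=1).mean()

view_a = torch.tanh(torch.randn(16, 32))   # view 1 of a batch, 32-bit codes
view_b = torch.tanh(torch.randn(16, 32))   # view 2 (different augmentation)
loss = consistency_loss(view_a, view_b)
```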
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
- Deep Momentum Uncertainty Hashing [65.27971340060687]
We propose a novel Deep Momentum Uncertainty Hashing (DMUH) method.
It explicitly estimates the uncertainty during training and leverages the uncertainty information to guide the approximation process.
Our method achieves the best performance on all of the datasets and surpasses existing state-of-the-art methods by a large margin.
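One plausible reading of uncertainty-guided approximation, sketched hypothetically: a learned per-sample uncertainty down-weights the quantization error of unreliable samples (the weighting form and all names are assumptions, not DMUH's actual estimator).

```python
import torch

def uncertainty_weighted_quantization(codes: torch.Tensor,
                                      log_sigma: torch.Tensor) -> torch.Tensor:
    # Per-sample quantization error, down-weighted where the predicted
    # uncertainty is high; the log-sigma term keeps uncertainty from
    # growing without bound.
    per_sample = (codes - torch.sign(codes)).pow(2).mean(dim=1)
    return (torch.exp(-log_sigma) * per_sample + log_sigma).mean()

codes = torch.tanh(torch.randn(16, 48))    # relaxed 48-bit codes
log_sigma = torch.randn(16) * 0.1          # stand-in for a learned uncertainty head
loss = uncertainty_weighted_quantization(codes, log_sigma)
```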
arXiv Detail & Related papers (2020-09-17T01:57:45Z)
- Deep Hashing with Hash-Consistent Large Margin Proxy Embeddings [65.36757931982469]
Image hash codes are produced by binarizing embeddings of convolutional neural networks (CNNs) trained for either classification or retrieval.
The use of a fixed set of proxies (weights of the CNN classification layer) is proposed to eliminate this ambiguity.
The resulting hash-consistent large margin (HCLM) proxies are shown to encourage saturation of hashing units, thus guaranteeing a small binarization error.
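A hypothetical sketch of the proxy idea: fixing binary-valued proxies as the classifier weights means that pulling an embedding toward its class proxy also pushes every hashing unit toward +/-1, i.e. toward saturation (the sizes, temperature, and loss form are assumptions, not the HCLM construction itself).

```python
import torch
import torch.nn.functional as F

n_classes, n_bits = 10, 32
# Fixed binary proxies, one per class (a stand-in for the paper's
# hash-consistent large-margin construction).
proxies = torch.sign(torch.randn(n_classes, n_bits))

def proxy_loss(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Cosine similarity of each embedding to every proxy; cross-entropy
    # then pulls each embedding toward its own class proxy.
    logits = F.normalize(embeddings, dim=1) @ F.normalize(proxies, dim=1).t()
    return F.cross_entropy(logits / 0.1, labels)   # 0.1 = temperature

emb = torch.randn(16, n_bits, requires_grad=True)
loss = proxy_loss(emb, torch.randint(0, n_classes, (16,)))
```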
arXiv Detail & Related papers (2020-07-27T23:47:43Z)
- Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods have poor performance in retrieval using an extremely short-length hash code.
In this study, we propose a novel Reinforcing Short-Length Hashing (RSLH) method.
In RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
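A minimal, hypothetical sketch of mutual reconstruction: one map predicts labels from hash representations and another predicts the representations back from labels, and both reconstruction errors are penalized (the shapes and squared-error form are assumptions).

```python
import torch
import torch.nn as nn

n_bits, n_classes = 16, 10                  # a short-length code, as in RSLH's setting
code_to_label = nn.Linear(n_bits, n_classes)
label_to_code = nn.Linear(n_classes, n_bits)

codes = torch.tanh(torch.randn(8, n_bits))  # relaxed hash representations
labels = torch.eye(n_classes)[torch.randint(0, n_classes, (8,))]  # one-hot labels

# Mutual reconstruction: codes must carry the labels, and labels must
# be able to regenerate the codes, preserving semantic information.
mse = nn.MSELoss()
loss = mse(code_to_label(codes), labels) + mse(label_to_code(labels), codes)
```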
arXiv Detail & Related papers (2020-04-24T02:23:52Z)
- Targeted Attack for Deep Hashing based Retrieval [57.582221494035856]
We propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.
We first formulate the targeted attack as a point-to-set optimization, which minimizes the average distance between the hash code of an adversarial example and those of a set of objects with the target label.
To balance the performance and perceptibility, we propose to minimize the Hamming distance between the hash code of the adversarial example and the anchor code under the $\ell_\infty$ restriction on the perturbation.
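The point-to-set objective admits a simple center: the component-wise majority vote over the target set's codes minimizes the average Hamming distance to that set. A small sketch of that vote (variable names are illustrative):

```python
import torch

def anchor_code(target_codes: torch.Tensor) -> torch.Tensor:
    # target_codes: (m, K) matrix of {-1, +1} hash codes sharing the
    # target label. The component-wise sign of the column sums is the
    # code minimizing the average Hamming distance to the whole set
    # (ties, where a column sums to 0, can be broken arbitrarily).
    return torch.sign(target_codes.sum(dim=0))

targets = torch.sign(torch.randn(50, 64))  # 50 codes of 64 bits each
anchor = anchor_code(targets)              # the attack then steers the adversarial
                                           # example's code toward this anchor,
                                           # under an l_inf perturbation bound
```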
arXiv Detail & Related papers (2020-04-15T08:36:58Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
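One standard device for jointly learning continuous and discrete codes is the straight-through estimator, sketched below purely as an illustration; the framework's actual mechanism may differ.

```python
import torch

def straight_through_binarize(h: torch.Tensor) -> torch.Tensor:
    # Forward pass emits discrete {-1, +1} codes for Hamming-space search;
    # the backward pass routes gradients to the continuous embedding h,
    # so continuous and discrete codes are trained jointly.
    return h + (torch.sign(h) - h).detach()

h = torch.randn(4, 24, requires_grad=True)  # continuous embeddings
b = straight_through_binarize(h)            # discrete codes, differentiable in h
b.sum().backward()                          # gradients flow back to h
```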
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.