Hard Example Guided Hashing for Image Retrieval
- URL: http://arxiv.org/abs/2112.13565v1
- Date: Mon, 27 Dec 2021 08:24:10 GMT
- Title: Hard Example Guided Hashing for Image Retrieval
- Authors: Hai Su, Meiyin Han, Junle Liang, Jun Liang, Songsen Yu
- Abstract summary: Two main factors limit the ability to learn hard examples: weak key-feature extraction and a shortage of hard examples.
In this paper, we present a novel end-to-end model that extracts key features from hard examples and produces hash codes with accurate semantic information.
Experimental results on CIFAR-10 and NUS-WIDE demonstrate that our model outperforms mainstream hashing-based image retrieval methods.
- Score: 3.606866431185676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compared with traditional hashing methods, deep hashing methods generate
hash codes with rich semantic information and greatly improve performance in the
image retrieval field. However, current deep hashing methods predict the
similarity of hard examples poorly. Two main factors limit the ability to learn
hard examples: weak key-feature extraction and a shortage of hard examples. In
this paper, we present a novel end-to-end model that extracts key features from
hard examples and produces hash codes with accurate semantic information. In
addition, we redesign a hard pair-wise loss function that assesses the degree of
hardness and updates the penalty weights of examples, which effectively
alleviates the shortage of hard examples. Experimental results on CIFAR-10 and
NUS-WIDE demonstrate that our model outperforms mainstream hashing-based image
retrieval methods.
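The abstract describes a pair-wise loss that scales the penalty of each pair by its "hard degree". The paper does not give the exact formula here, so the following is a minimal hypothetical sketch of that idea: a standard pairwise hashing loss (pull similar pairs together, push dissimilar pairs past a margin) whose per-pair weight grows with the residual loss, so harder pairs contribute more. The function name, the `margin`/`gamma` parameters, and the weighting rule are all assumptions for illustration, not the authors' actual loss.

```python
import numpy as np

def hard_pairwise_loss(codes, labels, margin=2.0, gamma=1.0):
    """Hypothetical sketch of a hardness-weighted pairwise hashing loss.

    codes:  (n, k) array of relaxed hash codes in [-1, 1]
    labels: (n,) integer class labels

    Pairs whose distance disagrees with their label relation incur a
    larger residual loss and are treated as "harder", so they receive a
    larger penalty weight (the shortage-of-hard-examples idea).
    """
    n, k = codes.shape
    total, count = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            # Squared Euclidean distance / 4 approximates Hamming
            # distance for codes near {-1, +1}.
            d = np.sum((codes[i] - codes[j]) ** 2) / 4.0
            if labels[i] == labels[j]:
                base = d                              # similar: pull close
            else:
                base = max(0.0, margin * k / 4.0 - d)  # dissimilar: push apart
            # Hardness weighting: larger residual -> larger penalty.
            weight = 1.0 + gamma * base / k
            total += weight * base
            count += 1
    return total / max(count, 1)
```

In a real training loop this would be written over a deep network's relaxed outputs with automatic differentiation; the sketch only shows how a hardness-dependent weight reshapes the pairwise objective.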
Related papers
- Cascading Hierarchical Networks with Multi-task Balanced Loss for Fine-grained hashing [1.6244541005112747]
Fine-grained hashing is more challenging than traditional hashing problems.
We propose a cascaded network to learn compact and highly semantic hash codes.
We also propose a novel approach to coordinate and balance the losses of multi-task learning.
arXiv Detail & Related papers (2023-03-20T17:08:48Z)
- Weighted Contrastive Hashing [11.14153532458873]
Unsupervised hash development has been hampered by insufficient data similarity mining based on global-only image representations.
We introduce a novel mutual attention module to alleviate the problem of information asymmetry in network features caused by the missing image structure.
The aggregated weighted similarities, which reflect the deep image relations, are distilled to facilitate hash code learning with a distillation loss.
arXiv Detail & Related papers (2022-09-28T13:47:33Z)
- A Simple Hash-Based Early Exiting Approach For Language Understanding and Generation [77.85086491395981]
Early exiting allows instances to exit at different layers according to the estimation of difficulty.
We propose a Hash-based Early Exiting approach (HashEE) that replaces the learn-to-exit modules with hash functions to assign each token to a fixed exiting layer.
Experimental results on classification, regression, and generation tasks demonstrate that HashEE can achieve higher performance with fewer FLOPs and less inference time.
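The core HashEE idea summarized above — replacing learned exit classifiers with a fixed hash function that maps each token to one exit layer — can be sketched in a few lines. This is a hypothetical illustration of the mechanism, not the paper's implementation: the hash function choice (MD5 here), the function name, and the layer count are assumptions.

```python
import hashlib

def exit_layer_for_token(token: str, num_layers: int = 12) -> int:
    """Sketch of hash-based early exiting in the spirit of HashEE:
    a fixed (non-learned) hash deterministically assigns each token
    to one exit layer, so no difficulty estimator runs at inference.
    """
    digest = hashlib.md5(token.encode("utf-8")).digest()
    # Map the first 4 digest bytes uniformly onto layers 1..num_layers.
    return int.from_bytes(digest[:4], "big") % num_layers + 1
```

Because the mapping is a pure function of the token string, every occurrence of the same token exits at the same layer, which is what makes the assignment free at inference time.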
arXiv Detail & Related papers (2022-03-03T12:02:05Z)
- Self-Distilled Hashing for Deep Image Retrieval [25.645550298697938]
In hash-based image retrieval systems, input transformed from the original usually generates different hash codes.
We propose a novel self-distilled hashing scheme to minimize the discrepancy while exploiting the potential of augmented data.
We also introduce hash proxy-based similarity learning and binary cross entropy-based quantization loss to provide fine quality hash codes.
arXiv Detail & Related papers (2021-12-16T12:01:50Z)
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
- Deep Reinforcement Learning with Label Embedding Reward for Supervised Image Hashing [85.84690941656528]
We introduce a novel decision-making approach for deep supervised hashing.
We learn a deep Q-network with a novel label embedding reward defined by Bose-Chaudhuri-Hocquenghem codes.
Our approach outperforms state-of-the-art supervised hashing methods under various code lengths.
arXiv Detail & Related papers (2020-08-10T09:17:20Z)
- Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art performance in terms of precision for document retrieval.
We propose a pairwise loss function with discrete latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to the state of the art.
arXiv Detail & Related papers (2020-05-21T06:11:33Z)
- Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods have poor performance in retrieval using an extremely short-length hash code.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In this proposed RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z)
- Targeted Attack for Deep Hashing based Retrieval [57.582221494035856]
We propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.
We first formulate the targeted attack as a point-to-set optimization, which minimizes the average distance between the hash code of an adversarial example and those of a set of objects with the target label.
To balance the performance and perceptibility, we propose to minimize the Hamming distance between the hash code of the adversarial example and the anchor code under the $\ell_\infty$ restriction on the perturbation.
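The point-to-set formulation above has a clean closed form that is worth making concrete: the code minimizing the average Hamming distance to a set of binary codes is the component-wise majority vote over the set (the "anchor code" DHTA attacks toward). The sketch below illustrates only that combinatorial fact; the function names and the ±1 code convention are assumptions for illustration, and the adversarial perturbation step itself is not shown.

```python
import numpy as np

def anchor_code(target_codes: np.ndarray) -> np.ndarray:
    """Component-wise majority vote over {-1, +1} hash codes of
    target-label objects. Per bit, the majority value minimizes the
    number of disagreements, so the vote minimizes the average
    Hamming distance to the whole set (the point-to-set objective)."""
    votes = target_codes.sum(axis=0)          # positive -> +1 majority
    return np.where(votes >= 0, 1, -1)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two {-1, +1} codes."""
    return int(np.sum(a != b))
```

An attacker would then perturb the input (subject to the $\ell_\infty$ budget) so that the network's hash code moves toward this anchor.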
arXiv Detail & Related papers (2020-04-15T08:36:58Z)
- Image Hashing by Minimizing Discrete Component-wise Wasserstein Distance [12.968141477410597]
Adversarial autoencoders are shown to be able to implicitly learn a robust, locality-preserving hash function that generates balanced and high-quality hash codes.
Existing adversarial hashing methods are too inefficient to be employed for large-scale image retrieval applications.
We propose a new adversarial-autoencoder hashing approach that has a much lower sample requirement and computational cost.
arXiv Detail & Related papers (2020-02-29T00:22:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.