Deep Momentum Uncertainty Hashing
- URL: http://arxiv.org/abs/2009.08012v3
- Date: Tue, 13 Jul 2021 07:25:50 GMT
- Title: Deep Momentum Uncertainty Hashing
- Authors: Chaoyou Fu, Guoli Wang, Xiang Wu, Qian Zhang, Ran He
- Abstract summary: We propose a novel Deep Momentum Uncertainty Hashing (DMUH) method.
It explicitly estimates the uncertainty during training and leverages the uncertainty information to guide the approximation process.
Our method achieves the best performance on all of the datasets and surpasses existing state-of-the-art methods by a large margin.
- Score: 65.27971340060687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combinatorial optimization (CO) has been a hot research topic because of its
theoretic and practical importance. As a classic CO problem, deep hashing aims
to find an optimal code for each data from finite discrete possibilities, while
the discrete nature brings a big challenge to the optimization process.
Previous methods usually mitigate this challenge by binary approximation,
substituting binary codes for real-values via activation functions or
regularizations. However, such approximation leads to uncertainty between
real-values and binary ones, degrading retrieval performance. In this paper, we
propose a novel Deep Momentum Uncertainty Hashing (DMUH). It explicitly
estimates the uncertainty during training and leverages the uncertainty
information to guide the approximation process. Specifically, we model
bit-level uncertainty via measuring the discrepancy between the output of a
hashing network and that of a momentum-updated network. The discrepancy of each
bit indicates the uncertainty of the hashing network to the approximate output
of that bit. Meanwhile, the mean discrepancy of all bits in a hashing code can
be regarded as image-level uncertainty. It embodies the uncertainty of the
hashing network to the corresponding input image. The hashing bit and image
with higher uncertainty are paid more attention during optimization. To the
best of our knowledge, this is the first work to study the uncertainty in
hashing bits. Extensive experiments are conducted on four datasets to verify
the superiority of our method, including CIFAR-10, NUS-WIDE, MS-COCO, and a
million-scale dataset Clothing1M. Our method achieves the best performance on
all of the datasets and surpasses existing state-of-the-art methods by a large
margin.
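The following PyTorch sketch illustrates the core idea described in the abstract: a momentum-updated copy of the hashing network produces a second relaxed code per input, the per-bit discrepancy between the two outputs serves as bit-level uncertainty, and its mean serves as image-level uncertainty. The encoder interface, decay rate, and the uncertainty-weighted quantization loss in the trailing comment are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class MomentumUncertaintyHasher(nn.Module):
    """Sketch of momentum-based uncertainty estimation for deep hashing.

    A momentum-updated copy of the hashing network produces a second relaxed
    code for each input; the per-bit discrepancy between the two outputs is
    treated as bit-level uncertainty, and its mean as image-level uncertainty.
    The decay rate and the loss weighting below are illustrative assumptions.
    """

    def __init__(self, encoder: nn.Module, momentum_encoder: nn.Module,
                 decay: float = 0.999):
        super().__init__()
        self.encoder = encoder                    # hashing network (outputs K logits)
        self.momentum_encoder = momentum_encoder  # same architecture, EMA weights
        self.decay = decay
        self.momentum_encoder.load_state_dict(self.encoder.state_dict())
        for p in self.momentum_encoder.parameters():
            p.requires_grad_(False)               # updated only by EMA, never by SGD

    @torch.no_grad()
    def _momentum_update(self):
        # theta_m <- decay * theta_m + (1 - decay) * theta
        for p_m, p in zip(self.momentum_encoder.parameters(),
                          self.encoder.parameters()):
            p_m.mul_(self.decay).add_(p, alpha=1.0 - self.decay)

    def forward(self, x):
        h = torch.tanh(self.encoder(x))            # relaxed codes in [-1, 1], shape (B, K)
        with torch.no_grad():
            self._momentum_update()
            h_m = torch.tanh(self.momentum_encoder(x))
        bit_uncertainty = (h - h_m).abs()          # per-bit discrepancy, shape (B, K)
        image_uncertainty = bit_uncertainty.mean(dim=1)  # per-image mean, shape (B,)
        return h, bit_uncertainty, image_uncertainty


# One plausible use of the signals: emphasize uncertain bits in a quantization loss.
#   h, u_bit, u_img = model(images)
#   quant_loss = ((1.0 + u_bit.detach()) * (h.abs() - 1.0).pow(2)).mean()
```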
Related papers
- Uncertainty Quantification via Hölder Divergence for Multi-View Representation Learning [18.419742575630217]
This paper introduces a novel algorithm based on Hölder Divergence (HD) to enhance the reliability of multi-view learning.
Through Dempster-Shafer theory, uncertainty from different modalities is integrated, thereby generating a comprehensive result.
Mathematically, HD proves to better measure the "distance" between the real data distribution and the predictive distribution of the model.
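For reference, one standard form of the Hölder divergence, built on Hölder's inequality for conjugate exponents; the cited paper may use a different variant:

```latex
% One standard Hölder (projective) divergence between densities p and q,
% with conjugate exponents 1/\alpha + 1/\beta = 1, \alpha > 1.
% Non-negative by Hölder's inequality; zero iff p^\alpha and q^\beta are proportional.
D^{\mathrm{H}}_{\alpha,\beta}(p : q)
  = -\log \frac{\int p(x)\, q(x)\, \mathrm{d}x}
               {\left( \int p(x)^{\alpha}\, \mathrm{d}x \right)^{1/\alpha}
                \left( \int q(x)^{\beta}\, \mathrm{d}x \right)^{1/\beta}}
```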
arXiv Detail & Related papers (2024-10-29T04:29:44Z)
- HHF: Hashing-guided Hinge Function for Deep Hashing Retrieval [14.35219963508551]
Latent codes extracted by a Deep Neural Network (DNN) will inevitably lose semantic information during the binarization process.
A Hashing-guided Hinge Function (HHF) is proposed to avoid the conflict between metric learning and quantization learning.
In detail, we carefully design a specific inflection point, which relies on the hash bit length and the number of categories to balance metric learning and quantization learning.
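A minimal sketch of what such a hinge might look like, assuming a cosine-distance metric and a placeholder inflection point derived from the bit length and class count; the paper's actual formula differs:

```python
import torch
import torch.nn.functional as F


def hinge_hash_loss(codes_a: torch.Tensor, codes_b: torch.Tensor,
                    similar: torch.Tensor, bit_len: int, num_classes: int):
    """Hypothetical hashing-guided hinge loss with an inflection point.

    The inflection-point formula below is a placeholder; HHF derives its own
    expression from the hash bit length and the number of categories.
    """
    # Cosine distance between relaxed codes, range [0, 2].
    dist = 1.0 - F.cosine_similarity(codes_a, codes_b, dim=1)
    # Placeholder inflection point (assumption): tightens as the number of
    # classes grows and loosens slightly for longer codes.
    inflection = 1.0 / max(num_classes - 1, 1) + 1.0 / bit_len
    # Similar pairs are only penalized beyond the inflection point, so metric
    # learning stops fighting quantization once pairs are "close enough".
    pos = similar * torch.clamp(dist - inflection, min=0.0)
    neg = (1.0 - similar) * torch.clamp(2.0 * inflection - dist, min=0.0)
    return (pos + neg).mean()
```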
arXiv Detail & Related papers (2021-12-04T03:16:42Z)
- Learning to Hash Robustly, with Guarantees [79.68057056103014]
In this paper, we design an NNS algorithm for the Hamming space that has worst-case guarantees essentially matching that of theoretical algorithms.
We evaluate the algorithm's ability to optimize for a given dataset both theoretically and practically.
Our algorithm achieves 1.8x and 2.1x better recall on the worst-performing queries for the MNIST and ImageNet datasets, respectively.
arXiv Detail & Related papers (2021-08-11T20:21:30Z)
- Shuffle and Learn: Minimizing Mutual Information for Unsupervised Hashing [4.518427368603235]
Unsupervised binary representation allows fast data retrieval without any annotations.
Conflicts in binary space are one of the major barriers to high-performance unsupervised hashing.
A new relaxation method called Shuffle and Learn is proposed to tackle code conflicts in unsupervised hashing.
arXiv Detail & Related papers (2020-11-20T07:14:55Z)
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
- Deep Reinforcement Learning with Label Embedding Reward for Supervised Image Hashing [85.84690941656528]
We introduce a novel decision-making approach for deep supervised hashing.
We learn a deep Q-network with a novel label embedding reward defined by Bose-Chaudhuri-Hocquenghem codes.
Our approach outperforms state-of-the-art supervised hashing methods under various code lengths.
arXiv Detail & Related papers (2020-08-10T09:17:20Z)
- Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art performance in terms of precision for document retrieval.
We propose a pairwise loss function with discrete latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to the state of the art.
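A minimal sketch of a pairwise objective of this flavor, assuming Bernoulli codes relaxed with a straight-through estimator as a stand-in for the paper's self-control gradient estimator:

```python
import torch


def sample_codes_straight_through(logits: torch.Tensor) -> torch.Tensor:
    """Sample binary codes from Bernoulli(sigmoid(logits)).

    Uses the straight-through estimator as a stand-in gradient trick; the
    paper proposes its own self-control gradient estimator instead.
    """
    probs = torch.sigmoid(logits)
    hard = torch.bernoulli(probs)
    # Forward pass returns hard {0, 1} codes; gradients flow through probs.
    return hard + probs - probs.detach()


def pairwise_hash_loss(codes_a: torch.Tensor, codes_b: torch.Tensor,
                       same_class: torch.Tensor, margin: float = 0.5):
    """Reward within-class similarity and between-class dissimilarity."""
    k = codes_a.shape[1]
    ham = (codes_a - codes_b).abs().sum(dim=1) / k  # normalized Hamming distance
    pos = same_class * ham                          # pull same-class pairs together
    neg = (1.0 - same_class) * torch.clamp(margin - ham, min=0.0)  # push apart
    return (pos + neg).mean()
```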
arXiv Detail & Related papers (2020-05-21T06:11:33Z)
- Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods perform poorly in retrieval with extremely short hash codes.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In the proposed RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z)
- Image Hashing by Minimizing Discrete Component-wise Wasserstein Distance [12.968141477410597]
Adversarial autoencoders are shown to be able to implicitly learn a robust, locality-preserving hash function that generates balanced and high-quality hash codes.
However, existing adversarial hashing methods are too inefficient to be employed for large-scale image retrieval applications.
We propose a new adversarial-autoencoder hashing approach that has a much lower sample requirement and computational cost.
arXiv Detail & Related papers (2020-02-29T00:22:53Z)