CoopHash: Cooperative Learning of Multipurpose Descriptor and Contrastive Pair Generator via Variational MCMC Teaching for Supervised Image Hashing
- URL: http://arxiv.org/abs/2210.04288v4
- Date: Wed, 12 Jun 2024 12:37:43 GMT
- Title: CoopHash: Cooperative Learning of Multipurpose Descriptor and Contrastive Pair Generator via Variational MCMC Teaching for Supervised Image Hashing
- Authors: Khoa D. Doan, Jianwen Xie, Yaxuan Zhu, Yang Zhao, Ping Li
- Abstract summary: Generative models, such as Generative Adversarial Networks (GANs), can generate synthetic data for an image hashing model.
GANs are difficult to train, which prevents hashing approaches from jointly training the generative models and the hash functions.
We propose a novel framework, the generative cooperative hashing network, which is based on energy-based cooperative learning.
- Score: 42.67510119856105
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Leveraging supervised information can lead to superior retrieval performance in the image hashing domain, but the performance degrades significantly without enough labeled data. One effective solution to boost performance is to employ generative models, such as Generative Adversarial Networks (GANs), to generate synthetic data in an image hashing model. However, GAN-based methods are difficult to train, which prevents the hashing approaches from jointly training the generative models and the hash functions. This limitation results in sub-optimal retrieval performance. To overcome this limitation, we propose a novel framework, the generative cooperative hashing network, which is based on energy-based cooperative learning. This framework jointly learns a powerful generative representation of the data and a robust hash function via two components: a top-down contrastive pair generator that synthesizes contrastive images and a bottom-up multipurpose descriptor that simultaneously represents the images from multiple perspectives, including probability density, hash code, latent code, and category. The two components are jointly learned via a novel likelihood-based cooperative learning scheme. We conduct experiments on several real-world datasets and show that the proposed method outperforms the competing supervised hashing methods, achieving up to 10% relative improvement over the current state-of-the-art, and exhibits significantly better performance in out-of-distribution retrieval.
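The abstract describes an energy-based cooperative learning loop between the top-down generator and the bottom-up descriptor. As a rough illustration of how such a loop is typically organized, here is a minimal PyTorch sketch under assumed architectures and hyperparameters; it is not the paper's implementation, and it omits the descriptor's hash, latent, and category heads, the contrastive pairing of synthesized images, and the variational inference component suggested by "variational MCMC teaching".

```python
# Minimal sketch of energy-based cooperative learning (assumed shapes: 3x32x32 images).
import torch
import torch.nn as nn

class Descriptor(nn.Module):
    """Bottom-up network; only the energy head is sketched here."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(128 * 8 * 8, 1))

    def energy(self, x):
        return self.net(x).squeeze(1)

class Generator(nn.Module):
    """Top-down network mapping a latent code to an image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())

    def forward(self, z):
        return self.net(z)

def langevin_revise(x, descriptor, steps=20, step_size=0.05):
    """Short-run Langevin dynamics: the descriptor revises the generator's samples."""
    x = x.detach().clone().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(descriptor.energy(x).sum(), x)[0]
        x = x - 0.5 * step_size ** 2 * grad + step_size * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()

def cooperative_step(real, descriptor, generator, opt_d, opt_g, z_dim=100):
    z = torch.randn(real.size(0), z_dim)
    x_init = generator(z)                        # 1) generator proposes samples
    x_rev = langevin_revise(x_init, descriptor)  # 2) descriptor revises them by MCMC

    # 3) descriptor update: lower energy on real data, raise it on revised samples
    loss_d = descriptor.energy(real).mean() - descriptor.energy(x_rev).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 4) MCMC teaching: generator learns to reproduce the revised samples from z
    loss_g = ((generator(z) - x_rev) ** 2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# usage sketch:
# descriptor, generator = Descriptor(), Generator()
# opt_d = torch.optim.Adam(descriptor.parameters(), lr=1e-4)
# opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
# for real in dataloader: cooperative_step(real, descriptor, generator, opt_d, opt_g)
```

In the full method, the generator presumably also uses class information to synthesize contrastive pairs, and the descriptor's additional heads provide the hash codes and category predictions used for retrieval.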
Related papers
- HybridHash: Hybrid Convolutional and Self-Attention Deep Hashing for Image Retrieval [0.3880517371454968]
We propose a hybrid convolutional and self-attention deep hashing method known as HybridHash.
We have conducted comprehensive experiments on three widely used datasets: CIFAR-10, NUS-WIDE and IMAGENET.
The experimental results demonstrate that the method proposed in this paper has superior performance with respect to state-of-the-art deep hashing methods.
arXiv Detail & Related papers (2024-05-13T07:45:20Z) - Cascading Hierarchical Networks with Multi-task Balanced Loss for Fine-grained Hashing [1.6244541005112747]
Fine-grained hashing is more challenging than traditional hashing problems.
We propose a cascaded network to learn compact and highly semantic hash codes.
We also propose a novel approach to balance the losses of multi-task learning in a coordinated manner.
arXiv Detail & Related papers (2023-03-20T17:08:48Z) - Unsupervised Hashing with Contrastive Information Bottleneck [39.607741586731336]
We propose to adapt the contrastive learning framework to learn binary hashing codes.
Specifically, we first propose to modify the objective function to meet the specific requirement of hashing.
We then introduce a probabilistic binary representation layer into the model to facilitate end-to-end training; a sketch of such a layer appears after this list.
arXiv Detail & Related papers (2021-05-13T08:30:16Z) - CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive Similarity Mining and Consistency Learning (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z) - ExchNet: A Unified Hashing Network for Large-Scale Fine-Grained Image Retrieval [43.41089241581596]
We study the novel fine-grained hashing topic to generate compact binary codes for fine-grained images.
We propose a unified end-to-end trainable network, termed as ExchNet.
Our proposal consistently outperforms state-of-the-art generic hashing methods on five fine-grained datasets.
arXiv Detail & Related papers (2020-08-04T07:01:32Z) - Unsupervised Deep Cross-modality Spectral Hashing [65.3842441716661]
The framework is a two-step hashing approach which decouples the optimization into binary optimization and hashing function learning.
We propose a novel spectral embedding-based algorithm to simultaneously learn single-modality and binary cross-modality representations.
We leverage the powerful CNN for images and propose a CNN-based deep architecture to learn text modality.
arXiv Detail & Related papers (2020-08-01T09:20:11Z) - Generative Semantic Hashing Enhanced via Boltzmann Machines [61.688380278649056]
Existing generative-hashing methods mostly assume a factorized form for the posterior distribution.
We propose to employ the distribution of a Boltzmann machine as the variational posterior.
We show that by effectively modeling correlations among different bits within a hash code, our model can achieve significant performance gains.
arXiv Detail & Related papers (2020-06-16T01:23:39Z) - Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods have poor performance in retrieval using an extremely short-length hash code.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In this proposed RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z) - Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z) - Deep Multi-View Enhancement Hashing for Image Retrieval [40.974719473643724]
This paper proposes a supervised multi-view hash model which can enhance the multi-view information through neural networks.
The proposed method is systematically evaluated on the CIFAR-10, NUS-WIDE and MS-COCO datasets.
arXiv Detail & Related papers (2020-02-01T08:32:27Z)
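The Contrastive Information Bottleneck entry above mentions a probabilistic binary representation layer that enables end-to-end training. A common way to realize such a layer is Bernoulli sampling with a straight-through gradient estimator; the PyTorch sketch below shows that general idea only, with an illustrative encoder and code length that are assumptions rather than that paper's architecture.

```python
import torch
import torch.nn as nn

class StochasticBinaryLayer(nn.Module):
    """One common realisation of a probabilistic binary representation layer:
    each hash bit is sampled from a Bernoulli whose probability comes from the
    encoder, and a straight-through estimator passes gradients through the
    non-differentiable sampling step so the whole model trains end to end."""
    def forward(self, logits):
        probs = torch.sigmoid(logits)    # per-bit activation probability
        sample = torch.bernoulli(probs)  # stochastic {0, 1} codes
        # straight-through: forward pass returns the sample, backward pass
        # uses the gradient of the probabilities
        return sample + probs - probs.detach()

# usage: a hypothetical encoder producing 32-bit codes from 512-d features
encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 32))
binary_layer = StochasticBinaryLayer()
codes = binary_layer(encoder(torch.randn(8, 512)))  # shape (8, 32), values in {0, 1}
```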