A Novel Incremental Cross-Modal Hashing Approach
- URL: http://arxiv.org/abs/2002.00677v1
- Date: Mon, 3 Feb 2020 12:34:56 GMT
- Title: A Novel Incremental Cross-Modal Hashing Approach
- Authors: Devraj Mandal, Soma Biswas
- Abstract summary: We propose a novel incremental cross-modal hashing algorithm termed "iCMH".
The proposed approach consists of two sequential stages, namely, learning the hash codes and training the hash functions.
Experiments across a variety of cross-modal datasets and comparisons with state-of-the-art cross-modal algorithms show the usefulness of our approach.
- Score: 21.99741793652628
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Cross-modal retrieval deals with retrieving relevant items from one modality,
when provided with a search query from another modality. Hashing techniques,
where the data is represented as binary bits have specifically gained
importance due to the ease of storage, fast computations and high accuracy. In
real world, the number of data categories is continuously increasing, which
requires algorithms capable of handling this dynamic scenario. In this work, we
propose a novel incremental cross-modal hashing algorithm termed "iCMH", which
can adapt itself to handle incoming data of new categories. The proposed
approach consists of two sequential stages, namely, learning the hash codes and
training the hash functions. At every stage, a small amount of old category
data, termed "exemplars", is used so that the old data is not forgotten while
learning from the new incoming data, i.e. to avoid catastrophic forgetting. In
the first stage, the hash codes for the exemplars are reused, and
simultaneously, hash codes for the new data are computed such that they
maintain the semantic relations with the existing data. For the second stage,
we propose both non-deep and deep architectures to learn the hash functions
effectively. Extensive experiments across a variety of cross-modal datasets
and comparisons with state-of-the-art cross-modal algorithms show the
usefulness of our approach.
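As an illustrative sketch only, not the authors' actual formulation: the exemplar idea in the first stage can be mimicked by fitting a single projection jointly on the fixed codes of the old-category exemplars and on provisional codes for the new-category data, so that new codes are learned without overwriting the old ones. All names (`sign_hash`, `incremental_update`) and the closed-form ridge-regression solver are hypothetical choices made here for brevity.

```python
import numpy as np

def sign_hash(x, W):
    """Map features to {-1, +1} codes via a linear projection (illustrative only)."""
    return np.where(x @ W >= 0, 1, -1)

def incremental_update(old_exemplars, old_codes, new_data, n_bits, lam=1.0, seed=0):
    """Hypothetical sketch of exemplar-based incremental hash learning.

    The projection W is fit to (a) reproduce the fixed codes of the
    old-category exemplars and (b) assign codes to the new-category data,
    which is one way to mitigate catastrophic forgetting. Here this is a
    simple ridge regression followed by sign thresholding.
    """
    rng = np.random.default_rng(seed)
    # Provisional target codes for the new data (a placeholder for the
    # semantics-preserving code assignment described in the abstract).
    new_targets = rng.choice([-1, 1], size=(new_data.shape[0], n_bits))
    X = np.vstack([old_exemplars, new_data])   # exemplars + new data jointly
    B = np.vstack([old_codes, new_targets])    # fixed old codes + new targets
    d = X.shape[1]
    # Closed-form ridge solution: W = (X^T X + lam * I)^{-1} X^T B
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ B)
    return W
```

In a real system, a learned hash function of this shape would then be applied per modality; the abstract's second stage covers both non-deep and deep choices for that function.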
Related papers
- Deep Lifelong Cross-modal Hashing [17.278818467305683]
We propose a novel deep lifelong cross-modal hashing method to achieve lifelong hashing retrieval instead of re-training the hash function repeatedly.
Specifically, we design a lifelong learning strategy that updates the hash functions by directly training on the incremental data, instead of retraining new hash functions on all the accumulated data.
It yields a substantial average improvement of over 20% in retrieval accuracy and reduces training time by over 80% when new data arrives continuously.
arXiv Detail & Related papers (2023-04-26T07:56:22Z) - Asymmetric Scalable Cross-modal Hashing [51.309905690367835]
Cross-modal hashing is a successful method for solving the large-scale multimedia retrieval problem.
We propose a novel Asymmetric Scalable Cross-Modal Hashing (ASCMH) to address these issues.
Our ASCMH outperforms the state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
arXiv Detail & Related papers (2022-07-26T04:38:47Z) - Efficient Cross-Modal Retrieval via Deep Binary Hashing and Quantization [5.799838997511804]
Cross-modal retrieval aims to search for data with similar semantic meanings across different content modalities.
We propose a jointly learned deep hashing and quantization network (HQ) for cross-modal retrieval.
Experimental results on the NUS-WIDE, MIR-Flickr, and Amazon datasets demonstrate that HQ achieves boosts of more than 7% in precision.
arXiv Detail & Related papers (2022-02-15T22:00:04Z) - MOON: Multi-Hash Codes Joint Learning for Cross-Media Retrieval [30.77157852327981]
Cross-media hashing technique has attracted increasing attention for its high computation efficiency and low storage cost.
We develop a novel Multiple hash cOdes jOint learNing method (MOON) for cross-media retrieval.
arXiv Detail & Related papers (2021-08-17T14:47:47Z) - Deep Self-Adaptive Hashing for Image Retrieval [16.768754022585057]
We propose a Deep Self-Adaptive Hashing (DSAH) model to adaptively capture the semantic information with two special designs.
First, we construct a neighborhood-based similarity matrix, and then refine this initial similarity matrix with a novel update strategy.
Second, we measure the priorities of data pairs with PIC and assign adaptive weights to them, relying on the assumption that more dissimilar data pairs contain more discriminative information for hash learning.
arXiv Detail & Related papers (2021-08-16T13:53:20Z) - CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z) - Unsupervised Deep Cross-modality Spectral Hashing [65.3842441716661]
The framework is a two-step hashing approach which decouples the optimization into binary optimization and hashing function learning.
We propose a novel spectral embedding-based algorithm to simultaneously learn single-modality and binary cross-modality representations.
We leverage a powerful CNN for images and propose a CNN-based deep architecture to learn the text modality.
arXiv Detail & Related papers (2020-08-01T09:20:11Z) - Unsupervised Semantic Hashing with Pairwise Reconstruction [22.641786533525245]
We present Pairwise Reconstruction (PairRec), which is a discrete variational autoencoder based hashing model.
We experimentally compare PairRec to traditional and state-of-the-art approaches, and obtain significant performance improvements in the task of document similarity search.
arXiv Detail & Related papers (2020-07-01T10:54:27Z) - Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods have poor performance in retrieval using an extremely short-length hash code.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In this proposed RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z) - A Survey on Deep Hashing Methods [52.326472103233854]
Nearest neighbor search aims to retrieve the samples in the database with the smallest distances to the queries.
With the development of deep learning, deep hashing methods show more advantages than traditional methods.
Deep supervised hashing is categorized into pairwise methods, ranking-based methods, pointwise methods and quantization.
Deep unsupervised hashing is categorized into similarity reconstruction-based methods, pseudo-label-based methods and prediction-free self-supervised learning-based methods.
arXiv Detail & Related papers (2020-03-04T08:25:15Z) - Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph.
It is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)