Improved Deep Classwise Hashing With Centers Similarity Learning for
Image Retrieval
- URL: http://arxiv.org/abs/2103.09442v1
- Date: Wed, 17 Mar 2021 05:01:13 GMT
- Title: Improved Deep Classwise Hashing With Centers Similarity Learning for
Image Retrieval
- Authors: Ming Zhang, Hong Yan
- Abstract summary: We propose an improved deep classwise hashing, which enables hashing learning and class centers learning simultaneously.
The proposed method effectively surpasses the original method and outperforms state-of-the-art baselines.
- Score: 19.052163348920512
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep supervised hashing for image retrieval has attracted researchers'
attention due to its high efficiency and superior retrieval performance. Most
existing deep supervised hashing works, which are based on pairwise/triplet
labels, suffer from the expensive computational cost and insufficient
utilization of the semantics information. Recently, deep classwise hashing
introduced a classwise loss supervised by class labels information
alternatively; however, we find it still has its drawback. In this paper, we
propose an improved deep classwise hashing, which enables hashing learning and
class centers learning simultaneously. Specifically, we design a two-step
strategy for centers similarity learning. It interacts with the classwise loss
to attract each class center toward its intra-class samples while pushing the
other class centers as far away as possible. The centers similarity learning
contributes to generating more compact and discriminative hashing codes. We
conduct experiments on three benchmark datasets. The results show that the
proposed method effectively surpasses the original method and outperforms
state-of-the-art baselines under various commonly-used evaluation metrics for
image retrieval.
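As a rough illustration of the two-step idea, below is a minimal PyTorch-style sketch of a centers similarity term: step one pulls each continuous code toward a learnable center for its class, step two pushes distinct centers apart by a hinge margin. The class name, the tanh relaxation, and the margin hyperparameter are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentersSimilarityLoss(nn.Module):
    """Sketch of centers similarity learning for classwise hashing.
    Step 1 attracts codes to their class center (intra-class compactness);
    step 2 repels distinct class centers (inter-class separation).
    Illustrative only; the paper's exact loss and schedule may differ."""

    def __init__(self, num_classes: int, num_bits: int, margin: float = 2.0):
        super().__init__()
        # Learnable class centers, relaxed from {-1, +1}^num_bits via tanh.
        self.centers = nn.Parameter(torch.randn(num_classes, num_bits))
        self.margin = margin  # assumed hinge margin between centers

    def forward(self, codes: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        centers = torch.tanh(self.centers)
        # Step 1: pull each code toward the center of its own class.
        attract = F.mse_loss(codes, centers[labels])
        # Step 2: hinge-penalize any pair of distinct centers closer than margin.
        dists = torch.cdist(centers, centers)
        off_diag = ~torch.eye(len(centers), dtype=torch.bool, device=dists.device)
        repel = F.relu(self.margin - dists[off_diag]).mean()
        return attract + repel
```

In training, such a term would be added to the classwise loss on the network's tanh-activated outputs, with binary codes obtained by taking the sign at retrieval time.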
Related papers
- CgAT: Center-Guided Adversarial Training for Deep Hashing-Based
Retrieval [12.421908811085627]
We present a min-max based Center-guided Adversarial Training, namely CgAT, to improve the robustness of deep hashing networks.
CgAT learns to mitigate the effects of adversarial samples by minimizing the Hamming distance to the center codes.
Compared with the current state-of-the-art defense method, we significantly improve the defense performance by an average of 18.61%.
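As a loose sketch of the inner maximization, the single-step attack below (FGSM-style, an assumption) perturbs an input to increase a differentiable surrogate of the Hamming distance to its class center code; the outer step would then train the network to shrink that distance on the perturbed inputs. `model` and `center_codes` are hypothetical names.

```python
import torch
import torch.nn as nn

def center_guided_attack(model: nn.Module, images: torch.Tensor,
                         center_codes: torch.Tensor, epsilon: float = 8 / 255):
    """Inner step of a min-max center-guided scheme (illustrative sketch):
    move images AWAY from their class center codes in Hamming distance.
    images are assumed to lie in [0, 1]; center_codes in {-1, +1}^K."""
    images = images.clone().detach().requires_grad_(True)
    codes = torch.tanh(model(images))  # continuous relaxation of hash codes
    # Differentiable surrogate: d_H(b, c) = (K - <b, c>) / 2 on {-1, +1}^K.
    k = codes.size(1)
    dist = (k - (codes * center_codes).sum(dim=1)) / 2
    dist.mean().backward()             # ascend on the distance
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0, 1).detach()
```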
arXiv Detail & Related papers (2022-04-18T04:51:08Z)
- Deep Asymmetric Hashing with Dual Semantic Regression and Class Structure Quantization [9.539842235137376]
We propose a dual semantic asymmetric hashing (DSAH) method, which generates discriminative hash codes under three-fold constraints.
With these three main components, high-quality hash codes can be generated through the network.
arXiv Detail & Related papers (2021-10-24T16:14:36Z)
- Instance-weighted Central Similarity for Multi-label Image Retrieval [66.23348499938278]
We propose Instance-weighted Central Similarity (ICS) to automatically learn the center weight corresponding to a hash code.
Our method achieves state-of-the-art performance on the image retrieval benchmarks, and in particular improves the mAP by 1.6%-6.4% on the MS COCO dataset.
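A minimal sketch of the instance-weighting idea for the multi-label case, with assumed shapes and a hypothetical `weights` tensor standing in for the learned center weights:

```python
import torch

def instance_weighted_central_loss(codes: torch.Tensor,
                                   labels: torch.Tensor,
                                   centers: torch.Tensor,
                                   weights: torch.Tensor) -> torch.Tensor:
    """Illustrative only, not the paper's exact formulation.
    codes:   (B, K) continuous codes in (-1, 1)
    labels:  (B, C) multi-hot label matrix
    centers: (C, K) hash centers in {-1, +1}^K
    weights: (B, C) per-instance center weights (assumed learned)"""
    dists = torch.cdist(codes, centers.float())   # (B, C) code-to-center distances
    mask = labels.float()                         # pull only toward relevant centers
    return (weights * mask * dists).sum() / mask.sum().clamp(min=1)
```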
arXiv Detail & Related papers (2021-08-11T15:18:18Z)
- Self-supervised asymmetric deep hashing with margin-scalable constraint for image retrieval [3.611160663701664]
We propose a novel self-supervised asymmetric deep hashing method with a margin-scalable constraint (SADH) for image retrieval.
SADH implements a self-supervised network that preserves the semantics of the given dataset in a semantic feature map and a semantic code map.
For the feature learning part, a new margin-scalable constraint is employed both for highly accurate construction of pairwise correlations in the Hamming space and for a more discriminative hash code representation.
arXiv Detail & Related papers (2020-12-07T16:09:37Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
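The consistency-learning part might be sketched generically as a two-view agreement term (not CIMON's exact losses; `model` is a hypothetical encoder producing K-dimensional outputs):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, view1: torch.Tensor, view2: torch.Tensor) -> torch.Tensor:
    """Two augmented views of the same image should yield near-identical
    codes (disturb-invariance). Generic sketch, illustrative only."""
    z1 = torch.tanh(model(view1))  # continuous codes for view 1
    z2 = torch.tanh(model(view2))  # continuous codes for view 2
    return F.mse_loss(z1, z2)      # penalize disagreement between views
```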
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
- Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods have poor performance in retrieval using an extremely short-length hash code.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In the proposed RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
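A minimal sketch of the mutual-reconstruction idea, with hypothetical linear maps between codes and labels (the paper's actual formulation may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MutualReconstruction(nn.Module):
    """Labels should be recoverable from hash codes and codes from labels,
    so that even short codes retain semantic information. Illustrative only."""

    def __init__(self, num_bits: int, num_classes: int):
        super().__init__()
        self.code_to_label = nn.Linear(num_bits, num_classes)
        self.label_to_code = nn.Linear(num_classes, num_bits)

    def forward(self, codes: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # codes: (B, num_bits) in (-1, 1); labels: (B, num_classes) one-hot.
        label_rec = F.cross_entropy(self.code_to_label(codes), labels.argmax(dim=1))
        code_rec = F.mse_loss(torch.tanh(self.label_to_code(labels.float())), codes)
        return label_rec + code_rec
```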
arXiv Detail & Related papers (2020-04-24T02:23:52Z)
- A Survey on Deep Hashing Methods [52.326472103233854]
Nearest neighbor search aims to retrieve the samples in the database with the smallest distances to the queries.
With the development of deep learning, deep hashing methods show more advantages than traditional methods.
Deep supervised hashing is categorized into pairwise methods, ranking-based methods, pointwise methods, and quantization methods.
Deep unsupervised hashing is categorized into similarity reconstruction-based methods, pseudo-label-based methods and prediction-free self-supervised learning-based methods.
arXiv Detail & Related papers (2020-03-04T08:25:15Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
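One common way to learn continuous and discrete codes jointly is a sign function with a straight-through gradient; below is a generic sketch of that trick (the paper's discretization may differ):

```python
import torch

class SignSTE(torch.autograd.Function):
    """sign() in the forward pass, identity gradient in the backward pass,
    so the network trains on continuous embeddings while emitting discrete
    codes. Generic sketch, not the paper's exact scheme."""

    @staticmethod
    def forward(ctx, x: torch.Tensor) -> torch.Tensor:
        return torch.sign(x)   # discrete code (0 possible only at exactly 0)

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor) -> torch.Tensor:
        return grad_output     # straight-through: pass gradients unchanged

def to_binary(x: torch.Tensor) -> torch.Tensor:
    # Use during training so both code spaces are learned jointly.
    return SignSTE.apply(x)
```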
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.