DVHN: A Deep Hashing Framework for Large-scale Vehicle Re-identification
- URL: http://arxiv.org/abs/2112.04937v1
- Date: Thu, 9 Dec 2021 14:11:27 GMT
- Title: DVHN: A Deep Hashing Framework for Large-scale Vehicle Re-identification
- Authors: Yongbiao Chen, Sheng Zhang, Fangxin Liu, Chenggang Wu, Kaicheng Guo,
Zhengwei Qi
- Abstract summary: We propose a deep hash-based vehicle re-identification framework, dubbed DVHN, which substantially reduces memory usage and promotes retrieval efficiency.
DVHN directly learns discrete compact binary hash codes for each image by jointly optimizing the feature learning network and the hash code generating module.
\textbf{DVHN} of $2048$ bits can achieve 13.94% and 10.21% accuracy improvements in terms of \textbf{mAP} and \textbf{Rank@1} on the \textbf{VehicleID (800)} dataset.
- Score: 5.407157027628579
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we make the very first attempt to investigate the integration
of deep hash learning with vehicle re-identification. We propose a deep
hash-based vehicle re-identification framework, dubbed DVHN, which
substantially reduces memory usage and promotes retrieval efficiency while
preserving nearest-neighbor search accuracy. Concretely, DVHN directly learns
discrete compact binary hash codes for each image by jointly optimizing the
feature learning network and the hash code generating module. Specifically, we
directly constrain the output from the convolutional neural network to be
discrete binary codes and ensure the learned binary codes are optimal for
classification. To optimize the deep discrete hashing framework, we further
propose an alternating minimization method for learning binary
similarity-preserved hashing codes. Extensive experiments on two widely-studied
vehicle re-identification datasets, \textbf{VehicleID} and \textbf{VeRi}, have
demonstrated the superiority of our method against the state-of-the-art deep
hash methods. \textbf{DVHN} of $2048$ bits can achieve 13.94\% and 10.21\%
accuracy improvement in terms of \textbf{mAP} and \textbf{Rank@1} for
\textbf{VehicleID (800)} dataset. For \textbf{VeRi}, we achieve 35.45\% and
32.72\% performance gains for \textbf{Rank@1} and \textbf{mAP}, respectively.
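The efficiency argument above can be sketched in a few lines (an illustrative toy, not the authors' implementation; the codes and image ids are hypothetical). A 2048-bit code occupies 256 bytes versus 8 KB for a 2048-dim float32 embedding, and ranking a gallery reduces to Hamming distance, i.e. an XOR followed by a popcount:

```python
# Rank a gallery of vehicle images by Hamming distance to a query hash code.
# Codes are stored as Python ints; real systems pack them into byte arrays.

def hamming_distance(a: int, b: int) -> int:
    """Hamming distance between two hash codes stored as ints."""
    return bin(a ^ b).count("1")

def rank_gallery(query: int, gallery: dict) -> list:
    """Return gallery image ids sorted by ascending Hamming distance."""
    return sorted(gallery, key=lambda img_id: hamming_distance(query, gallery[img_id]))

# Toy 8-bit codes for three gallery vehicles (hypothetical values).
gallery = {"car_a": 0b10110100, "car_b": 0b10110101, "car_c": 0b01001011}
print(rank_gallery(0b10110100, gallery))  # ['car_a', 'car_b', 'car_c']
```

This is why short binary codes trade a small accuracy loss for large memory and speed gains: the distance computation needs no floating-point arithmetic at all.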
Related papers
- Deep Self-Adaptive Hashing for Image Retrieval [16.768754022585057]
We propose a \textbf{Deep Self-Adaptive Hashing} (DSAH) model to adaptively capture the semantic information with two special designs.
First, we construct a neighborhood-based similarity matrix, and then refine this initial similarity matrix with a novel update strategy.
Second, we measure the priorities of data pairs with PIC and assign adaptive weights to them, relying on the assumption that more dissimilar data pairs contain more discriminative information for hash learning.
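A neighborhood-based initial similarity matrix of the kind described above can be sketched as follows (a minimal toy under stated assumptions, not DSAH's actual construction; the cosine threshold of `0.8` and the toy features are hypothetical):

```python
# Build an initial pairwise similarity matrix from deep features by
# thresholding cosine similarity: pairs above the threshold are treated
# as similar (+1), the rest as dissimilar (-1).
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_matrix(features, threshold=0.8):
    """Threshold pairwise cosine similarity into a {+1, -1} matrix."""
    n = len(features)
    return [[1 if cosine(features[i], features[j]) >= threshold else -1
             for j in range(n)] for i in range(n)]

feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
S = similarity_matrix(feats)
# S[0][1] == 1 (near neighbors); S[0][2] == -1 (dissimilar)
```

In DSAH this initial matrix is only a starting point; the paper's contribution is the update strategy that refines it during training.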
arXiv Detail & Related papers (2021-08-16T13:53:20Z) - TransHash: Transformer-based Hamming Hashing for Efficient Image
Retrieval [0.0]
We present \textbf{TransHash}, a pure transformer-based framework for deep hashing learning.
We achieve 8.2%, 2.6%, 12.7% performance gains in terms of average \textit{mAP} for different hash bit lengths on three public datasets.
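For reference, the mAP metric quoted throughout these results can be sketched as follows (this is the standard retrieval definition, not code from any of the listed papers): per query, precision is averaged at the rank of each relevant hit, then averaged over all queries.

```python
# Mean Average Precision (mAP) over ranked retrieval results.

def average_precision(ranked_relevance):
    """ranked_relevance: list of 0/1 flags in retrieval order."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(queries):
    """Average of per-query average precision."""
    return sum(average_precision(q) for q in queries) / len(queries)

# Two toy queries: AP([1,0,1]) = (1/1 + 2/3)/2 = 5/6; AP([0,1]) = 1/2.
print(mean_average_precision([[1, 0, 1], [0, 1]]))  # (5/6 + 1/2)/2 = 2/3
```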
arXiv Detail & Related papers (2021-05-05T01:35:53Z) - CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named \textbf{C}omprehensive s\textbf{I}milarity \textbf{M}ining and c\textbf{O}nsistency lear\textbf{N}ing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z) - Deep Reinforcement Learning with Label Embedding Reward for Supervised
Image Hashing [85.84690941656528]
We introduce a novel decision-making approach for deep supervised hashing.
We learn a deep Q-network with a novel label embedding reward defined by Bose-Chaudhuri-Hocquenghem codes.
Our approach outperforms state-of-the-art supervised hashing methods under various code lengths.
arXiv Detail & Related papers (2020-08-10T09:17:20Z) - Unsupervised Deep Cross-modality Spectral Hashing [65.3842441716661]
The framework is a two-step hashing approach which decouples the optimization into binary optimization and hashing function learning.
We propose a novel spectral embedding-based algorithm to simultaneously learn single-modality and binary cross-modality representations.
We leverage the powerful CNN for images and propose a CNN-based deep architecture to learn text modality.
arXiv Detail & Related papers (2020-08-01T09:20:11Z) - Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods perform poorly when retrieval uses extremely short hash codes.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z) - Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z) - Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph.
It is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.