Cascading Hierarchical Networks with Multi-task Balanced Loss for
Fine-grained hashing
- URL: http://arxiv.org/abs/2303.11274v1
- Date: Mon, 20 Mar 2023 17:08:48 GMT
- Title: Cascading Hierarchical Networks with Multi-task Balanced Loss for
Fine-grained hashing
- Authors: Xianxian Zeng, Yanjun Zheng
- Abstract summary: Fine-grained hashing is more challenging than traditional hashing problems.
We propose a cascaded network to learn compact and highly semantic hash codes.
We also propose a novel approach to coordinately balance the loss of multi-task learning.
- Score: 1.6244541005112747
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the explosive growth in the number of fine-grained images in the
Internet era, it has become a challenging problem to perform fast and efficient
retrieval from large-scale fine-grained images. Among the many retrieval
methods, hashing methods are widely used due to their high efficiency and small
storage space occupation. Fine-grained hashing is more challenging than
traditional hashing problems due to the difficulties such as low inter-class
variances and high intra-class variances caused by the characteristics of
fine-grained images. To improve the retrieval accuracy of fine-grained hashing,
we propose a cascaded network to learn compact and highly semantic hash codes,
and introduce an attention-guided data augmentation method. We refer to this
network as a cascaded hierarchical data augmentation network. We also propose a
novel approach to coordinately balance the losses of multi-task learning. We
conduct extensive experiments on several common fine-grained visual
classification datasets. The experimental results demonstrate that our proposed
method outperforms several state-of-the-art hashing methods and can effectively
improve the accuracy of fine-grained retrieval. The source code is publicly available:
https://github.com/kaiba007/FG-CNET.
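The balancing idea can be made concrete with a small sketch. The following PyTorch fragment is not the paper's implementation (see the linked repository for that); it pairs a hash head with an uncertainty-style weighting of a classification loss and a quantization loss, one common way to balance multi-task objectives. All names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class HashHead(nn.Module):
    """Maps backbone features to a k-bit hash-like embedding in (-1, 1)."""
    def __init__(self, feat_dim: int, num_bits: int, num_classes: int):
        super().__init__()
        self.hash_layer = nn.Linear(feat_dim, num_bits)
        self.classifier = nn.Linear(num_bits, num_classes)

    def forward(self, feats):
        h = torch.tanh(self.hash_layer(feats))  # relaxed binary codes
        return h, self.classifier(h)

class BalancedMultiTaskLoss(nn.Module):
    """Learnable log-variances weight each task loss (illustrative
    balancing scheme, not necessarily the paper's)."""
    def __init__(self, num_tasks: int = 2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total

# usage with pretend backbone features
feats = torch.randn(8, 512)
labels = torch.randint(0, 200, (8,))          # e.g. CUB-200-style labels
head = HashHead(512, 48, 200)
balancer = BalancedMultiTaskLoss()
h, logits = head(feats)
cls_loss = nn.functional.cross_entropy(logits, labels)
quant_loss = (h.abs() - 1.0).pow(2).mean()    # push codes toward +/-1
loss = balancer([cls_loss, quant_loss])
loss.backward()
```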
Related papers
- CoopHash: Cooperative Learning of Multipurpose Descriptor and Contrastive Pair Generator via Variational MCMC Teaching for Supervised Image Hashing [42.67510119856105]
Generative models, such as Generative Adversarial Networks (GANs), can generate synthetic data for training an image hashing model.
However, GANs are difficult to train, which prevents hashing approaches from jointly training the generative models and the hash functions.
We propose a novel framework, the generative cooperative hashing network, which is based on energy-based cooperative learning.
arXiv Detail & Related papers (2022-10-09T15:42:36Z)
- PHPQ: Pyramid Hybrid Pooling Quantization for Efficient Fine-Grained Image Retrieval [68.05570413133462]
We propose a Pyramid Hybrid Pooling Quantization (PHPQ) module to capture and preserve fine-grained semantic information from multi-level features.
Experiments on two widely-used public benchmarks, CUB-200-2011 and Stanford Dogs, demonstrate that PHPQ outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-09-11T07:21:02Z)
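A minimal sketch of the hybrid pooling idea summarized for PHPQ above, assuming a learnable mix of global max and average pooling per feature level; the actual PHPQ module and its quantization step differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridPool(nn.Module):
    """Learnable mix of global max and average pooling for one level."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):                      # x: (B, C, H, W)
        a = torch.sigmoid(self.alpha)          # keep mixing weight in (0, 1)
        max_p = F.adaptive_max_pool2d(x, 1).flatten(1)
        avg_p = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return a * max_p + (1 - a) * avg_p

class PyramidHybridPooling(nn.Module):
    """Pools several backbone levels and concatenates the descriptors."""
    def __init__(self, num_levels: int):
        super().__init__()
        self.pools = nn.ModuleList(HybridPool() for _ in range(num_levels))

    def forward(self, feature_maps):           # list of (B, C_i, H_i, W_i)
        return torch.cat([p(f) for p, f in zip(self.pools, feature_maps)], dim=1)

# usage: two pretend levels from a CNN backbone
f1, f2 = torch.randn(4, 256, 28, 28), torch.randn(4, 512, 14, 14)
desc = PyramidHybridPooling(2)([f1, f2])       # (4, 768)
```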
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
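The consistency-learning idea in the CIMON summary above can be sketched as two toy losses: one enforcing disturb-invariant codes across augmented views, one matching pairwise code similarities to precomputed guidance. This is an illustrative reading of the summary, not CIMON's exact formulation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(h1, h2):
    """Disturb-invariance: codes of two augmented views of the same
    image should agree (cosine similarity on relaxed codes)."""
    return (1 - F.cosine_similarity(h1, h2, dim=1)).mean()

def semantic_consistency_loss(h1, h2, sim):
    """Pairwise guidance: code inner products across the two views
    should match a precomputed similarity matrix sim in [-1, 1]."""
    k = h1.size(1)
    pred = h1 @ h2.t() / k          # (B, B), roughly in [-1, 1]
    return F.mse_loss(pred, sim)

# usage with pretend relaxed codes for 4 images, 32 bits
h1, h2 = torch.tanh(torch.randn(4, 32)), torch.tanh(torch.randn(4, 32))
sim = torch.eye(4) * 2 - 1          # toy guidance: same image -> +1, else -1
loss = consistency_loss(h1, h2) + semantic_consistency_loss(h1, h2, sim)
```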
- Deep Reinforcement Learning with Label Embedding Reward for Supervised Image Hashing [85.84690941656528]
We introduce a novel decision-making approach for deep supervised hashing.
We learn a deep Q-network with a novel label embedding reward defined by Bose-Chaudhuri-Hocquenghem codes.
Our approach outperforms state-of-the-art supervised hashing methods under various code lengths.
arXiv Detail & Related papers (2020-08-10T09:17:20Z)
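A hedged sketch of the label-embedding reward idea from the entry above: the agent's binary code is scored by its bitwise agreement with the codeword assigned to its class. The paper derives codewords from Bose-Chaudhuri-Hocquenghem (BCH) codes; a random codebook stands in for them here.

```python
import torch

def hamming_reward(pred_bits, label_codes, labels):
    """Reward = fraction of bits matching the label's target codeword.
    label_codes is a stand-in codebook; the original work uses BCH codes."""
    targets = label_codes[labels]                      # (B, k) in {0, 1}
    return (pred_bits == targets).float().mean(dim=1)  # (B,) in [0, 1]

num_classes, num_bits = 10, 32
label_codes = torch.randint(0, 2, (num_classes, num_bits))  # hypothetical codebook
pred_bits = torch.randint(0, 2, (4, num_bits))              # agent's current codes
labels = torch.tensor([0, 3, 3, 7])
reward = hamming_reward(pred_bits, label_codes, labels)
```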
- ExchNet: A Unified Hashing Network for Large-Scale Fine-Grained Image Retrieval [43.41089241581596]
We study the novel fine-grained hashing topic to generate compact binary codes for fine-grained images.
We propose a unified end-to-end trainable network, termed as ExchNet.
Our proposal consistently outperforms state-of-the-art generic hashing methods on five fine-grained datasets.
arXiv Detail & Related papers (2020-08-04T07:01:32Z)
- Deep Hashing with Hash-Consistent Large Margin Proxy Embeddings [65.36757931982469]
Image hash codes are produced by binarizing embeddings of convolutional neural networks (CNN) trained for either classification or retrieval.
The use of a fixed set of proxies (weights of the CNN classification layer) is proposed to eliminate this ambiguity.
The resulting hash-consistent large margin (HCLM) proxies are shown to encourage saturation of hashing units, thus guaranteeing a small binarization error.
arXiv Detail & Related papers (2020-07-27T23:47:43Z)
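The fixed-proxy idea summarized above can be sketched as a classifier whose weights are frozen +/-1 vectors, so cross-entropy training pulls embeddings toward binary-looking directions. Proxy construction here is random for illustration; HCLM constructs its proxies differently.

```python
import torch
import torch.nn as nn

class FixedProxyClassifier(nn.Module):
    """Classification layer with frozen +/-1 proxy weights (HCLM-style
    idea; random proxies here, not the paper's construction)."""
    def __init__(self, num_classes: int, num_bits: int):
        super().__init__()
        proxies = torch.randint(0, 2, (num_classes, num_bits)).float() * 2 - 1
        self.register_buffer("proxies", proxies)   # fixed, never trained

    def forward(self, h):                           # h: (B, num_bits)
        return h @ self.proxies.t() / h.size(1)     # scaled logits

# usage: logits feed a standard cross-entropy loss
clf = FixedProxyClassifier(num_classes=200, num_bits=48)
h = torch.tanh(torch.randn(8, 48, requires_grad=True))
logits = clf(h)
```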
- Dual-level Semantic Transfer Deep Hashing for Efficient Social Image Retrieval [35.78137004253608]
Social networks store and disseminate a tremendous number of user-shared images.
Deep hashing is an efficient indexing technique to support large-scale social image retrieval.
Existing methods suffer from severe semantic shortage when optimizing a large number of deep neural network parameters.
We propose a Dual-level Semantic Transfer Deep Hashing (DSTDH) method to alleviate this problem.
arXiv Detail & Related papers (2020-06-10T01:03:09Z)
- Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods perform poorly when retrieval relies on an extremely short hash code.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In this proposed RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z)
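A rough sketch of the mutual reconstruction idea from the RSLH entry above, assuming two small linear maps between codes and labels; the real RSLH objective and optimization differ.

```python
import torch
import torch.nn as nn

class MutualReconstruction(nn.Module):
    """Hash -> label and label -> hash reconstruction, so short codes
    keep semantic information flowing both ways (illustrative sketch)."""
    def __init__(self, num_bits: int, num_labels: int):
        super().__init__()
        self.h2y = nn.Linear(num_bits, num_labels)
        self.y2h = nn.Linear(num_labels, num_bits)

    def forward(self, h, y):
        loss_y = nn.functional.binary_cross_entropy_with_logits(self.h2y(h), y)
        loss_h = nn.functional.mse_loss(torch.tanh(self.y2h(y)), h)
        return loss_y + loss_h

# usage: 16-bit codes, 38 hypothetical binary labels per image
m = MutualReconstruction(16, 38)
h = torch.tanh(torch.randn(4, 16))
y = torch.randint(0, 2, (4, 38)).float()
loss = m(h, y)
```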
- A Survey on Deep Hashing Methods [52.326472103233854]
Nearest neighbor search aims to find the database samples closest to a given query.
With the development of deep learning, deep hashing methods show more advantages than traditional methods.
Deep supervised hashing is categorized into pairwise methods, ranking-based methods, pointwise methods, and quantization-based methods.
Deep unsupervised hashing is categorized into similarity reconstruction-based methods, pseudo-label-based methods and prediction-free self-supervised learning-based methods.
arXiv Detail & Related papers (2020-03-04T08:25:15Z)
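For context on the retrieval step all of these methods share: nearest neighbor search over binary codes reduces to Hamming ranking, as in this self-contained NumPy sketch (illustrative only).

```python
import numpy as np

def hamming_search(query_bits, db_bits, top_k=5):
    """Rank database codes by Hamming distance to a query.
    Codes are 0/1 arrays; counting differing bits gives the distance."""
    dists = np.count_nonzero(db_bits != query_bits, axis=1)
    order = np.argsort(dists, kind="stable")[:top_k]
    return order, dists[order]

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(10_000, 64), dtype=np.uint8)  # 64-bit codes
q = rng.integers(0, 2, size=64, dtype=np.uint8)
idx, d = hamming_search(q, db)
```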
- Image Hashing by Minimizing Discrete Component-wise Wasserstein Distance [12.968141477410597]
Adversarial autoencoders are shown to implicitly learn a robust, locality-preserving hash function that generates balanced, high-quality hash codes.
However, existing adversarial hashing methods are too inefficient for large-scale image retrieval applications.
We propose a new adversarial-autoencoder hashing approach that has a much lower sample requirement and computational cost.
arXiv Detail & Related papers (2020-02-29T00:22:53Z)
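The component-wise distance named in the title above can be sketched directly: for equal-size 1-D samples, the Wasserstein-1 distance reduces to the mean absolute difference of sorted values, applied per code dimension. This is an illustrative reduction, not the paper's discrete formulation.

```python
import torch

def componentwise_wasserstein(h, target):
    """1-D Wasserstein-1 distance per code dimension, averaged over
    dimensions; for equal-size samples it compares sorted values."""
    h_sorted, _ = torch.sort(h, dim=0)
    t_sorted, _ = torch.sort(target, dim=0)
    return (h_sorted - t_sorted).abs().mean()

# usage: push each of 32 code dims toward a balanced +/-1 distribution
h = torch.tanh(torch.randn(128, 32))                    # relaxed codes
target = torch.cat([-torch.ones(64, 32), torch.ones(64, 32)])
loss = componentwise_wasserstein(h, target)
```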