Asymmetric Correlation Quantization Hashing for Cross-modal Retrieval
- URL: http://arxiv.org/abs/2001.04625v1
- Date: Tue, 14 Jan 2020 04:53:30 GMT
- Title: Asymmetric Correlation Quantization Hashing for Cross-modal Retrieval
- Authors: Lu Wang, Jie Yang
- Abstract summary: Cross-modal hashing methods have attracted extensive attention for similarity retrieval across heterogeneous modalities.
This paper proposes a novel Asymmetric Correlation Quantization Hashing (ACQH) method.
ACQH learns projection matrices for the data points of heterogeneous modalities, transforming a query into a low-dimensional real-valued vector in a latent semantic space.
It constructs a stacked compositional quantization embedding in a coarse-to-fine manner, indicating each database point by a series of learnt real-valued codewords.
- Score: 11.988383965639954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to their efficiency in similarity computation and database storage
for large-scale multi-modal data, cross-modal hashing (CMH) methods have
attracted extensive attention for similarity retrieval across heterogeneous
modalities. However, several limitations remain to be addressed: (1) most
current CMH methods transform real-valued data points into discrete compact
binary codes under binary constraints, which limits the representation of the
original data through substantial information loss and produces suboptimal hash
codes; (2) the discrete binary-constrained learning model is hard to solve, and
retrieval performance may degrade greatly when the binary constraints are
relaxed, owing to large quantization error; (3) handling the CMH learning
problem in a symmetric framework leads to a difficult and complex optimization
objective. To address these challenges, this paper proposes a novel Asymmetric
Correlation Quantization Hashing (ACQH) method. Specifically, ACQH learns
projection matrices for the data points of heterogeneous modalities,
transforming a query into a low-dimensional real-valued vector in a latent
semantic space, and simultaneously constructs a stacked compositional
quantization embedding in a coarse-to-fine manner, indicating each database
point by a series of learnt real-valued codewords in the codebooks with the
help of pointwise label-information regression. Besides, unified hash codes
across modalities can be obtained directly by the discrete iterative
optimization framework devised in the paper. Comprehensive experiments on three
diverse benchmark datasets have shown the effectiveness and rationality of
ACQH.
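The stacked compositional quantization described above can be illustrated with a small residual-quantization sketch. This is a hypothetical simplification, not the paper's actual formulation: each codebook stage runs k-means on the residual left by the previous stages (coarse-to-fine), a database point is indicated by one codeword index per codebook, and the query remains a real-valued vector that is compared asymmetrically against the quantized reconstructions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_residual_codebooks(X, n_books=4, n_words=16, n_iter=10):
    """Coarse-to-fine (residual) quantization: each stage fits k-means
    to the residual left by the previous stages."""
    codebooks, residual = [], X.copy()
    for _ in range(n_books):
        # plain k-means on the current residual
        centers = residual[rng.choice(len(residual), n_words, replace=False)]
        for _ in range(n_iter):
            assign = ((residual[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
            for k in range(n_words):
                pts = residual[assign == k]
                if len(pts):
                    centers[k] = pts.mean(0)
        assign = ((residual[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        codebooks.append(centers.copy())
        residual = residual - centers[assign]  # pass the residual to the next stage
    return codebooks

def encode(x, codebooks):
    """Greedy coarse-to-fine code: one codeword index per codebook."""
    code, r = [], x.copy()
    for C in codebooks:
        k = int(((r - C) ** 2).sum(1).argmin())
        code.append(k)
        r = r - C[k]
    return code

def decode(code, codebooks):
    """A database point is reconstructed as the sum of its codewords."""
    return sum(C[k] for C, k in zip(codebooks, code))

# toy database; in ACQH the inputs would be projected modality features
X = rng.normal(size=(200, 8))
books = train_residual_codebooks(X)
codes = [encode(x, books) for x in X]

recon = np.array([decode(c, books) for c in codes])
mse = ((X - recon) ** 2).mean()
print(mse < (X ** 2).mean())  # quantization beats the zero-code baseline

# asymmetric comparison: real-valued query vs. quantized database points
q = X[3] + 0.01 * rng.normal(size=8)  # stands in for a projected query
dists = np.array([((q - decode(c, books)) ** 2).sum() for c in codes])
print("nearest database index:", int(dists.argmin()))
```

The asymmetry is the point: only database points pay the quantization cost, while the query is kept real-valued, which is one way to avoid the large relaxation error that symmetric binary schemes incur.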
Related papers
- Quantization of Large Language Models with an Overdetermined Basis [73.79368761182998]
We introduce an algorithm for data quantization based on the principles of Kashin representation.
Our findings demonstrate that Kashin Quantization achieves competitive or superior quality in model performance.
arXiv Detail & Related papers (2024-04-15T12:38:46Z) - Recovering Simultaneously Structured Data via Non-Convex Iteratively
Reweighted Least Squares [0.8702432681310401]
We propose a new algorithm for recovering data that adheres to multiple, heterogeneous low-dimensional structures from linear observations.
We show that the IRLS method performs favorably in recovering simultaneously structured data from linear measurements.
arXiv Detail & Related papers (2023-06-08T06:35:47Z) - An Instance Selection Algorithm for Big Data in High imbalanced datasets
based on LSH [0.0]
Training machine learning models in real contexts often involves big data sets and imbalanced samples in which the class of interest is underrepresented.
This work proposes three new instance selection (IS) methods able to deal with large and imbalanced data sets.
Algorithms were developed in the Apache Spark framework, guaranteeing their scalability.
arXiv Detail & Related papers (2022-10-09T17:38:41Z) - Rethinking Clustering-Based Pseudo-Labeling for Unsupervised
Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for this is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z) - Asymmetric Scalable Cross-modal Hashing [51.309905690367835]
Cross-modal hashing is a successful method to solve large-scale multimedia retrieval issue.
We propose a novel Asymmetric Scalable Cross-Modal Hashing (ASCMH) to address these issues.
Our ASCMH outperforms the state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
arXiv Detail & Related papers (2022-07-26T04:38:47Z) - Deep Asymmetric Hashing with Dual Semantic Regression and Class
Structure Quantization [9.539842235137376]
We propose a dual semantic asymmetric hashing (DSAH) method, which generates discriminative hash codes under three-fold constraints.
With these three main components, high-quality hash codes can be generated through the network.
arXiv Detail & Related papers (2021-10-24T16:14:36Z) - Manifold learning-based polynomial chaos expansions for high-dimensional
surrogate models [0.0]
We introduce a manifold learning-based method for uncertainty quantification (UQ) of complex systems.
The proposed method is able to achieve highly accurate approximations which ultimately lead to the significant acceleration of UQ tasks.
arXiv Detail & Related papers (2021-07-21T00:24:15Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature
Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z) - CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z) - Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and
Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art performance in terms of precision for document retrieval.
We propose a pairwise loss function with discrete latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to the state-of-the-arts.
arXiv Detail & Related papers (2020-05-21T06:11:33Z) - Deep Robust Multilevel Semantic Cross-Modal Hashing [25.895586911858857]
Hashing based cross-modal retrieval has recently made significant progress.
But straightforward embedding data from different modalities into a joint Hamming space will inevitably produce false codes.
We present a novel Robust Multilevel Semantic Hashing (RMSH) for more accurate cross-modal retrieval.
arXiv Detail & Related papers (2020-02-07T10:08:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.