Beyond Neighbourhood-Preserving Transformations for Quantization-Based
Unsupervised Hashing
- URL: http://arxiv.org/abs/2110.00216v1
- Date: Fri, 1 Oct 2021 05:13:01 GMT
- Title: Beyond Neighbourhood-Preserving Transformations for Quantization-Based
Unsupervised Hashing
- Authors: Sobhan Hemati, H.R. Tizhoosh
- Abstract summary: An effective unsupervised hashing algorithm leads to compact binary codes preserving the neighborhood structure of data as much as possible.
Although employing rigid transformations is effective, they may not reduce quantization loss to its ultimate limit.
Motivated by these shortcomings, we propose to employ both rigid and non-rigid transformations to reduce quantization error and dimensionality simultaneously.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An effective unsupervised hashing algorithm leads to compact binary codes
preserving the neighborhood structure of data as much as possible. One of the
most established schemes for unsupervised hashing is to reduce the
dimensionality of data and then find a rigid (neighbourhood-preserving)
transformation that reduces the quantization error. Although employing rigid
transformations is effective, they may not reduce quantization loss to its
ultimate limit. Moreover, reducing dimensionality and quantization loss in two
separate steps seems sub-optimal. Motivated by these shortcomings, we
propose to employ both rigid and non-rigid transformations to reduce
quantization error and dimensionality simultaneously. We relax the
orthogonality constraint on the projection in a PCA formulation and regularize
it with a quantization term. We show that both the non-rigid projection matrix
and rotation matrix contribute towards minimizing quantization loss but in
different ways. A scalable nested coordinate descent approach is proposed to
optimize this mixed-integer optimization problem. We evaluate the proposed
method on five public benchmark datasets providing almost half a million
images. Comparative results indicate that the proposed method mostly
outperforms state-of-the-art linear methods and competes with end-to-end deep
solutions.
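The formulation sketched in the abstract lends itself to an alternating scheme: the binary codes have a closed form, the rotation is an orthogonal Procrustes problem, and the relaxed (non-orthogonal) projection becomes a ridge-regularized least-squares step. The following is a minimal sketch of that generic loop, assuming an objective of the form ||B - XPR||_F^2 + lam ||P||_F^2; it is not the authors' nested coordinate descent, and all names and defaults are illustrative.

```python
import numpy as np

def learn_hash(X, n_bits=32, lam=1.0, n_iter=50, seed=0):
    """Toy alternating minimization of ||B - X P R||_F^2 + lam ||P||_F^2
    over binary codes B, a non-rigid projection P, and a rotation R.
    Illustrative sketch, not the paper's nested coordinate descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    X = X - X.mean(axis=0)                          # center the data
    P = rng.standard_normal((d, n_bits))            # non-rigid projection
    R = np.linalg.qr(rng.standard_normal((n_bits, n_bits)))[0]  # rotation
    for _ in range(n_iter):
        B = np.sign(X @ P @ R)                      # codes: closed form
        # Rigid step: orthogonal Procrustes via SVD
        U, _, Vt = np.linalg.svd((X @ P).T @ B)
        R = U @ Vt
        # Non-rigid step: ridge-regularized least squares, using
        # ||B - X P R||_F = ||B R^T - X P||_F for orthogonal R
        P = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ (B @ R.T))
    return np.sign(X @ P @ R), P, R
```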
Related papers
- Pushing the Limits of Large Language Model Quantization via the Linearity Theorem [71.3332971315821]
We present a "line theoremarity" establishing a direct relationship between the layer-wise $ell$ reconstruction error and the model perplexity increase due to quantization.
This insight enables two novel applications: (1) a simple data-free LLM quantization method using Hadamard rotations and MSE-optimal grids, dubbed HIGGS, and (2) an optimal solution to the problem of finding non-uniform per-layer quantization levels.
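The rotate-then-quantize recipe is easy to prototype. The sketch below applies a randomized Hadamard rotation and quantizes on a plain uniform grid; it is a minimal stand-in, assuming weights whose last dimension is a power of two, and it substitutes a uniform grid for the MSE-optimal grids that HIGGS actually derives.

```python
import numpy as np
from scipy.linalg import hadamard

def rotate_and_quantize(W, n_levels=16, seed=0):
    """Randomized Hadamard rotation + uniform-grid quantization.
    Assumes W.shape[-1] is a power of two; HIGGS uses MSE-optimal
    grids, a uniform grid stands in here for brevity."""
    rng = np.random.default_rng(seed)
    d = W.shape[-1]
    H = hadamard(d) / np.sqrt(d)             # orthogonal Hadamard matrix
    D = rng.choice([-1.0, 1.0], size=d)      # random sign flips
    W_rot = (W * D) @ H                      # randomized rotation W D H
    scale = np.abs(W_rot).max() / (n_levels // 2)
    q = np.clip(np.round(W_rot / scale),     # nearest grid point
                -(n_levels // 2), n_levels // 2 - 1)
    W_deq = q * scale                        # dequantize
    return (W_deq @ H.T) * D                 # undo the rotation
```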
arXiv Detail & Related papers (2024-11-26T15:35:44Z)
- Deep Hashing via Householder Quantization [3.106177436374861]
Hashing is at the heart of large-scale image similarity search.
A common solution is to employ loss functions that combine a similarity learning term and a quantization penalty term.
We propose an alternative quantization strategy that decomposes the learning problem in two stages.
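The two-stage decomposition works because an orthogonal transform preserves pairwise inner products, so the similarity structure learned in stage one is untouched while stage two reduces the quantization penalty. Below is a minimal sketch of stage two, assuming the penalty ||sign(ZR) - ZR||_F and using a crude random greedy search over Householder reflections in place of the paper's optimizer.

```python
import numpy as np

def householder(v):
    """Reflection H = I - 2 v v^T / (v^T v); always orthogonal."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

def stage_two(Z, n_trials=200, seed=0):
    """Keep similarity-trained embeddings Z fixed and search for an
    orthogonal R (a product of Householder reflections) lowering the
    quantization penalty ||sign(Z R) - Z R||_F. Random greedy search
    stands in for the paper's optimizer."""
    rng = np.random.default_rng(seed)
    d = Z.shape[1]
    R = np.eye(d)
    loss = np.linalg.norm(np.sign(Z) - Z)
    for _ in range(n_trials):
        H = householder(rng.standard_normal(d))
        ZR = Z @ (R @ H)
        new_loss = np.linalg.norm(np.sign(ZR) - ZR)
        if new_loss < loss:                  # keep helpful reflections
            R, loss = R @ H, new_loss
    return R, loss
```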
arXiv Detail & Related papers (2023-11-07T18:47:28Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices.
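A standard way to sidestep explicit covariance matrices is to express the update through matrix-vector products and hand it to an iterative solver. The sketch below shows that generic pattern for a sparse-Bayesian-style posterior mean using conjugate gradients; it illustrates the pattern only, not the paper's algorithm, and `Phi`, `alpha`, and `sigma2` are assumed names.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def posterior_mean(Phi, y, alpha, sigma2):
    """Solve (Phi^T Phi / sigma2 + diag(alpha)) mu = Phi^T y / sigma2
    with conjugate gradients. Only matrix-vector products are used,
    so the d x d posterior covariance is never formed explicitly.
    Generic sparse-Bayesian-style update, not the paper's algorithm."""
    d = Phi.shape[1]
    A = LinearOperator((d, d),
                       matvec=lambda v: Phi.T @ (Phi @ v) / sigma2 + alpha * v)
    mu, info = cg(A, Phi.T @ y / sigma2)
    assert info == 0, "CG did not converge"
    return mu
```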
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Asymmetric Scalable Cross-modal Hashing [51.309905690367835]
Cross-modal hashing is a successful method to solve the large-scale multimedia retrieval problem.
We propose a novel Asymmetric Scalable Cross-Modal Hashing (ASCMH) method to address the limitations of existing approaches.
Our ASCMH outperforms the state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
arXiv Detail & Related papers (2022-07-26T04:38:47Z)
- Towards Mixed-Precision Quantization of Neural Networks via Constrained Optimization [28.76708310896311]
We present a principled framework to solve the mixed-precision quantization problem.
We show that our method is derived in a principled way and is much more computationally efficient.
arXiv Detail & Related papers (2021-10-13T08:09:26Z)
- Nonlinear matrix recovery using optimization on the Grassmann manifold [18.655422834567577]
We investigate the problem of recovering a partially observed high-rank clustering matrix whose columns obey a nonlinear structure such as a union of subspaces.
We show that the alternating iterations converge to a unique point using the Kurdyka-Łojasiewicz property.
arXiv Detail & Related papers (2021-09-13T16:13:13Z)
- Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms which can align partially overlapping point sets and are invariant to the corresponding transformations, such as the robust point matching (RPM) algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
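For the bilinear case, the convex envelope over a box is the classical McCormick underestimator, the maximum of two supporting planes; the snippet below shows this building block in isolation (the paper's BnB machinery around it is not reproduced).

```python
def mccormick_lower(x, y, xl, xu, yl, yu):
    """Convex envelope of the bilinear monomial x*y over the box
    [xl, xu] x [yl, yu]: the maximum of two supporting planes
    (the McCormick underestimator)."""
    return max(xl * y + yl * x - xl * yl,
               xu * y + yu * x - xu * yu)

# Tight at box corners: at (x, y) = (0, 0) on [0, 1]^2 the bound equals x*y.
assert mccormick_lower(0.0, 0.0, 0.0, 1.0, 0.0, 1.0) == 0.0
```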
arXiv Detail & Related papers (2021-01-19T04:24:23Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
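Read literally, the model combines a reconstruction term with a row-sparsity penalty, and features are then ranked by the row norms of the learned projection. The sketch below evaluates one plausible form of that objective and performs the selection step; it is an illustrative reading, not the paper's exact formulation or optimizer.

```python
import numpy as np

def l2p_objective(X, W, lam, p=0.5):
    """||X - X W W^T||_F^2 + lam * sum_i ||w_i||_2^p, where w_i are
    the rows of W. One plausible reading of the model, for
    illustration only."""
    recon = np.linalg.norm(X - X @ W @ W.T) ** 2
    return recon + lam * np.sum(np.linalg.norm(W, axis=1) ** p)

def select_features(W, k):
    """Rank features by the l2 norms of the rows of W."""
    return np.argsort(np.linalg.norm(W, axis=1))[::-1][:k]
```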
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Revisiting Co-Occurring Directions: Sharper Analysis and Efficient Algorithm for Sparse Matrices [23.22254890452548]
We study the streaming model for approximate matrix multiplication (AMM).
We are interested in the scenario that the algorithm can only take one pass over the data with limited memory.
The state-of-the-art deterministic sketching algorithm for streaming AMM is the co-occurring directions (COD) algorithm.
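COD extends the single-matrix Frequent Directions sketch to pairs of matrices, maintaining two coupled sketches so that their product approximates X^T Y. The version below is plain Frequent Directions, shown to illustrate the shrinkage mechanic the two methods share; the COD update itself is not reproduced.

```python
import numpy as np

def shrink(B, ell):
    """SVD shrinkage: damp all directions by the ell-th squared
    singular value, zeroing out the weakest row."""
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    s2 = np.maximum(s ** 2 - s[ell - 1] ** 2, 0.0)
    return np.sqrt(s2)[:, None] * Vt

def frequent_directions(X, ell):
    """One-pass deterministic sketch B (ell x d) with B^T B ~ X^T X
    in O(ell * d) memory; the single-matrix ancestor of COD."""
    B = np.zeros((ell, X.shape[1]))
    free = list(range(ell))                  # indices of zero rows
    for row in X:                            # one pass over the stream
        if not free:
            B = shrink(B, ell)               # frees the last row
            free = [ell - 1]
        B[free.pop()] = row
    return B
```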
arXiv Detail & Related papers (2020-09-05T15:35:59Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
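The SRHT embedding itself is short to write down. The sketch below materializes it explicitly (O(n^2) rather than the fast transform) and plugs it into a plain sketch-and-solve ridge regression, assuming the row count is a power of two; this illustrates the embedding, not the paper's effective-dimension-adaptive scheme.

```python
import numpy as np
from scipy.linalg import hadamard

def srht(A, m, seed=0):
    """Apply S = sqrt(n/m) * P H D: random signs D, normalized
    Hadamard H, and a subsample P of m rows. Assumes the row
    count n is a power of two."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    D = rng.choice([-1.0, 1.0], size=n)
    HA = (hadamard(n) / np.sqrt(n)) @ (D[:, None] * A)
    rows = rng.choice(n, size=m, replace=False)
    return np.sqrt(n / m) * HA[rows]

def sketched_ridge(X, y, lam, m, seed=0):
    """Sketch-and-solve for min ||X w - y||^2 + lam ||w||^2:
    embed [X | y] with the SRHT, then solve the small problem.
    Illustrates the embedding, not the paper's adaptive method."""
    S = srht(np.column_stack([X, y]), m, seed)
    SX, Sy = S[:, :-1], S[:, -1]
    d = X.shape[1]
    return np.linalg.solve(SX.T @ SX + lam * np.eye(d), SX.T @ Sy)
```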
arXiv Detail & Related papers (2020-06-10T15:00:09Z)