Voronoi Diagram Encoded Hashing
- URL: http://arxiv.org/abs/2508.02266v1
- Date: Mon, 04 Aug 2025 10:16:48 GMT
- Title: Voronoi Diagram Encoded Hashing
- Authors: Yang Xu, Kai Ming Ting
- Abstract summary: The Voronoi diagram is a suitable candidate because of three of its properties. We propose a simple and efficient no-learning binary hashing method, called Voronoi Diagram Encoded Hashing (VDeH).
- Score: 9.339307138969193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of learning to hash (L2H) is to derive data-dependent hash functions from a given data distribution in order to map data from the input space to a binary coding space. Despite the success of L2H, two observations have cast doubt on the source of the power of L2H, i.e., learning. First, a recent study shows that even a version of locality sensitive hashing functions without learning achieves binary representations with accuracy comparable to that of L2H, but at lower time cost. Second, existing L2H methods are constrained to three types of hash functions only: thresholding, hyperspheres, and hyperplanes. In this paper, we unveil the potential of Voronoi diagrams in hashing. The Voronoi diagram is a suitable candidate because of three of its properties. This discovery has led us to propose a simple and efficient no-learning binary hashing method, called Voronoi Diagram Encoded Hashing (VDeH), which constructs a set of hash functions through a data-dependent similarity measure and produces independent binary bits through encoded hashing. We demonstrate through experiments on several benchmark datasets that VDeH achieves superior performance and lower computational cost compared to existing state-of-the-art methods under the same bit length.
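The abstract does not spell out the construction, but the general idea of a no-learning Voronoi hashing scheme can be sketched as follows: per bit, sample a small set of anchor points from the data, assign every point to its nearest anchor (its Voronoi cell), and encode the cell identity into a bit. This is a minimal illustration assuming plain Euclidean distance; the paper's actual data-dependent similarity measure and encoding scheme are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def voronoi_hash_sketch(X, n_bits, anchors_per_bit=2):
    """Illustrative Voronoi-style hashing (not the paper's exact method).

    For each bit, a tiny Voronoi diagram is built from anchor points
    sampled from the data itself (this is what makes it data-dependent
    yet learning-free). Each point's bit is derived from the identity
    of the Voronoi cell it falls into.
    """
    n = len(X)
    codes = np.zeros((n, n_bits), dtype=np.uint8)
    for b in range(n_bits):
        idx = rng.choice(n, size=anchors_per_bit, replace=False)
        anchors = X[idx]
        # distance from every point to every anchor; Euclidean stands in
        # for the paper's data-dependent similarity measure (an assumption)
        d = np.linalg.norm(X[:, None, :] - anchors[None, :, :], axis=2)
        nearest = d.argmin(axis=1)      # Voronoi cell membership per point
        codes[:, b] = nearest % 2       # fold the cell id into one bit
    return codes

X = rng.normal(size=(100, 8))
codes = voronoi_hash_sketch(X, n_bits=16)
print(codes.shape)  # (100, 16)
```

Because each bit draws its own independent anchors, the resulting bits are generated independently of one another, which matches the "independent binary bits" claim in spirit.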
Related papers
- A Lower Bound of Hash Codes' Performance [122.88252443695492]
In this paper, we prove that inter-class distinctiveness and intra-class compactness among hash codes determine the lower bound of hash codes' performance.
We then propose a surrogate model to fully exploit the above objective by estimating the posterior of hash codes and controlling it, which results in a low-bias optimization.
By testing on a series of hash-models, we obtain performance improvements among all of them, with an up to 26.5% increase in mean Average Precision and an up to 20.5% increase in accuracy.
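The two quantities this paper bounds performance with can be made concrete: intra-class compactness is the mean Hamming distance among codes of the same class, and inter-class distinctiveness is the mean Hamming distance between codes of different classes. A toy computation on hypothetical 4-bit codes:

```python
import numpy as np

def mean_hamming(a, b):
    # mean pairwise Hamming distance between two sets of binary codes
    return (a[:, None, :] != b[None, :, :]).sum(-1).mean()

# hypothetical 4-bit codes for two classes (two samples each)
codes = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1],
                  [1, 0, 0, 0],
                  [1, 1, 0, 0]], dtype=np.uint8)
labels = np.array([0, 0, 1, 1])

# intra-class compactness: small is good
intra = np.mean([mean_hamming(codes[labels == c], codes[labels == c])
                 for c in (0, 1)])
# inter-class distinctiveness: large is good
inter = mean_hamming(codes[labels == 0], codes[labels == 1])
print(intra, inter)  # 0.5 3.5
```

Good codes, in the paper's terms, push `intra` down and `inter` up; the gap between the two drives the lower bound on retrieval performance.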
arXiv Detail & Related papers (2022-10-12T03:30:56Z) - Hashing Learning with Hyper-Class Representation [8.206031417113987]
Existing unsupervised hash learning is a kind of attribute-centered calculation.
It may not accurately preserve the similarity between data.
In this paper, a hash algorithm is proposed with a hyper-class representation.
arXiv Detail & Related papers (2022-06-06T03:35:45Z) - DVHN: A Deep Hashing Framework for Large-scale Vehicle Re-identification [5.407157027628579]
We propose a deep hash-based vehicle re-identification framework, dubbed DVHN, which substantially reduces memory usage and promotes retrieval efficiency.
DVHN directly learns discrete compact binary hash codes for each image by jointly optimizing the feature learning network and the hash code generating module.
DVHN with 2048 bits achieves 13.94% and 10.21% accuracy improvements in terms of mAP and Rank@1 on the VehicleID (800) dataset.
arXiv Detail & Related papers (2021-12-09T14:11:27Z) - Deep Asymmetric Hashing with Dual Semantic Regression and Class Structure Quantization [9.539842235137376]
We propose a dual semantic asymmetric hashing (DSAH) method, which generates discriminative hash codes under three-fold constraints.
With these three main components, high-quality hash codes can be generated through the network.
arXiv Detail & Related papers (2021-10-24T16:14:36Z) - Online Hashing with Similarity Learning [31.372269816123996]
We propose a novel online hashing framework without updating binary codes.
In the proposed framework, the hash functions are fixed and a parametric similarity function for the binary codes is learnt online.
Experiments on two multi-label image datasets show that our method is competitive or outperforms the state-of-the-art online hashing methods.
arXiv Detail & Related papers (2021-07-04T12:42:29Z) - Unsupervised Multi-Index Semantic Hashing [23.169142004594434]
We propose an unsupervised hashing model that learns hash codes that are both effective and highly efficient by being optimized for multi-index hashing.
We experimentally compare MISH to state-of-the-art semantic hashing baselines in the task of document similarity search.
We find that even though multi-index hashing also improves the efficiency of the baselines compared to a linear scan, they are still upwards of 33% slower than MISH.
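The multi-index hashing that MISH optimizes for can be sketched generically: split each code into substrings and index each substring in its own hash table; by the pigeonhole principle, two codes within Hamming radius r must share at least one identical substring whenever r is smaller than the number of tables, so candidate retrieval needs only exact substring lookups. This is a sketch of the generic technique, not of MISH itself.

```python
import numpy as np
from collections import defaultdict

def build_multi_index(codes, n_tables):
    """Index binary codes by splitting them into n_tables substrings,
    each stored in its own exact-match hash table."""
    L = codes.shape[1] // n_tables
    tables = [defaultdict(list) for _ in range(n_tables)]
    for i, c in enumerate(codes):
        for t in range(n_tables):
            tables[t][tuple(c[t * L:(t + 1) * L])].append(i)
    return tables, L

def candidates(query, tables, L):
    # union of all buckets sharing at least one substring with the query
    out = set()
    for t, table in enumerate(tables):
        out.update(table.get(tuple(query[t * L:(t + 1) * L]), []))
    return out

codes = np.array([[0, 0, 1, 1],
                  [0, 0, 0, 0],
                  [1, 1, 1, 1]], dtype=np.uint8)
tables, L = build_multi_index(codes, n_tables=2)
# query matches codes 0 and 1 on their first substring (0, 0)
print(sorted(candidates(np.array([0, 0, 1, 0]), tables, L)))  # [0, 1]
```

MISH's contribution, per the abstract, is learning codes whose substring distributions make these lookups efficient, rather than applying the index to codes learned obliviously.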
arXiv Detail & Related papers (2021-03-26T13:33:48Z) - CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z) - Making Online Sketching Hashing Even Faster [63.16042585506435]
We present a FasteR Online Sketching Hashing (FROSH) algorithm to sketch the data in a more compact form via an independent transformation.
We provide theoretical justification to guarantee that our proposed FROSH consumes less time and achieves a comparable sketching precision.
We also extend FROSH to its distributed implementation, namely DFROSH, to further reduce the training time cost of FROSH.
arXiv Detail & Related papers (2020-10-10T08:50:53Z) - Deep Hashing with Hash-Consistent Large Margin Proxy Embeddings [65.36757931982469]
Image hash codes are produced by binarizing embeddings of convolutional neural networks (CNN) trained for either classification or retrieval.
The use of a fixed set of proxies (weights of the CNN classification layer) is proposed to eliminate this ambiguity.
The resulting hash-consistent large margin (HCLM) proxies are shown to encourage saturation of hashing units, thus guaranteeing a small binarization error.
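The binarization step this paper targets is simple to state: take the sign of each embedding dimension. The "binarization error" it seeks to shrink is the gap between the real-valued embedding and its nearest {-1, +1} code, which vanishes when hashing units saturate near ±1. A minimal sketch with hypothetical embedding values:

```python
import numpy as np

def binarize(embeddings):
    """Hash codes via the sign of each embedding dimension, plus the
    mean binarization error (distance of each unit from the nearest
    of -1 or +1). Saturated units (outputs near +/-1) give low error."""
    codes = (embeddings > 0).astype(np.uint8)
    err = np.abs(np.abs(embeddings) - 1.0).mean()
    return codes, err

# hypothetical CNN embeddings: mostly saturated, one unit (0.2) is not
emb = np.array([[0.9, -1.1, 0.2],
                [-0.95, 1.05, -1.0]])
codes, err = binarize(emb)
print(codes)
print(round(float(err), 4))  # 0.1833 -- dominated by the unsaturated 0.2
```

The HCLM proxies in the paper are a training-time device to push embeddings toward saturation, so that this post-hoc sign step loses as little as possible.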
arXiv Detail & Related papers (2020-07-27T23:47:43Z) - Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods perform poorly in retrieval when using extremely short hash codes.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In this proposed RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z) - Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph, which is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.