Fast Class-wise Updating for Online Hashing
- URL: http://arxiv.org/abs/2012.00318v1
- Date: Tue, 1 Dec 2020 07:41:54 GMT
- Title: Fast Class-wise Updating for Online Hashing
- Authors: Mingbao Lin, Rongrong Ji, Xiaoshuai Sun, Baochang Zhang, Feiyue Huang,
Yonghong Tian, Dacheng Tao
- Abstract summary: This paper presents a novel supervised online hashing scheme, termed Fast Class-wise Updating for Online Hashing (FCOH).
A class-wise updating method is developed to decompose the binary code learning and alternately renew the hash functions in a class-wise fashion, which removes the reliance on large numbers of training batches.
To further achieve online efficiency, we propose a semi-relaxation optimization, which accelerates the online training by treating different binary constraints independently.
- Score: 196.14748396106955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online image hashing has received increasing research attention recently,
which processes large-scale data in a streaming fashion to update the hash
functions on-the-fly. To this end, most existing works study this problem
under a supervised setting, i.e., using class labels to boost the hashing
performance, which suffers from defects in both adaptivity and efficiency:
First, large numbers of training batches are required to learn up-to-date hash
functions, which leads to poor online adaptivity. Second, the training is
time-consuming, which contradicts the core need of online learning. In
this paper, a novel supervised online hashing scheme, termed Fast Class-wise
Updating for Online Hashing (FCOH), is proposed to address the above two
challenges by introducing a novel and efficient inner product operation. To
achieve fast online adaptivity, a class-wise updating method is developed to
decompose the binary code learning and alternately renew the hash functions
in a class-wise fashion, which removes the reliance on large numbers of
training batches. Quantitatively, such a decomposition further leads to at
least 75% storage savings. To further achieve online efficiency, we propose a
semi-relaxation optimization, which accelerates the online training by treating
different binary constraints independently. Without additional constraints and
variables, the time complexity is significantly reduced. Such a scheme is also
quantitatively shown to preserve past information well while updating the hash
functions. We quantitatively demonstrate that the combination of
class-wise updating and semi-relaxation optimization provides superior
performance compared to various state-of-the-art methods, as verified
through extensive experiments on three widely-used datasets.
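The class-wise decomposition described above can be illustrated with a minimal NumPy sketch: per-class statistics are accumulated as inner products and the hash projection is refreshed in closed form, so a streaming batch never has to be stored after it is consumed. The variable names, the fixed per-class target codes, and the regularized least-squares refresh are illustrative assumptions, not the authors' released implementation or their exact semi-relaxation solver.

```python
import numpy as np

def sign_codes(X, W):
    """Map real-valued features to binary codes in {-1, +1}."""
    return np.where(X @ W >= 0, 1.0, -1.0)

class ClasswiseOnlineHasher:
    """Toy class-wise online hashing: hash functions are refreshed from
    per-class inner-product statistics, so past batches are never revisited."""

    def __init__(self, dim, n_bits, n_classes, reg=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((dim, n_bits))      # hash projection
        # One target binary code per class (the "class-wise" decomposition).
        self.class_codes = np.where(
            rng.standard_normal((n_classes, n_bits)) >= 0, 1.0, -1.0)
        # Accumulated sufficient statistics for a closed-form refresh.
        self.A = reg * np.eye(dim)            # sum of x x^T over all seen data
        self.B = np.zeros((dim, n_bits))      # sum of x b_c^T (target codes)

    def partial_fit(self, X, y):
        """Fold in one streaming batch (features X, integer class labels y)."""
        for c in np.unique(y):
            Xc = X[y == c]
            self.A += Xc.T @ Xc
            self.B += Xc.T @ np.tile(self.class_codes[c], (len(Xc), 1))
        # Regularized least-squares refresh of the hash functions.
        self.W = np.linalg.solve(self.A, self.B)

    def encode(self, X):
        return sign_codes(X, self.W)

# Toy usage: two streaming batches from a 3-class problem.
rng = np.random.default_rng(1)
hasher = ClasswiseOnlineHasher(dim=16, n_bits=8, n_classes=3)
for _ in range(2):
    X = rng.standard_normal((30, 16))
    y = rng.integers(0, 3, size=30)
    hasher.partial_fit(X, y)
print(hasher.encode(rng.standard_normal((5, 16))).shape)  # (5, 8)
```

In this toy form the memory footprint is set by the dim-by-dim statistics matrix and one target code per class rather than by the number of past samples, which mirrors the storage-saving argument made in the abstract.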
Related papers
- Online-BLS: An Accurate and Efficient Online Broad Learning System for Data Stream Classification [52.251569042852815]
We introduce an online broad learning system framework with closed-form solutions for each online update.
We design an effective weight estimation algorithm and an efficient online updating strategy.
Our framework is naturally extended to data stream scenarios with concept drift and exceeds state-of-the-art baselines.
arXiv Detail & Related papers (2025-01-28T13:21:59Z) - KALAHash: Knowledge-Anchored Low-Resource Adaptation for Deep Hashing [19.667480064079083]
Existing deep hashing methods rely on abundant training data, leaving the more challenging scenario of low-resource adaptation relatively underexplored.
We introduce Class-Calibration LoRA, a novel plug-and-play approach that dynamically constructs low-rank adaptation by leveraging class-level textual knowledge embeddings.
Our proposed method, Knowledge-Anchored Low-Resource Adaptation Hashing (KALAHash), significantly boosts retrieval performance and achieves 4x data efficiency in low-resource scenarios.
arXiv Detail & Related papers (2024-12-27T03:04:54Z) - A Flexible Plug-and-Play Module for Generating Variable-Length [61.095479786194836]
Nested Hash Layer (NHL) is a plug-and-play module designed for existing deep supervised hashing models.
NHL simultaneously generates hash codes of varying lengths in a nested manner (a minimal sketch of this nested-code idea appears after this list).
NHL achieves superior retrieval performance across various deep hashing models.
arXiv Detail & Related papers (2024-12-12T04:13:09Z) - Deep Lifelong Cross-modal Hashing [17.278818467305683]
We propose a novel deep lifelong cross-modal hashing method to achieve lifelong hashing retrieval instead of re-training the hash functions repeatedly.
Specifically, we design a lifelong learning strategy to update the hash functions by directly training on the incremental data instead of retraining new hash functions using all the accumulated data.
It yields a substantial average improvement of over 20% in retrieval accuracy and reduces training time by over 80% when new data arrives continuously.
arXiv Detail & Related papers (2023-04-26T07:56:22Z) - Adaptive Cross Batch Normalization for Metric Learning [75.91093210956116]
Metric learning is a fundamental problem in computer vision.
We show that it is equally important to ensure that the accumulated embeddings are up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z) - Online Enhanced Semantic Hashing: Towards Effective and Efficient
Retrieval for Streaming Multi-Modal Data [21.157717777481572]
We propose a new model, termed Online enhAnced SemantIc haShing (OASIS).
We design a novel semantic-enhanced representation for data, which can help handle newly arriving classes.
Our method can outperform state-of-the-art models.
arXiv Detail & Related papers (2021-09-09T13:30:31Z) - Online Hashing with Similarity Learning [31.372269816123996]
We propose a novel online hashing framework without updating binary codes.
In the proposed framework, the hash functions are fixed and a parametric similarity function for the binary codes is learnt online (a minimal sketch of this fixed-codes-plus-learned-similarity idea appears after this list).
Experiments on two multi-label image datasets show that our method is competitive with or outperforms state-of-the-art online hashing methods.
arXiv Detail & Related papers (2021-07-04T12:42:29Z) - Making Online Sketching Hashing Even Faster [63.16042585506435]
We present a FasteR Online Sketching Hashing (FROSH) algorithm to sketch the data in a more compact form via an independent transformation.
We provide theoretical justification to guarantee that our proposed FROSH consumes less time and achieves a comparable sketching precision.
We also extend FROSH to its distributed implementation, namely DFROSH, to further reduce the training time cost of FROSH.
arXiv Detail & Related papers (2020-10-10T08:50:53Z) - Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.