Fast Class-wise Updating for Online Hashing
- URL: http://arxiv.org/abs/2012.00318v1
- Date: Tue, 1 Dec 2020 07:41:54 GMT
- Title: Fast Class-wise Updating for Online Hashing
- Authors: Mingbao Lin, Rongrong Ji, Xiaoshuai Sun, Baochang Zhang, Feiyue Huang,
Yonghong Tian, Dacheng Tao
- Abstract summary: This paper presents a novel supervised online hashing scheme, termed Fast Class-wise Updating for Online Hashing (FCOH).
A class-wise updating method is developed to decompose the binary code learning and alternately renew the hash functions in a class-wise fashion, which eases the burden of requiring large amounts of training batches.
To further achieve online efficiency, we propose a semi-relaxation optimization, which accelerates the online training by treating different binary constraints independently.
- Score: 196.14748396106955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online image hashing has received increasing research attention recently,
which processes large-scale data in a streaming fashion to update the hash
functions on-the-fly. To this end, most existing works address this problem
under a supervised setting, i.e., using class labels to boost the hashing
performance, which suffers from defects in both adaptivity and efficiency:
First, large amounts of training batches are required to learn up-to-date hash
functions, which leads to poor online adaptivity. Second, the training is
time-consuming, which contradicts the core need of online learning. In
this paper, a novel supervised online hashing scheme, termed Fast Class-wise
Updating for Online Hashing (FCOH), is proposed to address the above two
challenges by introducing a novel and efficient inner product operation. To
achieve fast online adaptivity, a class-wise updating method is developed to
decompose the binary code learning and alternately renew the hash functions
in a class-wise fashion, which eases the burden of requiring large amounts of
training batches. Quantitatively, such a decomposition further leads to at
least 75% storage saving. To further achieve online efficiency, we propose a
semi-relaxation optimization, which accelerates the online training by treating
different binary constraints independently. Without additional constraints and
variables, the time complexity is significantly reduced. Such a scheme is also
quantitatively shown to preserve past information well while updating the hash
functions. We have quantitatively demonstrated that the collective effort of
class-wise updating and semi-relaxation optimization provides superior
performance compared to various state-of-the-art methods, as verified through
extensive experiments on three widely used datasets.
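To ground the class-wise updating and semi-relaxation ideas, here is a minimal sketch of how such a scheme could look for a linear hash model. It is a sketch under assumptions, not the paper's exact algorithm: the random class codebook, the ridge regularizer `lam`, and the closed-form regression update are all assumptions; FCOH derives its codewords and updates through the inner-product formulation described above.

```python
import numpy as np

rng = np.random.default_rng(0)
r, d, num_classes = 32, 128, 10    # code length, feature dim, classes
lam = 1.0                          # ridge regularizer (assumed)

# One fixed r-bit target code per class (random here; FCOH instead derives
# its codebook from an inner-product optimization, not reproduced here).
class_codes = np.sign(rng.standard_normal((num_classes, r)))

# Running sufficient statistics for W = (X^T X + lam*I)^{-1} X^T B, so an
# arriving batch only touches the statistics of the classes it contains.
XtX = lam * np.eye(d)
XtB = np.zeros((d, r))

def update(X_batch, y_batch):
    """Consume one streaming batch (features X, integer labels y) class-wise."""
    global XtX, XtB
    for c in np.unique(y_batch):            # class-wise decomposition
        Xc = X_batch[y_batch == c]
        XtX += Xc.T @ Xc                    # all rows of class c share one code
        XtB += np.outer(Xc.sum(axis=0), class_codes[c])
    return np.linalg.solve(XtX, XtB)        # refreshed hash projection W

def hash_codes(X, W):
    # Semi-relaxed binarization: each bit is treated independently via sign().
    return np.sign(X @ W)

W = update(rng.standard_normal((64, d)), rng.integers(0, num_classes, 64))
print(hash_codes(rng.standard_normal((5, d)), W).shape)   # (5, 32)
```

The point of the sketch is the adaptivity property claimed above: an arriving batch updates only per-class statistics, so the hash functions are refreshed without replaying past training batches.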
Related papers
- Online Feature Updates Improve Online (Generalized) Label Shift Adaptation [51.328801874640675]
Our novel method, Online Label Shift adaptation with Online Feature Updates (OLS-OFU), leverages self-supervised learning to refine the feature extraction process.
By carefully designing the algorithm, OLS-OFU maintains online regret convergence similar to results in the literature while taking the improved features into account.
arXiv Detail & Related papers (2024-02-05T22:03:25Z)
- Deep Lifelong Cross-modal Hashing [17.278818467305683]
We propose a novel deep lifelong cross-modal hashing to achieve lifelong hashing retrieval instead of re-training hash function repeatedly.
Specifically, we design a lifelong learning strategy that updates the hash functions by training directly on the incremental data, instead of retraining new hash functions on all the accumulated data.
It yields a substantial average improvement of over 20% in retrieval accuracy and reduces training time by over 80% when new data arrives continuously.
arXiv Detail & Related papers (2023-04-26T07:56:22Z)
- Adaptive Cross Batch Normalization for Metric Learning [75.91093210956116]
Metric learning is a fundamental problem in computer vision.
We show that it is equally important to ensure that the accumulated embeddings are up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z)
- Neural Architecture for Online Ensemble Continual Learning [6.241435193861262]
We present a fully differentiable ensemble method that allows us to efficiently train an ensemble of neural networks in the end-to-end regime.
The proposed technique achieves SOTA results without a memory buffer and clearly outperforms the reference methods.
arXiv Detail & Related papers (2022-11-27T23:17:08Z)
- Online Convolutional Re-parameterization [51.97831675242173]
We present online convolutional re-parameterization (OREPA), a two-stage pipeline that aims to reduce the huge training overhead by squeezing the complex training-time block into a single convolution.
Compared with state-of-the-art re-param models, OREPA reduces the training-time memory cost by about 70% and accelerates training by around 2x.
We also conduct experiments on object detection and semantic segmentation and show consistent improvements on the downstream tasks.
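As a rough illustration of the re-parameterization principle (a generic sketch, not OREPA's specific training-time blocks): because convolution is linear, parallel branches of the same shape can be collapsed into a single deploy-time kernel by summing their weights.

```python
import numpy as np

# Two parallel 3x3 convolution branches: kernels (out_ch, in_ch, kH, kW)
# plus per-channel biases.
rng = np.random.default_rng(0)
k1, b1 = rng.standard_normal((8, 4, 3, 3)), rng.standard_normal(8)
k2, b2 = rng.standard_normal((8, 4, 3, 3)), rng.standard_normal(8)

# Convolution is linear, so conv(x, k1) + conv(x, k2) == conv(x, k1 + k2):
# both branches collapse into one convolution for deployment.
k_merged, b_merged = k1 + k2, b1 + b2
print(k_merged.shape, b_merged.shape)   # (8, 4, 3, 3) (8,)
```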
arXiv Detail & Related papers (2022-04-02T09:50:19Z)
- Online Enhanced Semantic Hashing: Towards Effective and Efficient Retrieval for Streaming Multi-Modal Data [21.157717777481572]
We propose a new model, termed Online enhAnced SemantIc haShing (OASIS).
We design a novel semantic-enhanced data representation, which helps handle newly arriving classes.
Experiments show that our method can exceed state-of-the-art models.
arXiv Detail & Related papers (2021-09-09T13:30:31Z)
- Online Hashing with Similarity Learning [31.372269816123996]
We propose a novel online hashing framework without updating binary codes.
In the proposed framework, the hash functions are fixed and a parametric similarity function for the binary codes is learnt online.
Experiments on two multi-label image datasets show that our method is competitive or outperforms the state-of-the-art online hashing methods.
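A minimal sketch of the fixed-codes, learned-similarity idea described above, assuming a weighted code inner product and a hinge-style online update (the paper's actual parameterization and loss are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 32
w = np.ones(r)     # per-bit similarity weights: the only parameters learned
lr = 0.01

def similarity(b1, b2):
    # Parametric similarity over fixed binary codes: sum_k w_k * b1_k * b2_k.
    return np.dot(w, b1 * b2)

def online_step(b1, b2, label):
    """One online update from a pair labeled +1 (similar) or -1 (dissimilar)."""
    global w
    if label * similarity(b1, b2) < 1.0:   # hinge margin violated
        w += lr * label * (b1 * b2)        # push sim toward the label's side

b1 = np.sign(rng.standard_normal(r))
b2 = np.sign(rng.standard_normal(r))
online_step(b1, b2, label=1)
print(similarity(b1, b2))
```

Note that the binary codes `b1`, `b2` are never touched; only `w` changes, which is what makes the update cheap at streaming time.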
arXiv Detail & Related papers (2021-07-04T12:42:29Z)
- FDDH: Fast Discriminative Discrete Hashing for Large-Scale Cross-Modal Retrieval [41.125141897096874]
Cross-modal hashing is favored for its effectiveness and efficiency.
Most existing methods do not sufficiently exploit the discriminative power of semantic information when learning the hash codes.
We propose the Fast Discriminative Discrete Hashing (FDDH) approach for large-scale cross-modal retrieval.
arXiv Detail & Related papers (2021-05-15T03:53:48Z)
- Making Online Sketching Hashing Even Faster [63.16042585506435]
We present a FasteR Online Sketching Hashing (FROSH) algorithm to sketch the data in a more compact form via an independent transformation.
We provide theoretical justification to guarantee that our proposed FROSH consumes less time and achieves a comparable sketching precision.
We also extend FROSH to its distributed implementation, namely DFROSH, to further reduce the training time cost of FROSH.
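For context, the following is a minimal frequent-directions sketch of a feature stream, the kind of compact summary that sketching-based hashing builds its projections from. FROSH's faster independent transformation is not reproduced; treat this as a generic baseline.

```python
import numpy as np

def frequent_directions(stream, ell):
    """Maintain a 2*ell x d sketch S with S^T S approximating X^T X."""
    d = stream[0].shape[0]
    S = np.zeros((2 * ell, d))
    for x in stream:
        zero_rows = np.where(~S.any(axis=1))[0]
        if len(zero_rows) == 0:             # buffer full: shrink via SVD
            _, sig, Vt = np.linalg.svd(S, full_matrices=False)
            delta = sig[ell - 1] ** 2
            sig = np.sqrt(np.maximum(sig ** 2 - delta, 0.0))
            S = np.diag(sig) @ Vt           # at least ell rows become zero
            zero_rows = np.where(~S.any(axis=1))[0]
        S[zero_rows[0]] = x                 # insert the new sample
    return S

rng = np.random.default_rng(0)
S = frequent_directions(rng.standard_normal((200, 64)), ell=8)
# Hash projections can then be derived from the sketch's top singular vectors.
print(S.shape)   # (16, 64)
```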
arXiv Detail & Related papers (2020-10-10T08:50:53Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
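Finally, one standard way to couple continuous embeddings with discrete codes, as the entry above describes, is a sign quantizer trained with a straight-through gradient estimator. This is a generic sketch, not the paper's GNN architecture or training objective.

```python
import numpy as np

def sign_ste_forward(z):
    """Discrete hash code used for retrieval: b = sign(z)."""
    return np.where(z >= 0, 1.0, -1.0)

def sign_ste_backward(grad_b):
    # Straight-through estimator: treat d sign(z)/dz as the identity so the
    # gradient on the discrete codes flows back to the continuous embedding.
    return grad_b

z = np.array([0.3, -1.2, 0.0, 2.1])   # continuous embedding (e.g., GNN output)
b = sign_ste_forward(z)               # codes for fast Hamming-space search
grad_z = sign_ste_backward(np.ones_like(b))
print(b, grad_z)
```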