Towards Universal Backward-Compatible Representation Learning
- URL: http://arxiv.org/abs/2203.01583v1
- Date: Thu, 3 Mar 2022 09:23:51 GMT
- Title: Towards Universal Backward-Compatible Representation Learning
- Authors: Binjie Zhang, Yixiao Ge, Yantao Shen, Shupeng Su, Chun Yuan, Xuyuan
Xu, Yexin Wang, Ying Shan
- Abstract summary: Backward-compatible representation learning is introduced to support backfill-free model upgrades.
We first introduce a new problem of universal backward-compatible representation learning, covering all possible data splits in model upgrades.
We propose a simple yet effective method, dubbed Universal Backward-Compatible Training (UniBCT), with a novel structural prototype refinement algorithm.
- Score: 29.77801805854168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional model upgrades for visual search systems require offline refresh
of gallery features by feeding gallery images into new models (dubbed
"backfill"), which is time-consuming and expensive, especially in large-scale
applications. The task of backward-compatible representation learning is
therefore introduced to support backfill-free model upgrades, where the new
query features are interoperable with the old gallery features. Despite the
success, previous works only investigated a closed-set training scenario (i.e.,
the new training set shares the same classes as the old one), and fall short
in more realistic and challenging open-set scenarios. To this end, we first
introduce a new problem of universal backward-compatible representation
learning, covering all possible data splits in model upgrades. We further
propose a simple yet effective method, dubbed Universal Backward-Compatible
Training (UniBCT), with a novel structural prototype refinement algorithm, to
learn compatible representations in all kinds of model upgrading benchmarks in
a unified manner. Comprehensive experiments on the large-scale face recognition
datasets MS1Mv3 and IJB-C fully demonstrate the effectiveness of our method.
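The core premise above can be made concrete with a minimal retrieval sketch (illustrative names only, not the paper's code): the query is embedded by the new model, while the gallery features were extracted once by the old model. Direct comparison is only meaningful if the two embedding spaces are compatible, which is exactly what backward-compatible training aims to guarantee.

```python
import numpy as np

def retrieve(query_feat, gallery_feats, top_k=5):
    """Rank gallery items by cosine similarity to the query feature.

    In a backfill-free upgrade, `query_feat` comes from the NEW model while
    `gallery_feats` were extracted by the OLD model, so no gallery refresh
    is needed -- provided the embedding spaces are compatible.
    """
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    scores = g @ q                       # cosine similarity to every gallery item
    return np.argsort(-scores)[:top_k]   # indices of the best matches

# Toy example: 4 gallery vectors stand in for old-model features; the query
# is a near-duplicate of gallery item 2, as produced by a compatible new model.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(4, 8))
query = gallery[2] + 0.01 * rng.normal(size=8)
print(retrieve(query, gallery, top_k=1))  # -> [2]
```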
Related papers
- Backward-Compatible Aligned Representations via an Orthogonal Transformation Layer [20.96380700548786]
Visual retrieval systems face challenges when updating models with improved representations due to misalignment between the old and new representations.
Prior research has explored backward-compatible training methods that enable direct comparisons between new and old representations without backfilling.
In this paper, we address how to balance backward compatibility against the performance of independently trained models.
arXiv Detail & Related papers (2024-08-16T15:05:28Z)
- MixBCT: Towards Self-Adapting Backward-Compatible Training [66.52766344751635]
We propose MixBCT, a simple yet highly effective backward-compatible training method.
We conduct experiments on the large-scale face recognition datasets MS1Mv3 and IJB-C.
arXiv Detail & Related papers (2023-08-14T05:55:38Z)
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- Online Backfilling with No Regret for Large-Scale Image Retrieval [50.162438586686356]
Backfilling is the process of re-extracting all gallery embeddings from upgraded models in image retrieval systems.
We propose an online backfilling algorithm, which enables us to achieve a progressive performance improvement during the backfilling process.
We incorporate a reverse transformation module for more effective and efficient merging, which is further enhanced by adopting a metric-compatible contrastive learning approach.
arXiv Detail & Related papers (2023-01-10T03:10:32Z)
- $BT^2$: Backward-compatible Training with Basis Transformation [107.37014712361788]
Retrieval systems often require recomputing the representation of every piece of data in the gallery when updating to a better representation model.
This process is known as backfilling and can be especially costly in the real world where the gallery often contains billions of samples.
Recently, researchers have proposed the idea of Backward-Compatible Training (BCT), where the new representation model can be trained with an auxiliary loss to make it backward compatible with the old representation.
arXiv Detail & Related papers (2022-11-08T04:00:23Z)
- Privacy-Preserving Model Upgrades with Bidirectional Compatible Training in Image Retrieval [28.268764435617975]
We propose a new model upgrade paradigm, termed Bidirectional Compatible Training (BiCT).
BiCT upgrades the old gallery embeddings by forward-compatible training towards the embedding space of the backward-compatible new model.
We conduct comprehensive experiments to verify the prominent improvement by BiCT, and observe that the seemingly minor backward-compatibility loss weight actually plays an essential role in both backward and forward retrieval performance.
arXiv Detail & Related papers (2022-04-29T07:38:09Z)
- Forward Compatible Training for Representation Learning [53.300192863727226]
Backward-compatible training (BCT) modifies training of the new model to make its representations compatible with those of the old model.
BCT can significantly hinder the performance of the new model.
In this work, we propose a new learning paradigm for representation learning: forward compatible training (FCT).
arXiv Detail & Related papers (2021-12-06T06:18:54Z)
- CoReS: Compatible Representations via Stationarity [20.607894099896214]
In visual search systems, compatible features enable the direct comparison of old and new learned features, allowing them to be used interchangeably over time.
We propose CoReS, a new training procedure to learn representations that are compatible with those previously learned.
We demonstrate that our training procedure largely outperforms the current state of the art and is particularly effective in the case of multiple upgrades of the training-set.
arXiv Detail & Related papers (2021-11-15T09:35:54Z)
- Towards Backward-Compatible Representation Learning [86.39292571306395]
We propose a way to learn visual features that are compatible with previously computed ones even when they have different dimensions.
This enables visual search systems to bypass computing new features for all previously seen images when updating the embedding models.
We propose a framework to train embedding models, called backward-compatible training (BCT), as a first step towards backward compatible representation learning.
arXiv Detail & Related papers (2020-03-26T14:34:09Z)
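Several entries above (BCT, $BT^2$, UniBCT) share one core mechanism: the new model is trained with an auxiliary loss that ties its embeddings to the old model's embedding space. The sketch below shows one generic cosine-based compatibility term; the actual papers use classifier- or prototype-based formulations, so all names and the exact loss are illustrative assumptions.

```python
import numpy as np

def compatibility_loss(new_feats, old_feats):
    """Auxiliary term pulling the new model's embeddings toward the old
    model's embeddings for the same images (1 - cosine similarity,
    averaged over the batch)."""
    new_n = new_feats / np.linalg.norm(new_feats, axis=1, keepdims=True)
    old_n = old_feats / np.linalg.norm(old_feats, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(new_n * old_n, axis=1)))

def total_loss(task_loss, new_feats, old_feats, lam=1.0):
    """New-model training objective: the original task loss plus the
    compatibility term weighted by `lam`."""
    return task_loss + lam * compatibility_loss(new_feats, old_feats)

# Identical embeddings incur zero compatibility penalty, so the objective
# reduces to the task loss alone.
f = np.eye(3)
print(total_loss(0.5, f, f))  # -> 0.5
```

Raising `lam` trades new-model accuracy for tighter alignment with the old embedding space, which matches the observation above that BCT can hinder the new model's performance.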
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.