MixBCT: Towards Self-Adapting Backward-Compatible Training
- URL: http://arxiv.org/abs/2308.06948v2
- Date: Mon, 27 May 2024 01:17:51 GMT
- Title: MixBCT: Towards Self-Adapting Backward-Compatible Training
- Authors: Yu Liang, Yufeng Zhang, Shiliang Zhang, Yaowei Wang, Sheng Xiao, Rong Xiao, Xiaoyu Wang
- Abstract summary: We propose MixBCT, a simple yet highly effective backward-compatible training method.
We conduct experiments on the large-scale face recognition datasets MS1Mv3 and IJB-C.
- Score: 66.52766344751635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Backward-compatible training circumvents the need for expensive updates to the old gallery database when deploying an advanced new model in the retrieval system. Previous methods achieved backward compatibility by aligning prototypes of the new model with the old one, yet they often overlooked the distribution of old features, limiting their effectiveness when the low quality of the old model results in weak feature discriminability. Instance-based methods like L2 regression take into account the distribution of old features but impose strong constraints on the performance of the new model itself. In this paper, we propose MixBCT, a simple yet highly effective backward-compatible training method that serves as a unified framework for old models of varying quality. We construct a single loss function applied to mixed old and new features to facilitate backward-compatible training, which adaptively adjusts the constraint domain for new features based on the distribution of old features. We conducted extensive experiments on the large-scale face recognition datasets MS1Mv3 and IJB-C to verify the effectiveness of our method. The experimental results clearly demonstrate its superiority over previous methods. Code is available at https://github.com/yuleung/MixBCT.
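The abstract only states that a single loss is applied to a mixture of old and new features; the exact formulation is in the paper and repository linked above. As a rough illustration of that idea (not the authors' implementation), the PyTorch sketch below feeds old gallery features and new-model features of the same identities through one shared cosine-margin classifier, so the constraint on the new features follows wherever the old feature distribution lies; the head design, margin, and scale are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosFaceHead(nn.Module):
    """Single cosine-margin classifier shared by old and new features (illustrative placeholder)."""
    def __init__(self, feat_dim, num_classes, scale=64.0, margin=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale, self.margin = scale, margin

    def forward(self, feats, labels):
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))  # cosine logits
        onehot = F.one_hot(labels, cos.size(1)).float()
        logits = self.scale * (cos - self.margin * onehot)            # margin on the target class
        return F.cross_entropy(logits, labels)

def mixed_bct_loss(head, new_feats, new_labels, old_feats, old_labels):
    """One loss over the union of old (fixed, from the gallery) and new features.

    Because the old features take part in the same classification objective, the classifier,
    and hence the constraint on the new features, adapts to where the old features actually lie.
    """
    feats = torch.cat([new_feats, old_feats.detach()], dim=0)  # old features are not updated
    labels = torch.cat([new_labels, old_labels], dim=0)
    return head(feats, labels)

# Toy usage with random tensors standing in for backbone outputs / stored gallery features.
head = CosFaceHead(feat_dim=512, num_classes=100)
new_feats = torch.randn(8, 512, requires_grad=True)
old_feats = torch.randn(8, 512)
labels = torch.randint(0, 100, (8,))
loss = mixed_bct_loss(head, new_feats, labels, old_feats, labels)
loss.backward()
```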
Related papers
- Backward-Compatible Aligned Representations via an Orthogonal Transformation Layer [20.96380700548786]
Visual retrieval systems face challenges when updating models with improved representations due to misalignment between the old and new representations.
Prior research has explored backward-compatible training methods that enable direct comparisons between new and old representations without backfilling.
In this paper, we address achieving a balance between backward compatibility and the performance of independently trained models.
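The summary above does not describe how the orthogonal layer is trained; purely as an illustration, the sketch below uses PyTorch's orthogonal parametrization to learn a rotation-like map from new embeddings onto stored old embeddings, which keeps distances among the new features intact while aligning the two spaces. The cosine alignment objective and the dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.parametrizations import orthogonal

dim = 512
# Linear map constrained to stay orthogonal: it can rotate/reflect the new embedding space
# onto the old one but cannot distort distances between new embeddings.
transform = orthogonal(nn.Linear(dim, dim, bias=False))
optimizer = torch.optim.Adam(transform.parameters(), lr=1e-3)

# Stand-ins for paired embeddings of the same images from the new and old models.
new_emb = F.normalize(torch.randn(256, dim))
old_emb = F.normalize(torch.randn(256, dim))

for _ in range(100):
    optimizer.zero_grad()
    mapped = transform(new_emb)                                   # new features expressed in the old space
    loss = 1.0 - F.cosine_similarity(mapped, old_emb).mean()      # simple alignment objective (assumed)
    loss.backward()
    optimizer.step()
```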
arXiv Detail & Related papers (2024-08-16T15:05:28Z)
- Towards Cross-modal Backward-compatible Representation Learning for Vision-Language Models [44.56258991182532]
Backward-compatible Training (BT) has been proposed to ensure that the new model's embeddings align with those of the old model.
This paper extends the concept of vision-only BT to the field of cross-modal retrieval.
We propose a projection module that maps the new model's embeddings to those of the old model.
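No architecture for the projection module is given in the summary; a minimal sketch, assuming a small MLP and a cosine alignment loss applied to both image and text embeddings of a CLIP-style model (all names and dimensions are placeholders):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionModule(nn.Module):
    """Small MLP mapping new-model embeddings into the old model's embedding space."""
    def __init__(self, new_dim=768, old_dim=512, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(new_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, old_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def compatibility_loss(proj, new_img, new_txt, old_img, old_txt):
    """Align projected new image/text embeddings with the frozen old embeddings."""
    img_loss = 1.0 - F.cosine_similarity(proj(new_img), old_img).mean()
    txt_loss = 1.0 - F.cosine_similarity(proj(new_txt), old_txt).mean()
    return img_loss + txt_loss

proj = ProjectionModule()
new_img, new_txt = torch.randn(16, 768), torch.randn(16, 768)
old_img, old_txt = F.normalize(torch.randn(16, 512)), F.normalize(torch.randn(16, 512))
loss = compatibility_loss(proj, new_img, new_txt, old_img, old_txt)
loss.backward()
```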
arXiv Detail & Related papers (2024-05-23T15:46:35Z)
- Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [65.57123249246358]
We propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL.
We train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces.
Our prototype complement strategy synthesizes old classes' new features without using any old class instance.
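The abstract snippet does not specify the adapter design; the sketch below is only a generic bottleneck adapter (down-project, nonlinearity, up-project, residual), one instance of which would be attached to the frozen pre-trained backbone per task.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic lightweight adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        nn.init.zeros_(self.up.weight)   # start as identity so the frozen backbone is unchanged
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

# One adapter per incremental task; the shared pre-trained backbone stays frozen.
adapters = nn.ModuleList([BottleneckAdapter() for _ in range(3)])
features = torch.randn(4, 768)                   # stand-in for frozen backbone output
task_specific = [adapter(features) for adapter in adapters]
```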
arXiv Detail & Related papers (2024-03-18T17:58:13Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) without the requirement of prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Boundary-aware Backward-Compatible Representation via Adversarial Learning in Image Retrieval [17.995993499100017]
Backward-compatible training (BCT) improves the compatibility between the old and new models while limiting the negative impact on retrieval performance.
We introduce AdvBCT, an Adversarial Backward-Training method with an elastic boundary constraint.
Our method outperforms other BCT methods on both compatibility and discrimination.
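The summary does not spell out the adversarial setup or the elastic boundary constraint; purely to illustrate the adversarial ingredient, the sketch below trains a small discriminator to tell old embeddings from new ones while the new model is rewarded for making them indistinguishable. The boundary term is omitted and every module name is a placeholder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Predicts whether an embedding came from the old model (label 1) or the new model (label 0).
discriminator = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))

def discriminator_loss(old_emb, new_emb):
    """Step 1: train the discriminator to separate old and new embeddings."""
    logits = torch.cat([discriminator(old_emb), discriminator(new_emb.detach())]).squeeze(-1)
    targets = torch.cat([torch.ones(len(old_emb)), torch.zeros(len(new_emb))])
    return F.binary_cross_entropy_with_logits(logits, targets)

def compatibility_loss(new_emb):
    """Step 2: train the new model to make its embeddings look 'old' to the discriminator."""
    logits = discriminator(new_emb).squeeze(-1)
    return F.binary_cross_entropy_with_logits(logits, torch.ones(len(new_emb)))

old_emb = torch.randn(32, 512)                          # stored features from the old model
new_emb = torch.randn(32, 512, requires_grad=True)      # stand-in for new backbone output
d_loss = discriminator_loss(old_emb, new_emb)
c_loss = compatibility_loss(new_emb)
```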
arXiv Detail & Related papers (2023-05-04T07:37:07Z)
- Darwinian Model Upgrades: Model Evolving with Selective Compatibility [29.920204547961696]
BCT presents the first step towards backward-compatible model upgrades to get rid of backfilling.
We propose Darwinian Model Upgrades (DMU) which disentangle the inheritance and variation in the model evolving with selective backward compatibility and forward adaptation.
DMU effectively alleviates the new-to-new degradation and improves new-to-old compatibility, rendering a more proper model upgrading paradigm in large-scale retrieval systems.
arXiv Detail & Related papers (2022-10-13T12:28:48Z)
- Towards Universal Backward-Compatible Representation Learning [29.77801805854168]
Backward-compatible representation learning is introduced to support backfill-free model upgrades.
We first introduce a new problem of universal backward-compatible representation learning, covering all possible data splits in model upgrades.
We propose a simple yet effective method, dubbed Universal Backward-Compatible Training (UniBCT), with a novel structural prototype refinement algorithm.
arXiv Detail & Related papers (2022-03-03T09:23:51Z)
- Forward Compatible Training for Representation Learning [53.300192863727226]
Backward-compatible training (BCT) modifies the training of the new model to make its representations compatible with those of the old model.
However, BCT can significantly hinder the performance of the new model.
In this work, we propose a new learning paradigm for representation learning: forward compatible training (FCT).
arXiv Detail & Related papers (2021-12-06T06:18:54Z)
- Neighborhood Consensus Contrastive Learning for Backward-Compatible Representation [46.86784621137665]
Backward-compatible representation learning is proposed to make the "new" features compatible with the "old" features.
We propose a Neighborhood Consensus Contrastive Learning (NCCL) method, which learns backward-compatible representation from a neighborhood consensus perspective.
Our method ensures backward compatibility without impairing the accuracy of the new model.
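The neighborhood-consensus objective itself is not defined in the summary; as a generic stand-in for the contrastive component, the sketch below uses an InfoNCE-style loss that pulls each new embedding toward the old embedding of the same image and pushes it away from old embeddings of other images (temperature and batch construction are assumptions).

```python
import torch
import torch.nn.functional as F

def new_to_old_contrastive(new_emb, old_emb, temperature=0.07):
    """InfoNCE between new and old embeddings of the same batch of images.

    Positive pair: new and old embeddings of the same image (same row index).
    Negatives: old embeddings of every other image in the batch.
    """
    new_emb = F.normalize(new_emb, dim=-1)
    old_emb = F.normalize(old_emb, dim=-1)
    logits = new_emb @ old_emb.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(len(new_emb))             # the diagonal holds the positive pairs
    return F.cross_entropy(logits, targets)

new_emb = torch.randn(64, 512, requires_grad=True)  # new model's output for a batch of images
old_emb = torch.randn(64, 512)                       # stored features from the old model
loss = new_to_old_contrastive(new_emb, old_emb)
loss.backward()
```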
arXiv Detail & Related papers (2021-08-07T05:50:47Z)
- Towards Backward-Compatible Representation Learning [86.39292571306395]
We propose a way to learn visual features that are compatible with previously computed ones even when they have different dimensions.
This enables visual search systems to bypass computing new features for all previously seen images when updating the embedding models.
We propose a framework to train embedding models, called backward-compatible training (BCT), as a first step towards backward compatible representation learning.
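The summary does not restate the training objective; a heavily simplified sketch of one common backward-compatible recipe (not necessarily the paper's exact formulation) adds a second classification term that routes the new embeddings through the frozen old classifier, with a learned projection standing in for the dimension mismatch mentioned above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

new_dim, old_dim, num_classes = 256, 128, 1000

new_classifier = nn.Linear(new_dim, num_classes)      # classifier trained jointly with the new model
old_classifier = nn.Linear(old_dim, num_classes)      # frozen classifier kept from the old model
for p in old_classifier.parameters():
    p.requires_grad_(False)
to_old_space = nn.Linear(new_dim, old_dim)            # handles the dimension mismatch (assumed)

def bct_style_loss(new_feats, labels, influence_weight=1.0):
    """New-model classification loss plus an 'influence'-style term through the old classifier."""
    new_loss = F.cross_entropy(new_classifier(new_feats), labels)
    old_loss = F.cross_entropy(old_classifier(to_old_space(new_feats)), labels)
    return new_loss + influence_weight * old_loss

feats = torch.randn(32, new_dim, requires_grad=True)  # stand-in for new backbone output
labels = torch.randint(0, num_classes, (32,))
loss = bct_style_loss(feats, labels)
loss.backward()
```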
arXiv Detail & Related papers (2020-03-26T14:34:09Z)