Towards Backward-Compatible Representation Learning
- URL: http://arxiv.org/abs/2003.11942v3
- Date: Wed, 6 Jan 2021 06:59:07 GMT
- Title: Towards Backward-Compatible Representation Learning
- Authors: Yantao Shen, Yuanjun Xiong, Wei Xia, and Stefano Soatto
- Abstract summary: We propose a way to learn visual features that are compatible with previously computed ones even when they have different dimensions.
This enables visual search systems to bypass computing new features for all previously seen images when updating the embedding models.
We propose a framework to train embedding models, called backward-compatible training (BCT), as a first step towards backward compatible representation learning.
- Score: 86.39292571306395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a way to learn visual features that are compatible with previously
computed ones even when they have different dimensions and are learned via
different neural network architectures and loss functions. Compatible means
that, if such features are used to compare images, then "new" features can be
compared directly to "old" features, so they can be used interchangeably. This
enables visual search systems to bypass computing new features for all
previously seen images when updating the embedding models, a process known as
backfilling. Backward compatibility is critical to quickly deploy new embedding
models that leverage ever-growing large-scale training datasets and
improvements in deep learning architectures and training methods. We propose a
framework to train embedding models, called backward-compatible training (BCT),
as a first step towards backward compatible representation learning. In
experiments on learning embeddings for face recognition, models trained with
BCT successfully achieve backward compatibility without sacrificing accuracy,
thus enabling backfill-free model updates of visual embeddings.
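To make the BCT idea above concrete, here is a minimal sketch of such a training objective. This is not the authors' implementation; it assumes one common instantiation in which, alongside the new model's own classification loss, an "influence" term scores the new embeddings with the frozen classifier head of the old model, pulling the new features toward the old embedding space. All function and variable names are hypothetical.

```python
import numpy as np

def softmax_ce(logits, labels):
    # Numerically stable softmax cross-entropy, averaged over the batch.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def bct_loss(new_logits, new_embedding, old_classifier_W, labels):
    """Backward-compatible training objective (sketch).

    Total loss = new-model classification loss + influence loss,
    where the FROZEN old classifier (old_classifier_W) scores the
    NEW embeddings, tying them to the old embedding space.
    """
    loss_new = softmax_ce(new_logits, labels)
    # Influence term: apply the old linear head to the new features.
    old_head_logits = new_embedding @ old_classifier_W.T
    loss_influence = softmax_ce(old_head_logits, labels)
    return loss_new + loss_influence
```

In a real training loop, `old_classifier_W` would be the weights of the old model's classifier, kept fixed while only the new embedding network receives gradients.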
Related papers
- Backward-Compatible Aligned Representations via an Orthogonal Transformation Layer [20.96380700548786]
Visual retrieval systems face challenges when updating models with improved representations due to misalignment between the old and new representations.
Prior research has explored backward-compatible training methods that enable direct comparisons between new and old representations without backfilling.
In this paper, we address the balance between backward compatibility and the performance of independently trained models.
arXiv Detail & Related papers (2024-08-16T15:05:28Z)
- MixBCT: Towards Self-Adapting Backward-Compatible Training [66.52766344751635]
We propose MixBCT, a simple yet highly effective backward-compatible training method.
We conduct experiments on the large-scale face recognition datasets MS1Mv3 and IJB-C.
arXiv Detail & Related papers (2023-08-14T05:55:38Z)
- Boundary-aware Backward-Compatible Representation via Adversarial Learning in Image Retrieval [17.995993499100017]
Backward-compatible training (BCT) improves the compatibility of two models with less negative impact on retrieval performance.
We introduce AdvBCT, an Adversarial Backward-Training method with an elastic boundary constraint.
Our method outperforms other BCT methods on both compatibility and discrimination.
arXiv Detail & Related papers (2023-05-04T07:37:07Z)
- Forward Compatible Few-Shot Class-Incremental Learning [71.2459746681805]
A machine learning model should recognize new classes without forgetting old ones.
Current methods handle incremental learning retrospectively.
We propose ForwArd Compatible Training (FACT) for FSCIL.
arXiv Detail & Related papers (2022-03-14T09:36:35Z)
- Towards Universal Backward-Compatible Representation Learning [29.77801805854168]
Backward-compatible representation learning is introduced to support backfill-free model upgrades.
We first introduce a new problem of universal backward-compatible representation learning, covering all possible data splits in model upgrades.
We propose a simple yet effective method, dubbed Universal Backward-Compatible Training (UniBCT), with a novel structural prototype refinement algorithm.
arXiv Detail & Related papers (2022-03-03T09:23:51Z)
- Forward Compatible Training for Representation Learning [53.300192863727226]
Backward-compatible training (BCT) modifies training of the new model to make its representations compatible with those of the old model.
However, BCT can significantly hinder the performance of the new model.
In this work, we propose a new learning paradigm for representation learning: forward compatible training (FCT)
arXiv Detail & Related papers (2021-12-06T06:18:54Z)
- Neighborhood Consensus Contrastive Learning for Backward-Compatible Representation [46.86784621137665]
Backward-compatible representation is proposed to make "new" features compatible with "old" features.
We propose a Neighborhood Consensus Contrastive Learning (NCCL) method, which learns backward-compatible representation from a neighborhood consensus perspective.
Our method ensures backward compatibility without impairing the accuracy of the new model.
arXiv Detail & Related papers (2021-08-07T05:50:47Z)
- Learning Compatible Embeddings [4.926613940939671]
Backward compatibility when rolling out new models can greatly reduce costs or even bypass feature re-encoding of existing gallery images for in-production visual retrieval systems.
We propose a general framework called Learning Compatible Embeddings (LCE) which is applicable for both cross model compatibility and compatible training in direct/forward/backward manners.
arXiv Detail & Related papers (2021-08-04T10:48:41Z)
- Memory-Efficient Incremental Learning Through Feature Adaptation [71.1449769528535]
We introduce an approach for incremental learning that preserves feature descriptors of training images from previously learned classes.
Keeping the much lower-dimensional feature embeddings of images reduces the memory footprint significantly.
Experimental results show that our method achieves state-of-the-art classification accuracy in incremental learning benchmarks.
arXiv Detail & Related papers (2020-04-01T21:16:05Z)
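Several of the papers above revolve around the same deployment scenario: a gallery indexed with old-model features is queried with new-model features, skipping the backfill. A minimal sketch of that backfill-free matching step, assuming cosine-similarity retrieval (function and variable names are hypothetical, not from any of the listed papers):

```python
import numpy as np

def cosine_search(query_new, gallery_old, k=5):
    """Backfill-free retrieval sketch.

    If training made new features directly comparable to old ones,
    a query embedded by the NEW model can be matched against a
    gallery still indexed with OLD features, so the gallery never
    needs re-encoding.
    """
    # L2-normalize both sides so the dot product is cosine similarity.
    q = query_new / np.linalg.norm(query_new)
    g = gallery_old / np.linalg.norm(gallery_old, axis=1, keepdims=True)
    scores = g @ q
    # Indices of the k most similar gallery entries, best first.
    top = np.argsort(-scores)[:k]
    return top, scores[top]
```

Without a compatible representation, this direct comparison is meaningless and every gallery image must be re-embedded with the new model before search can resume.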
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.