Learning Compatible Embeddings
- URL: http://arxiv.org/abs/2108.01958v1
- Date: Wed, 4 Aug 2021 10:48:41 GMT
- Title: Learning Compatible Embeddings
- Authors: Qiang Meng, Chixiang Zhang, Xiaoqiang Xu, Feng Zhou
- Abstract summary: Achieving backward compatibility when rolling out new models can greatly reduce costs, or even bypass feature re-encoding of existing gallery images, for in-production visual retrieval systems.
We propose a general framework called Learning Compatible Embeddings (LCE), which is applicable to both cross-model compatibility and compatible training in direct/forward/backward manners.
- Score: 4.926613940939671
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving backward compatibility when rolling out new models can greatly
reduce costs, or even bypass feature re-encoding of existing gallery images, for
in-production visual retrieval systems. Previous related works usually leverage
losses used in knowledge distillation, which can cause performance degradation
or fail to guarantee compatibility. To address these issues, we propose a general
framework called Learning Compatible Embeddings (LCE), which is applicable to
both cross-model compatibility and compatible training in
direct/forward/backward manners. Compatibility is achieved by aligning
class centers between models, either directly or via a transformation, and by
enforcing more compact intra-class distributions for the new model. Experiments are
conducted in extensive scenarios, including changes of training dataset, loss
function, network architecture, and feature dimension, and demonstrate
that LCE efficiently enables model compatibility with a marginal sacrifice of
accuracy. The code will be available at https://github.com/IrvingMeng/LCE.
Related papers
- Stationary Representations: Optimally Approximating Compatibility and Implications for Improved Model Replacements [20.96380700548786]
Learning compatible representations enables the interchangeable use of semantic features as models are updated over time.
This is particularly relevant in search and retrieval systems where it is crucial to avoid reprocessing of the gallery images with the updated model.
We show that the stationary representations learned by the $d$-Simplex fixed classifier optimally approximate compatible representations according to the two inequality constraints of its formal definition.
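The $d$-Simplex fixed classifier mentioned above fixes the classifier weights to the vertices of a regular simplex rather than learning them. A minimal sketch of one standard simplex construction (centered, normalized standard basis vectors) is shown below; the function name `d_simplex_vertices` is an assumption for this sketch, and this is one common construction, not necessarily the paper's exact one.

```python
import numpy as np

def d_simplex_vertices(num_classes):
    """Build vertices of a regular simplex in R^{num_classes}.

    The standard basis vectors are centered (projected onto the hyperplane
    orthogonal to the all-ones direction) and normalized, yielding unit-norm
    class vectors with equal pairwise cosine similarity -1/(num_classes - 1).
    """
    e = np.eye(num_classes)
    centered = e - e.mean(axis=0)          # remove the common mean direction
    return centered / np.linalg.norm(centered, axis=1, keepdims=True)
```

Because every pair of vertices has the same angle, no class direction is privileged, which is what allows the classifier to be fixed before training.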
arXiv Detail & Related papers (2024-05-04T06:31:38Z) - MixBCT: Towards Self-Adapting Backward-Compatible Training [66.52766344751635]
We propose MixBCT, a simple yet highly effective backward-compatible training method.
We conduct experiments on the large-scale face recognition datasets MS1Mv3 and IJB-C.
arXiv Detail & Related papers (2023-08-14T05:55:38Z) - Boundary-aware Backward-Compatible Representation via Adversarial Learning in Image Retrieval [17.995993499100017]
Backward-compatible training (BCT) improves the compatibility of two models with less negative impact on retrieval performance.
We introduce AdvBCT, an Adversarial Backward-Training method with an elastic boundary constraint.
Our method outperforms other BCT methods on both compatibility and discrimination.
arXiv Detail & Related papers (2023-05-04T07:37:07Z) - Switchable Representation Learning Framework with Self-compatibility [50.48336074436792]
We propose a Switchable representation learning Framework with Self-Compatibility (SFSC)
SFSC generates a series of compatible sub-models with different capacities through one training process.
SFSC achieves state-of-the-art performance on the evaluated datasets.
arXiv Detail & Related papers (2022-06-16T16:46:32Z) - Learning Backward Compatible Embeddings [74.74171220055766]
We study the problem of embedding version updates and their backward compatibility.
We develop a solution based on learning backward compatible embeddings.
We show that the best method, which we call BC-Aligner, maintains backward compatibility with existing unintended tasks even after multiple model version updates.
arXiv Detail & Related papers (2022-06-07T06:30:34Z) - Towards Universal Backward-Compatible Representation Learning [29.77801805854168]
Backward-compatible representation learning is introduced to support backfill-free model upgrades.
We first introduce a new problem of universal backward-compatible representation learning, covering all possible data splits in model upgrades.
We propose a simple yet effective method, dubbed Universal Backward-Compatible Training (UniBCT), with a novel structural prototype refinement algorithm.
arXiv Detail & Related papers (2022-03-03T09:23:51Z) - Forward Compatible Training for Representation Learning [53.300192863727226]
Backward-compatible training (BCT) modifies training of the new model to make its representations compatible with those of the old model.
However, BCT can significantly hinder the performance of the new model.
In this work, we propose a new learning paradigm for representation learning: forward compatible training (FCT).
arXiv Detail & Related papers (2021-12-06T06:18:54Z) - An Empirical Analysis of Backward Compatibility in Machine Learning Systems [47.04803977692586]
We consider how updates, intended to improve ML models, can introduce new errors that can significantly affect downstream systems and users.
For example, updates in models used in cloud-based classification services, such as image recognition, can cause unexpected erroneous behavior.
arXiv Detail & Related papers (2020-08-11T08:10:58Z) - Towards Backward-Compatible Representation Learning [86.39292571306395]
We propose a way to learn visual features that are compatible with previously computed ones even when they have different dimensions.
This enables visual search systems to bypass computing new features for all previously seen images when updating the embedding models.
We propose a framework to train embedding models, called backward-compatible training (BCT), as a first step towards backward compatible representation learning.
arXiv Detail & Related papers (2020-03-26T14:34:09Z)
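The backward-compatible training (BCT) idea in the last entry is commonly realized by scoring the new model's embeddings with the old model's (frozen) classifier, so the new features remain usable in the old feature space. A hedged sketch of such an influence-style term follows; `bct_influence_loss` and its signature are assumptions for illustration, not the paper's exact formulation, and it assumes old and new embeddings share a dimension.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=1, keepdims=True)
    ez = np.exp(z)
    return ez / ez.sum(axis=1, keepdims=True)

def bct_influence_loss(new_embeds, labels, old_classifier_W):
    """Sketch of a BCT-style 'influence' term: classify the NEW embeddings
    with the OLD model's frozen classifier weights and apply cross-entropy,
    encouraging the new features to stay compatible with the old space."""
    logits = new_embeds @ old_classifier_W.T        # shape (N, C)
    probs = softmax(logits)
    correct = probs[np.arange(len(labels)), labels]
    return -np.mean(np.log(correct + 1e-12))
```

In practice this term would be added to the new model's own training loss, with the old classifier weights kept frozen.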
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.