Boundary-aware Backward-Compatible Representation via Adversarial
Learning in Image Retrieval
- URL: http://arxiv.org/abs/2305.02610v1
- Date: Thu, 4 May 2023 07:37:07 GMT
- Title: Boundary-aware Backward-Compatible Representation via Adversarial
Learning in Image Retrieval
- Authors: Tan Pan, Furong Xu, Xudong Yang, Sifeng He, Chen Jiang, Qingpei Guo,
Feng Qian, Xiaobo Zhang, Yuan Cheng, Lei Yang, Wei Chu
- Abstract summary: Backward-compatible training (BCT) makes new embeddings directly comparable to old ones; the key challenge is improving the compatibility of two models with less negative impact on retrieval performance.
We introduce AdvBCT, an Adversarial Backward-Compatible Training method with an elastic boundary constraint.
Our method outperforms other BCT methods on both compatibility and discrimination.
- Score: 17.995993499100017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image retrieval plays an important role in the Internet world. Usually, the
core parts of mainstream visual retrieval systems include an online service of
the embedding model and a large-scale vector database. For traditional model
upgrades, the old model will not be replaced by the new one until the
embeddings of all the images in the database are re-computed by the new model,
which takes days or weeks for a large amount of data. Recently,
backward-compatible training (BCT) enables the new model to be immediately
deployed online by making the new embeddings directly comparable to the old
ones. For BCT, improving the compatibility of two models with less negative
impact on retrieval performance is the key challenge. In this paper, we
introduce AdvBCT, an Adversarial Backward-Compatible Training method with an
elastic boundary constraint that takes both compatibility and discrimination
into consideration. We first employ adversarial learning to minimize the
distribution disparity between embeddings of the new model and the old model.
Meanwhile, we add an elastic boundary constraint during training to improve
compatibility and discrimination efficiently. Extensive experiments on GLDv2,
Revisited Oxford (ROxford), and Revisited Paris (RParis) demonstrate that our
method outperforms other BCT methods on both compatibility and discrimination.
The implementation of AdvBCT will be publicly available at
https://github.com/Ashespt/AdvBCT.
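The abstract names the two ingredients of AdvBCT, an adversarial term that aligns the new embedding distribution with the old one, and an elastic boundary constraint, but does not give their exact form. Below is a minimal PyTorch-style sketch of how such terms could be wired up; the module and function names, the discriminator architecture, and the hinge form of the boundary term (with an illustrative `margin`) are assumptions for illustration, not the authors' implementation (see the GitHub link above for that).

```python
# Sketch of (1) an adversarial loss pushing new-model embeddings toward the
# old model's embedding distribution and (2) an elastic boundary constraint
# keeping new embeddings near class centers from the old model.
# All names and the exact loss forms are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmbeddingDiscriminator(nn.Module):
    """Predicts whether an embedding came from the old or the new model."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.net(emb).squeeze(-1)  # one logit per embedding


def adversarial_compat_losses(disc, new_emb, old_emb):
    """Discriminator loss plus the generator-style loss for the new model.

    The discriminator learns to tell old embeddings (label 1) from new ones
    (label 0); the new model is rewarded when its embeddings look 'old'.
    """
    ones = torch.ones(old_emb.size(0), device=old_emb.device)
    zeros = torch.zeros(new_emb.size(0), device=new_emb.device)
    d_loss = F.binary_cross_entropy_with_logits(disc(old_emb.detach()), ones) \
        + F.binary_cross_entropy_with_logits(disc(new_emb.detach()), zeros)
    g_loss = F.binary_cross_entropy_with_logits(disc(new_emb),
                                                torch.ones_like(zeros))
    return d_loss, g_loss


def elastic_boundary_loss(new_emb, old_class_centers, labels, margin=0.2):
    """Hinge-style term: pull each new embedding to within an (elastic)
    margin of the class center computed from old-model embeddings."""
    dist = (new_emb - old_class_centers[labels]).norm(dim=1)
    return F.relu(dist - margin).mean()
```

In a full training loop one would presumably combine the new model's retrieval loss with `g_loss` and the boundary term, and update the discriminator with `d_loss`; the weighting between these terms is not stated in the abstract.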
Related papers
- Towards Cross-modal Backward-compatible Representation Learning for Vision-Language Models [44.56258991182532]
Backward-compatible Training (BT) has been proposed to ensure that the new model's embeddings align with those of the old model.
This paper extends the concept of vision-only BT to the field of cross-modal retrieval.
We propose a projection module that maps the new model's embeddings to those of the old model.
arXiv Detail & Related papers (2024-05-23T15:46:35Z)
- MixBCT: Towards Self-Adapting Backward-Compatible Training [66.52766344751635]
We propose MixBCT, a simple yet highly effective backward-compatible training method.
We conduct experiments on the large-scale face recognition datasets MS1Mv3 and IJB-C.
arXiv Detail & Related papers (2023-08-14T05:55:38Z)
- Learning Backward Compatible Embeddings [74.74171220055766]
We study the problem of embedding version updates and their backward compatibility.
We develop a solution based on learning backward compatible embeddings.
We show that the best method, which we call BC-Aligner, maintains backward compatibility with existing unintended tasks even after multiple model version updates.
arXiv Detail & Related papers (2022-06-07T06:30:34Z)
- R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning [64.7996065569457]
Class-Incremental Learning (CIL) struggles with catastrophic forgetting when learning new knowledge.
Recent DFCIL works introduce techniques such as model inversion to synthesize data for previous classes, but they fail to overcome forgetting due to the severe domain gap between the synthetic and real data.
This paper proposes relation-guided representation learning (RRL) for DFCIL, dubbed R-DFCIL.
arXiv Detail & Related papers (2022-03-24T14:54:15Z)
- Forward Compatible Training for Representation Learning [53.300192863727226]
Backward-compatible training (BCT) modifies training of the new model to make its representations compatible with those of the old model.
However, BCT can significantly hinder the performance of the new model.
In this work, we propose a new learning paradigm for representation learning: forward compatible training (FCT).
arXiv Detail & Related papers (2021-12-06T06:18:54Z)
- Learning Compatible Embeddings [4.926613940939671]
Backward compatibility when rolling out new models can greatly reduce costs or even bypass feature re-encoding of existing gallery images for in-production visual retrieval systems.
We propose a general framework called Learning Compatible Embeddings (LCE), which is applicable to both cross-model compatibility and compatible training in direct/forward/backward manners.
arXiv Detail & Related papers (2021-08-04T10:48:41Z)
- Towards Backward-Compatible Representation Learning [86.39292571306395]
We propose a way to learn visual features that are compatible with previously computed ones even when they have different dimensions.
This enables visual search systems to bypass computing new features for all previously seen images when updating the embedding models.
We propose a framework to train embedding models, called backward-compatible training (BCT), as a first step towards backward compatible representation learning.
arXiv Detail & Related papers (2020-03-26T14:34:09Z)
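The last entry above names the BCT framework without describing its training signal. A common reading of BCT is an "influence" loss in which the frozen old classifier is also applied to the new model's embeddings; the sketch below illustrates that reading in PyTorch. The function name, the optional `adapter` for mismatched embedding dimensions, and `influence_weight` are illustrative assumptions rather than the paper's code.

```python
# Sketch of a BCT-style objective: the new embedding model is trained with its
# own classification loss plus a term that keeps its embeddings classifiable
# by the frozen old classifier. Names and weighting are assumptions.
import torch.nn.functional as F


def bct_influence_loss(new_emb, labels, new_classifier, old_classifier,
                       adapter=None, influence_weight=1.0):
    """New-model classification loss plus an 'influence' term from the
    frozen old classifier applied to the new embeddings."""
    new_loss = F.cross_entropy(new_classifier(new_emb), labels)
    # If old and new embedding dimensions differ, map through a small learned
    # adapter (illustrative assumption); otherwise feed the embedding directly.
    emb_for_old = adapter(new_emb) if adapter is not None else new_emb
    old_loss = F.cross_entropy(old_classifier(emb_for_old), labels)
    return new_loss + influence_weight * old_loss
```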