CoReS: Compatible Representations via Stationarity
- URL: http://arxiv.org/abs/2111.07632v3
- Date: Tue, 28 Mar 2023 13:06:17 GMT
- Title: CoReS: Compatible Representations via Stationarity
- Authors: Niccolo Biondi and Federico Pernici and Matteo Bruni and Alberto Del Bimbo
- Abstract summary: In visual search systems, compatible features enable the direct comparison of old and new learned features, allowing them to be used interchangeably over time.
We propose CoReS, a new training procedure to learn representations that are *compatible* with those previously learned.
We demonstrate that our training procedure largely outperforms the current state of the art and is particularly effective in the case of multiple upgrades of the training-set.
- Score: 20.607894099896214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compatible features enable the direct comparison of old and new
learned features, allowing them to be used interchangeably over time. In visual
search systems, this eliminates the need to extract new features from the
gallery-set when the representation model is upgraded with novel data. This is
of great value in real applications, as re-indexing the gallery-set can be
computationally
expensive when the gallery-set is large, or even infeasible due to privacy or
other concerns of the application. In this paper, we propose CoReS, a new
training procedure to learn representations that are *compatible* with
those previously learned, grounded in the stationarity of the features
provided by fixed classifiers based on polytopes. With this solution, classes
are maximally separated in the representation space and maintain their spatial
configuration stationary as new classes are added, so that there is no need to
learn any mappings between representations nor to impose pairwise training with
the previously learned model. We demonstrate that our training procedure
largely outperforms the current state of the art and is particularly effective
in the case of multiple upgrades of the training-set, which is the typical case
in real applications.
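The mechanism behind this stationarity can be made concrete. The sketch below shows a fixed d-Simplex classifier of the kind the abstract refers to: the class prototypes are the vertices of a regular simplex, set analytically and never trained, so the spatial configuration that features are pulled toward cannot drift as the model is upgraded. This is a minimal illustration assuming a PyTorch setup; the exact construction, normalization, and CoReS's pre-allocation of vertices for future classes may differ in detail from the paper.

```python
import math

import torch
import torch.nn as nn


def d_simplex_vertices(num_classes: int) -> torch.Tensor:
    """Vertices of a regular simplex with `num_classes` vertices in
    R^(num_classes - 1): the standard basis vectors plus alpha * (1, ..., 1),
    with alpha chosen so that all pairwise vertex distances are equal."""
    d = num_classes - 1
    alpha = (1.0 - math.sqrt(num_classes)) / d
    vertices = torch.cat([torch.eye(d), alpha * torch.ones(1, d)], dim=0)
    # Center on the origin so the class directions are maximally separated.
    return vertices - vertices.mean(dim=0, keepdim=True)


class FixedSimplexClassifier(nn.Module):
    """Classification head whose weights are fixed simplex vertices.

    The weights are registered as a buffer, not a parameter, so they are
    never optimized: the class geometry stays stationary across upgrades.
    """

    def __init__(self, num_classes: int):
        super().__init__()
        w = d_simplex_vertices(num_classes)
        w = w / w.norm(dim=1, keepdim=True)  # unit-norm class prototypes
        self.register_buffer("weight", w)    # fixed: not a learnable parameter

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Logits are similarities between features and the fixed vertices.
        return features @ self.weight.t()
```

In this setup only the backbone producing `features` is trained; upgrading with new data (or new classes, up to the number of pre-allocated vertices) leaves the old class directions untouched, which is what makes old and new features directly comparable.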
Related papers
- Dual Consolidation for Pre-Trained Model-Based Domain-Incremental Learning [64.1745161657794]
Domain-Incremental Learning (DIL) involves the progressive adaptation of a model to new concepts across different domains.
Recent advances in pre-trained models provide a solid foundation for DIL.
However, learning new concepts often results in the catastrophic forgetting of pre-trained knowledge.
We propose DUal ConsolidaTion (Duct) to unify and consolidate historical knowledge.
arXiv Detail & Related papers (2024-10-01T17:58:06Z)
- Backward-Compatible Aligned Representations via an Orthogonal Transformation Layer [20.96380700548786]
Visual retrieval systems face challenges when updating models with improved representations due to misalignment between the old and new representations.
Prior research has explored backward-compatible training methods that enable direct comparisons between new and old representations without backfilling.
In this paper, we address how to balance backward compatibility against the performance of independently trained models.
arXiv Detail & Related papers (2024-08-16T15:05:28Z)
- Stationary Representations: Optimally Approximating Compatibility and Implications for Improved Model Replacements [20.96380700548786]
Learning compatible representations enables the interchangeable use of semantic features as models are updated over time.
This is particularly relevant in search and retrieval systems where it is crucial to avoid reprocessing of the gallery images with the updated model.
We show that the stationary representations learned by the $d$-Simplex fixed classifier optimally approximate compatible representations according to the two inequality constraints of compatibility's formal definition.
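For reference, the two inequality constraints that the formal definition of compatibility is usually stated with (following the backward-compatible training literature; the notation here is illustrative, not the paper's exact statement) are, for a distance d(·,·) in representation space:

```latex
% \phi_{new} is compatible with \phi_{old} if, for all pairs (i, j):
\begin{align*}
d\bigl(\phi_{\mathrm{new}}(x_i),\ \phi_{\mathrm{old}}(x_j)\bigr)
  &\le d\bigl(\phi_{\mathrm{old}}(x_i),\ \phi_{\mathrm{old}}(x_j)\bigr)
  && \forall\, (i,j)\ \text{with}\ y_i = y_j, \\
d\bigl(\phi_{\mathrm{new}}(x_i),\ \phi_{\mathrm{old}}(x_j)\bigr)
  &\ge d\bigl(\phi_{\mathrm{old}}(x_i),\ \phi_{\mathrm{old}}(x_j)\bigr)
  && \forall\, (i,j)\ \text{with}\ y_i \ne y_j.
\end{align*}
```

That is, querying old gallery features with new-model features should be at least as discriminative as querying them with old-model features.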
arXiv Detail & Related papers (2024-05-04T06:31:38Z)
- Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [65.57123249246358]
We propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL.
We train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces.
Our prototype complement strategy synthesizes old classes' new features without using any old class instance.
arXiv Detail & Related papers (2024-03-18T17:58:13Z) - Adaptive Cross Batch Normalization for Metric Learning [75.91093210956116]
Metric learning is a fundamental problem in computer vision.
We show that it is equally important to ensure that the accumulated embeddings are up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z) - Towards Universal Backward-Compatible Representation Learning [29.77801805854168]
Backward-compatible representation learning is introduced to support backfill-free model upgrades.
We first introduce a new problem of universal backward-compatible representation learning, covering all possible data splits in model upgrades.
We propose a simple yet effective method, dubbed Universal Backward-Compatible Training (UniBCT), with a novel structural prototype refinement algorithm.
arXiv Detail & Related papers (2022-03-03T09:23:51Z) - Subspace Regularizers for Few-Shot Class Incremental Learning [26.372024890126408]
We present a new family of subspace regularization schemes that encourage weight vectors for new classes to lie close to the subspace spanned by the weights of existing classes.
Our results show that simple geometric regularization of class representations offers an effective tool for continual learning.
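As a rough illustration of the idea (not the paper's exact loss; the QR-based projection and the loss weighting below are assumptions), such a regularizer can be written as the squared distance of each new class weight from the span of the old class weights:

```python
import torch


def subspace_regularizer(new_weights: torch.Tensor,
                         old_weights: torch.Tensor) -> torch.Tensor:
    """Mean squared distance of new class weight vectors from the subspace
    spanned by the old class weights.

    new_weights: (n_new, d); old_weights: (n_old, d), assumed full row rank.
    """
    # Orthonormal basis for span(old class weights) via reduced QR on W_old^T.
    q, _ = torch.linalg.qr(old_weights.t())   # q: (d, n_old)
    projected = new_weights @ q @ q.t()       # projection onto the span
    return (new_weights - projected).pow(2).sum(dim=1).mean()


# Hypothetical usage inside a training step, with lambda_reg an assumed
# hyperparameter:
#   loss = task_loss + lambda_reg * subspace_regularizer(w_new, w_old)
```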
arXiv Detail & Related papers (2021-10-13T22:19:53Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification aims to train models for new classes from only a limited number of labeled examples.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Memory-Efficient Incremental Learning Through Feature Adaptation [71.1449769528535]
We introduce an approach for incremental learning that preserves feature descriptors of training images from previously learned classes.
Keeping the much lower-dimensional feature embeddings of images reduces the memory footprint significantly.
Experimental results show that our method achieves state-of-the-art classification accuracy in incremental learning benchmarks.
arXiv Detail & Related papers (2020-04-01T21:16:05Z)
- Towards Backward-Compatible Representation Learning [86.39292571306395]
We propose a way to learn visual features that are compatible with previously computed ones even when they have different dimensions.
This enables visual search systems to bypass computing new features for all previously seen images when updating the embedding models.
We propose a framework to train embedding models, called backward-compatible training (BCT), as a first step towards backward-compatible representation learning.
arXiv Detail & Related papers (2020-03-26T14:34:09Z)