Learning Along the Arrow of Time: Hyperbolic Geometry for Backward-Compatible Representation Learning
- URL: http://arxiv.org/abs/2506.05826v1
- Date: Fri, 06 Jun 2025 07:53:40 GMT
- Title: Learning Along the Arrow of Time: Hyperbolic Geometry for Backward-Compatible Representation Learning
- Authors: Ngoc Bui, Menglin Yang, Runjin Chen, Leonardo Neves, Mingxuan Ju, Rex Ying, Neil Shah, Tong Zhao
- Abstract summary: Backward-compatible representation learning enables updated models to integrate seamlessly with existing ones, avoiding the need to reprocess stored data. We propose to switch perspectives to hyperbolic geometry, where we treat time as a natural axis for capturing a model's confidence and evolution. Experiments validate the superiority of the proposed method in achieving compatibility, paving the way for more resilient and adaptable machine learning systems.
- Score: 46.45124762458626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Backward-compatible representation learning enables updated models to integrate seamlessly with existing ones, avoiding the need to reprocess stored data. Despite recent advances, existing compatibility approaches in Euclidean space neglect the uncertainty in the old embedding model and force the new model to reconstruct outdated representations regardless of their quality, thereby hindering the learning process of the new model. In this paper, we propose to switch perspectives to hyperbolic geometry, where we treat time as a natural axis for capturing a model's confidence and evolution. By lifting embeddings into hyperbolic space and constraining updated embeddings to lie within the entailment cone of the old ones, we maintain generational consistency across models while accounting for uncertainties in the representations. To further enhance compatibility, we introduce a robust contrastive alignment loss that dynamically adjusts alignment weights based on the uncertainty of the old embeddings. Experiments validate the superiority of the proposed method in achieving compatibility, paving the way for more resilient and adaptable machine learning systems.
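The abstract names two concrete mechanisms: lifting embeddings onto a hyperboloid and constraining new embeddings to the entailment cone of the old ones, plus an alignment loss weighted by the old model's confidence. The sketch below illustrates both in PyTorch, assuming the Lorentz model of curvature -c and the standard entailment-cone construction (Ganea et al., 2018; Desai et al., 2023); the constants, function names, and the norm-based confidence proxy are illustrative assumptions, not the authors' implementation.

```python
import torch

C = 1.0    # curvature magnitude (assumed)
K = 0.1    # cone-boundary constant near the origin (assumed)
EPS = 1e-6

def lift_to_lorentz(v: torch.Tensor) -> torch.Tensor:
    """Exponential map at the origin: Euclidean vectors -> hyperboloid (spatial part).

    The time coordinate is recovered as sqrt(1/C + ||x||^2) when needed.
    """
    norm = v.norm(dim=-1, keepdim=True).clamp_min(EPS)
    return torch.sinh(C**0.5 * norm) * v / (C**0.5 * norm)

def time_coord(x_space: torch.Tensor) -> torch.Tensor:
    return torch.sqrt(1.0 / C + x_space.pow(2).sum(-1))

def entailment_violation(old_s: torch.Tensor, new_s: torch.Tensor) -> torch.Tensor:
    """Positive where the new embedding leaves the old embedding's entailment cone."""
    t_old, t_new = time_coord(old_s), time_coord(new_s)
    lip = (old_s * new_s).sum(-1) - t_old * t_new  # Lorentzian inner product
    denom = old_s.norm(dim=-1) * ((C * lip) ** 2 - 1).clamp_min(EPS).sqrt()
    # exterior angle of the new point relative to the old point's cone axis
    ext = torch.acos(((t_new + t_old * C * lip) / denom).clamp(-1 + EPS, 1 - EPS))
    # the half-aperture narrows as the old embedding moves away from the origin,
    # i.e. confident old points impose a tighter constraint
    aperture = torch.asin((2 * K / (C**0.5 * old_s.norm(dim=-1))).clamp(max=1 - EPS))
    return torch.relu(ext - aperture)

def uncertainty_weighted_alignment(old_s: torch.Tensor, new_s: torch.Tensor) -> torch.Tensor:
    # one plausible confidence proxy: distance of the old embedding from the
    # origin (far = confident); the paper's exact weighting may differ
    w = torch.tanh(old_s.norm(dim=-1))
    return (w * entailment_violation(old_s, new_s)).mean()
```

Reading hyperbolic norm as confidence is what makes the constraint selective: low-confidence old embeddings sit near the origin, get a wide cone and a small weight, and therefore barely restrict the new model.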
Related papers
- Continual Learning in Vision-Language Models via Aligned Model Merging [84.47520899851557]
We present a new perspective based on model merging to maintain stability while still retaining plasticity. To maximize the effectiveness of the merging process, we propose a simple mechanism that promotes learning aligned weights with previous ones.
arXiv Detail & Related papers (2025-05-30T20:52:21Z)
- Prototype Perturbation for Relaxing Alignment Constraints in Backward-Compatible Learning [43.8412077229831]
We develop two approaches for calculating the perturbations: Neighbor-Driven Prototype Perturbation (NDPP) and Optimization-Driven Prototype Perturbation (ODPP). Our approaches perform favorably against state-of-the-art BCL algorithms.
arXiv Detail & Related papers (2025-03-19T01:45:48Z)
- Boosting Alignment for Post-Unlearning Text-to-Image Generative Models [55.82190434534429]
Large-scale generative models have shown impressive image-generation capabilities, propelled by massive data. This often inadvertently leads to the generation of harmful or inappropriate content and raises copyright concerns. We propose a framework that seeks an optimal model update at each unlearning iteration, ensuring monotonic improvement on both objectives.
arXiv Detail & Related papers (2024-12-09T21:36:10Z)
- Backward-Compatible Aligned Representations via an Orthogonal Transformation Layer [20.96380700548786]
Visual retrieval systems face challenges when updating models with improved representations due to misalignment between the old and new representations.
Prior research has explored backward-compatible training methods that enable direct comparisons between new and old representations without backfilling.
In this paper, we address how to balance backward compatibility against the performance of independently trained models.
arXiv Detail & Related papers (2024-08-16T15:05:28Z)
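Since the entry above rests on a single idea, aligning feature spaces with an orthogonal (hence distance-preserving) map, a compact sketch may help. The class below is an illustrative assumption, not the paper's released code; it leans on PyTorch's built-in orthogonal parametrization to keep the alignment layer an exact isometry throughout training.

```python
import torch
from torch import nn
from torch.nn.utils.parametrizations import orthogonal

class OrthogonalAlign(nn.Module):
    """Maps new features into the old feature space without distorting geometry."""

    def __init__(self, dim: int):
        super().__init__()
        # the parametrization re-projects `weight` onto the orthogonal group
        # after every optimizer step
        self.proj = orthogonal(nn.Linear(dim, dim, bias=False))

    def forward(self, new_feats: torch.Tensor) -> torch.Tensor:
        # orthogonal maps are isometries: ||Wx - Wy|| == ||x - y||,
        # so old-gallery distances are preserved by construction
        return self.proj(new_feats)

# toy usage: regress transformed new embeddings onto stored old embeddings
align = OrthogonalAlign(dim=512)
loss = nn.functional.mse_loss(align(torch.randn(8, 512)), torch.randn(8, 512))
```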
- MixBCT: Towards Self-Adapting Backward-Compatible Training [66.52766344751635]
We propose MixBCT, a simple yet highly effective backward-compatible training method.
We conduct experiments on the large-scale face recognition datasets MS1Mv3 and IJB-C.
arXiv Detail & Related papers (2023-08-14T05:55:38Z)
- Class-Incremental Mixture of Gaussians for Deep Continual Learning [15.49323098362628]
We propose end-to-end incorporation of the mixture of Gaussians model into the continual learning framework.
We show that our model can effectively learn in memory-free scenarios with fixed extractors.
arXiv Detail & Related papers (2023-07-09T04:33:19Z)
- Neighborhood Consensus Contrastive Learning for Backward-Compatible Representation [46.86784621137665]
Backward-compatible representation is proposed to enable "new" features to be compatible with "old" features.
We propose a Neighborhood Consensus Contrastive Learning (NCCL) method, which learns backward-compatible representation from a neighborhood consensus perspective.
Our method ensures backward compatibility without impairing the accuracy of the new model.
arXiv Detail & Related papers (2021-08-07T05:50:47Z)
- Towards Backward-Compatible Representation Learning [86.39292571306395]
We propose a way to learn visual features that are compatible with previously computed ones even when they have different dimensions.
This enables visual search systems to bypass computing new features for all previously seen images when updating the embedding models.
We propose a framework to train embedding models, called backward-compatible training (BCT), as a first step towards backward compatible representation learning.
arXiv Detail & Related papers (2020-03-26T14:34:09Z)
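Since BCT is the reference point for most of the papers listed here, a short sketch of its central "influence loss" may be useful: the new backbone is trained with its own classifier while the frozen old classifier must still recognize the new embeddings. The function names, the optional projection for mismatched widths, and the weight lam are assumptions for illustration, not the paper's code.

```python
import torch
from torch import nn

def bct_loss(new_emb, labels, new_head, old_head, proj=None, lam=1.0):
    """New-model cross-entropy plus a BCT-style influence term."""
    ce = nn.functional.cross_entropy
    # if the new embedding is wider than the old one, project it first
    emb_for_old = proj(new_emb) if proj is not None else new_emb
    return ce(new_head(new_emb), labels) + lam * ce(old_head(emb_for_old), labels)

# toy usage: the old classifier head stays frozen
old_head = nn.Linear(256, 100).requires_grad_(False)
new_head = nn.Linear(512, 100)
proj = nn.Linear(512, 256, bias=False)
loss = bct_loss(torch.randn(8, 512), torch.randint(0, 100, (8,)),
                new_head, old_head, proj)
```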