eMargin: Revisiting Contrastive Learning with Margin-Based Separation
- URL: http://arxiv.org/abs/2507.14828v1
- Date: Sun, 20 Jul 2025 05:39:25 GMT
- Title: eMargin: Revisiting Contrastive Learning with Margin-Based Separation
- Authors: Abdul-Kazeem Shamba, Kerstin Bach, Gavin Taylor
- Abstract summary: We investigate the effect of introducing an adaptive margin into the contrastive loss function for time series representation learning. Our study evaluates the impact of this modification on clustering and classification performance on three benchmark datasets.
- Score: 6.086030037869592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We revisit previous contrastive learning frameworks to investigate the effect of introducing an adaptive margin into the contrastive loss function for time series representation learning. Specifically, we explore whether an adaptive margin (eMargin), adjusted according to a predefined similarity threshold, can improve the separation between adjacent but dissimilar time steps and thereby lead to better performance in downstream tasks. Our study evaluates the impact of this modification on clustering and classification performance on three benchmark datasets. Our findings, however, indicate that achieving high scores on unsupervised clustering metrics does not necessarily imply that the learned embeddings are meaningful or effective in downstream tasks. In particular, eMargin added to InfoNCE consistently outperforms state-of-the-art baselines on unsupervised clustering metrics, but struggles to achieve competitive results in downstream classification with linear probing. The source code is publicly available at https://github.com/sfi-norwai/eMargin.
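The abstract does not spell out the margin formulation itself. Below is a minimal sketch of how a threshold-gated adaptive margin could be folded into a temporal InfoNCE objective; the function name, the linear margin rule, and every hyperparameter are illustrative assumptions, not the authors' implementation (the actual code is in the linked repository).

```python
import torch
import torch.nn.functional as F

def adaptive_margin_infonce(z: torch.Tensor,
                            temperature: float = 0.1,
                            threshold: float = 0.5,
                            scale: float = 0.2) -> torch.Tensor:
    # z: (T, D) embeddings of T consecutive time steps from one series.
    # Adjacent steps (t, t+1) act as positive pairs; every other step in
    # the window serves as a negative, as in temporal InfoNCE.
    z = F.normalize(z, dim=-1)
    sim = z @ z.T                                  # (T, T) cosine similarities
    pos_sim = sim.diagonal(offset=1)               # sim(t, t+1), shape (T-1,)
    # Illustrative adaptive rule (an assumption): the further an adjacent
    # pair falls below the similarity threshold, the larger its margin,
    # tightening the decision boundary pos_sim - margin > neg_sim.
    margin = scale * torch.clamp(threshold - pos_sim, min=0.0)
    logits = sim / temperature
    idx = torch.arange(z.size(0) - 1, device=z.device)
    logits[idx, idx + 1] = (pos_sim - margin) / temperature
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(mask, float('-inf'))  # a step is not its own negative
    return F.cross_entropy(logits[:-1], idx + 1)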
Related papers
- ScoreMix: Improving Face Recognition via Score Composition in Diffusion Generators [0.0]
We propose ScoreMix, a novel yet simple data augmentation strategy to enhance discriminator performance. We generate challenging synthetic samples that significantly improve discriminative capabilities in all studied benchmarks.
arXiv Detail & Related papers (2025-06-11T23:06:44Z)
- Benign Overfitting and the Geometry of the Ridge Regression Solution in Binary Classification [75.01389991485098]
We show that ridge regression has qualitatively different behavior depending on the scale of the cluster mean vector. In regimes where the scale is very large, the conditions that allow for benign overfitting turn out to be the same as those for the regression task.
arXiv Detail & Related papers (2025-03-11T01:45:42Z)
- Navigating Semantic Drift in Task-Agnostic Class-Incremental Learning [51.177789437682954]
Class-incremental learning (CIL) seeks to enable a model to sequentially learn new classes while retaining knowledge of previously learned ones. Balancing flexibility and stability remains a significant challenge, particularly when the task ID is unknown. We propose a novel semantic drift calibration method that incorporates mean shift compensation and covariance calibration.
arXiv Detail & Related papers (2025-02-11T13:57:30Z)
- Rank Supervised Contrastive Learning for Time Series Classification [17.302643963704643]
We present Rank Supervised Contrastive Learning (RankSCL) to perform time series classification.
RankSCL augments raw data in a targeted way in the embedding space.
A novel rank loss is developed to assign different weights for different levels of positive samples.
arXiv Detail & Related papers (2024-01-31T18:29:10Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- JointMatch: A Unified Approach for Diverse and Collaborative Pseudo-Labeling to Semi-Supervised Text Classification [65.268245109828]
Semi-supervised text classification (SSTC) has gained increasing attention due to its ability to leverage unlabeled data.
Existing approaches based on pseudo-labeling suffer from the issues of pseudo-label bias and error accumulation.
We propose JointMatch, a holistic approach for SSTC that addresses these challenges by unifying ideas from recent semi-supervised learning.
arXiv Detail & Related papers (2023-10-23T05:43:35Z)
- Bridging the Gap: Learning Pace Synchronization for Open-World Semi-Supervised Learning [44.91863420044712]
In open-world semi-supervised learning, a machine learning model is tasked with uncovering novel categories from unlabeled data.
We introduce 1) the adaptive synchronizing marginal loss which imposes class-specific negative margins to alleviate the model bias towards seen classes, and 2) the pseudo-label contrastive clustering which exploits pseudo-labels predicted by the model to group unlabeled data from the same category together.
Our method balances the learning pace between seen and novel classes, achieving a remarkable 3% average accuracy increase on the ImageNet dataset.
arXiv Detail & Related papers (2023-09-21T09:44:39Z)
- Center Contrastive Loss for Metric Learning [8.433000039153407]
We propose a novel metric learning function called Center Contrastive Loss.
It maintains a class-wise center bank and compares the category centers with the query data points using a contrastive loss.
The proposed loss combines the advantages of both contrastive and classification methods.
arXiv Detail & Related papers (2023-08-01T11:22:51Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining. Our method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
- Contrastive Test-Time Adaptation [83.73506803142693]
We propose a novel way to leverage self-supervised contrastive learning to facilitate target feature learning.
We produce pseudo labels online and refine them via soft voting among their nearest neighbors in the target feature space (a sketch of this step follows the list).
Our method, AdaContrast, achieves state-of-the-art performance on major benchmarks.
arXiv Detail & Related papers (2022-04-21T19:17:22Z)
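The soft-voting refinement described in the AdaContrast entry above admits a compact sketch. The memory-bank layout, function name, and parameters below are assumptions for illustration, not the authors' API.

```python
import torch
import torch.nn.functional as F

def refine_pseudo_labels(feats: torch.Tensor,
                         bank_feats: torch.Tensor,
                         bank_probs: torch.Tensor,
                         k: int = 5) -> torch.Tensor:
    # feats:      (B, D) features of the current target batch.
    # bank_feats: (N, D) memory bank of recent target features.
    # bank_probs: (N, C) class probabilities stored alongside the bank.
    # Each sample's pseudo-label is the argmax of the averaged
    # ("soft-voted") probabilities of its k nearest bank neighbours.
    feats = F.normalize(feats, dim=-1)
    bank_feats = F.normalize(bank_feats, dim=-1)
    sim = feats @ bank_feats.T                    # (B, N) cosine similarity
    _, nn_idx = sim.topk(k, dim=-1)               # k nearest neighbours per sample
    neighbour_probs = bank_probs[nn_idx]          # (B, k, C)
    return neighbour_probs.mean(dim=1).argmax(dim=-1)  # refined pseudo-labels
```

A bank like this would typically be updated with momentum-encoded features of recent batches; that bookkeeping is omitted here.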