Improving Contrastive Learning of Sentence Embeddings with Focal-InfoNCE
- URL: http://arxiv.org/abs/2310.06918v2
- Date: Fri, 20 Oct 2023 19:39:40 GMT
- Title: Improving Contrastive Learning of Sentence Embeddings with Focal-InfoNCE
- Authors: Pengyue Hou, Xingyu Li
- Abstract summary: This study introduces an unsupervised contrastive learning framework that combines SimCSE with hard negative mining.
The proposed focal-InfoNCE function introduces self-paced modulation terms in the contrastive objective, downweighting the loss associated with easy negatives and encouraging the model to focus on hard negatives.
- Score: 13.494159547236425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent success of SimCSE has greatly advanced state-of-the-art sentence
representations. However, the original formulation of SimCSE does not fully
exploit the potential of hard negative samples in contrastive learning. This
study introduces an unsupervised contrastive learning framework that combines
SimCSE with hard negative mining, aiming to enhance the quality of sentence
embeddings. The proposed focal-InfoNCE function introduces self-paced
modulation terms in the contrastive objective, downweighting the loss
associated with easy negatives and encouraging the model to focus on hard
negatives. Experimentation on various STS benchmarks shows that our method
improves sentence embeddings in terms of Spearman's correlation and
representation alignment and uniformity.
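The abstract describes focal-InfoNCE as InfoNCE with per-negative modulation terms that downweight easy negatives. The paper's exact modulation function is not reproduced here, so the weight `w_i = ((1 + s_i) / 2) ** gamma` below is an illustrative assumption: it maps a cosine similarity `s_i` in [-1, 1] to a weight in [0, 1] that grows with negative hardness. A minimal NumPy sketch for a single anchor:

```python
import numpy as np

def infonce(sim_pos, sim_negs, tau=0.05, weights=None):
    """InfoNCE loss for one anchor, given cosine similarities.

    sim_pos:  similarity to the positive pair (scalar)
    sim_negs: similarities to the in-batch negatives (array-like)
    weights:  optional per-negative modulation terms (default: all 1,
              which recovers the standard InfoNCE objective)
    """
    sim_negs = np.asarray(sim_negs, dtype=float)
    if weights is None:
        weights = np.ones_like(sim_negs)
    pos = np.exp(sim_pos / tau)
    neg = np.sum(weights * np.exp(sim_negs / tau))
    return -np.log(pos / (pos + neg))

def focal_infonce(sim_pos, sim_negs, tau=0.05, gamma=2.0):
    """Focal-style InfoNCE: negatives are re-weighted so that easy
    negatives (low similarity to the anchor) contribute little and
    hard negatives (high similarity) dominate the loss.

    The modulation w_i = ((1 + s_i) / 2) ** gamma is a hypothetical
    choice for illustration, not the paper's exact formulation.
    """
    sim_negs = np.asarray(sim_negs, dtype=float)
    weights = ((1.0 + sim_negs) / 2.0) ** gamma
    return infonce(sim_pos, sim_negs, tau=tau, weights=weights)
```

Because each weight is at most 1, this focal variant never exceeds the plain InfoNCE loss on the same batch, and the gap is largest when the batch is dominated by easy negatives, which is exactly the regime the modulation is meant to suppress.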
Related papers
- KDMCSE: Knowledge Distillation Multimodal Sentence Embeddings with Adaptive Angular margin Contrastive Learning [31.139620652818838]
We propose KDMCSE, a novel approach that enhances the discrimination and generalizability of multimodal representation.
We also introduce a new contrastive objective, AdapACSE, that enhances the discriminative representation by strengthening the margin within the angular space.
arXiv Detail & Related papers (2024-03-26T08:32:39Z)
- Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by huge margins on the standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z)
- Sparse Contrastive Learning of Sentence Embeddings [10.251604958122506]
SimCSE has shown the feasibility of contrastive learning in training sentence embeddings.
Prior studies have shown that dense models could contain harmful parameters that affect the model performance.
We propose parameter sparsification, where alignment and uniformity scores are used to measure the contribution of each parameter to the overall quality of sentence embeddings.
arXiv Detail & Related papers (2023-11-07T10:54:45Z)
- DebCSE: Rethinking Unsupervised Contrastive Sentence Embedding Learning in the Debiasing Perspective [1.351603931922027]
We argue that effectively eliminating the influence of various biases is crucial for learning high-quality sentence embeddings.
We propose a novel contrastive framework for sentence embedding, termed DebCSE, which can eliminate the impact of these biases.
arXiv Detail & Related papers (2023-09-14T02:43:34Z)
- Identical and Fraternal Twins: Fine-Grained Semantic Contrastive Learning of Sentence Representations [6.265789210037749]
We introduce a novel Identical and Fraternal Twins of Contrastive Learning framework, capable of simultaneously adapting to various positive pairs generated by different augmentation techniques.
We also present proof-of-concept experiments combined with the contrastive objective to prove the validity of the proposed Twins Loss.
arXiv Detail & Related papers (2023-07-20T15:02:42Z)
- Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning [62.40718385934608]
We introduce a novel Pseudo-Negative Regularization (PNR) framework for effective continual self-supervised learning (CSSL).
Our PNR leverages pseudo-negatives obtained through model-based augmentation so that newly learned representations do not contradict what has been learned in the past.
arXiv Detail & Related papers (2023-06-08T10:59:35Z)
- Alleviating Over-smoothing for Unsupervised Sentence Representation [96.19497378628594]
We present a Simple method named Self-Contrastive Learning (SSCL) to alleviate the over-smoothing issue.
Our proposed method is quite simple and can be easily extended to various state-of-the-art models for performance boosting.
arXiv Detail & Related papers (2023-05-09T11:00:02Z)
- Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives [17.90820242798732]
Unsupervised contrastive learning methods still lag far behind their supervised counterparts.
We propose switch-case augmentation to flip the case of the first letter of randomly selected words in a sentence.
For negative samples, we sample hard negatives from the whole dataset based on a pre-trained language model.
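The switch-case augmentation described above, flipping the case of the first letter of randomly selected words, is simple to sketch. The per-word selection probability `p` below is a hypothetical default, not the paper's setting:

```python
import random

def switch_case_augment(sentence, p=0.15, seed=None):
    """Flip the case of the first letter of randomly selected words.

    p is the per-word selection probability (a hypothetical default,
    not the paper's reported setting). A seeded generator makes the
    augmentation reproducible.
    """
    rng = random.Random(seed)
    out = []
    for word in sentence.split():
        # Only flip words that start with a letter and are selected.
        if word and word[0].isalpha() and rng.random() < p:
            out.append(word[0].swapcase() + word[1:])
        else:
            out.append(word)
    return " ".join(out)
```

With `p=1.0` every word is flipped, e.g. `switch_case_augment("The quick brown Fox", p=1.0)` returns `"the Quick Brown fox"`; the surface form changes while the meaning is preserved, which is what makes the augmented sentence usable as a positive pair.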
arXiv Detail & Related papers (2022-06-06T09:46:12Z)
- DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings [51.274478128525686]
DiffCSE is an unsupervised contrastive learning framework for learning sentence embeddings.
Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods.
arXiv Detail & Related papers (2022-04-21T17:32:01Z)
- Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection for self-supervised contrastive learning.
During contrastive learning, we discuss two strategies to explicitly remove the detected false negatives.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks within a limited compute.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
- Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning [94.35586521144117]
We investigate whether applying contrastive learning to fine-tuning would bring further benefits.
We propose Contrast-regularized tuning (Core-tuning), a novel approach for fine-tuning contrastive self-supervised visual models.
arXiv Detail & Related papers (2021-02-12T16:31:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.