Instance Smoothed Contrastive Learning for Unsupervised Sentence
Embedding
- URL: http://arxiv.org/abs/2305.07424v2
- Date: Thu, 18 May 2023 08:31:51 GMT
- Title: Instance Smoothed Contrastive Learning for Unsupervised Sentence
Embedding
- Authors: Hongliang He, Junlei Zhang, Zhenzhong Lan, Yue Zhang
- Abstract summary: We propose IS-CSE (instance smoothing contrastive sentence embedding) to smooth the boundaries of embeddings in the feature space.
We evaluate our method on standard semantic text similarity (STS) tasks and achieve average Spearman's correlations of 78.30%, 79.47%, 77.73%, and 79.42% with BERT-base, BERT-large, RoBERTa-base, and RoBERTa-large respectively.
- Score: 16.598732694215137
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contrastive learning-based methods, such as unsup-SimCSE, have achieved
state-of-the-art (SOTA) performances in learning unsupervised sentence
embeddings. However, in previous studies, each embedding used for contrastive
learning is derived from only one sentence instance; we call these embeddings
instance-level embeddings. In other words, each embedding is regarded as a
unique class of its own, which may hurt the generalization performance. In this
study, we propose IS-CSE (instance smoothing contrastive sentence embedding) to
smooth the boundaries of embeddings in the feature space. Specifically, we
retrieve embeddings from a dynamic memory buffer according to the semantic
similarity to get a positive embedding group. Then embeddings in the group are
aggregated by a self-attention operation to produce a smoothed instance
embedding for further analysis. We evaluate our method on standard semantic
text similarity (STS) tasks and achieve an average of 78.30%, 79.47%, 77.73%,
and 79.42% Spearman's correlation based on BERT-base, BERT-large,
RoBERTa-base, and RoBERTa-large respectively, a 2.05%, 1.06%, 1.16%, and 0.52%
improvement compared to unsup-SimCSE.
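Below is a minimal sketch of the smoothing step described in the abstract, assuming a simple top-k retrieval from the memory buffer and similarity-based attention weights; the function name and the hyper-parameters `k`, `tau`, and `gamma` are illustrative assumptions, not the paper's exact design.
```python
import torch
import torch.nn.functional as F

def smoothed_embedding(instance_emb, memory_bank, k=16, tau=0.05, gamma=0.9):
    """Build a smoothed positive for one sentence embedding.

    instance_emb: (d,) embedding of the current sentence.
    memory_bank:  (N, d) embeddings kept in the dynamic buffer.
    k, tau, gamma are illustrative hyper-parameters, not the paper's values.
    """
    # 1) retrieve the k most semantically similar embeddings from the buffer
    sims = F.cosine_similarity(instance_emb.unsqueeze(0), memory_bank, dim=-1)  # (N,)
    topk_sims, idx = sims.topk(k)
    group = memory_bank[idx]                                   # (k, d)

    # 2) aggregate the group with attention weights derived from similarity
    weights = F.softmax(topk_sims / tau, dim=-1)               # (k,)
    group_emb = (weights.unsqueeze(-1) * group).sum(dim=0)     # (d,)

    # 3) interpolate with the original instance embedding to smooth it
    return gamma * instance_emb + (1.0 - gamma) * group_emb
```
The smoothed embedding would then presumably replace the raw instance-level embedding as the positive in the usual contrastive (InfoNCE) objective, softening the decision boundary around each instance.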
Related papers
- Advancing Semantic Textual Similarity Modeling: A Regression Framework with Translated ReLU and Smooth K2 Loss [3.435381469869212]
This paper presents an innovative regression framework for Sentence-BERT STS tasks.
It proposes two simple yet effective loss functions: Translated ReLU and Smooth K2 Loss.
Experimental results demonstrate that our method achieves convincing performance across seven established STS benchmarks.
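The summary names the two losses but does not define them; the snippet below is only a hypothetical stand-in, assuming the framework regresses a predicted similarity onto graded STS labels with a margin-shifted ("translated") ReLU penalty and a smooth quadratic alternative. It is not the paper's actual formulation.
```python
import torch

def translated_relu_loss(pred, target, margin=0.1):
    # Hypothetical margin-shifted ("translated") ReLU penalty: absolute errors
    # below the margin are ignored, larger errors grow linearly.
    return torch.clamp((pred - target).abs() - margin, min=0.0).mean()

def smooth_quadratic_loss(pred, target):
    # Hypothetical smooth quadratic stand-in for the "Smooth K2" idea.
    return ((pred - target) ** 2).mean()
```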
arXiv Detail & Related papers (2024-06-08T02:52:43Z)
- BAL: Balancing Diversity and Novelty for Active Learning [53.289700543331925]
We introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data.
Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%.
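The adaptive sub-pool construction is not described in this summary; purely as a generic illustration of trading off diversity against uncertainty in active learning (not BAL itself), one might score candidates as below. The function name, `alpha`, and `budget` are hypothetical.
```python
import numpy as np

def select_batch(unlabeled_feats, uncertainty, labeled_feats, budget=32, alpha=0.5):
    """Pick a batch of unlabeled examples by mixing model uncertainty with
    distance to the already-labeled set (a generic sketch, not the BAL method)."""
    # diversity: distance of each candidate to its nearest labeled example
    dists = np.linalg.norm(
        unlabeled_feats[:, None, :] - labeled_feats[None, :, :], axis=-1
    )
    diversity = dists.min(axis=1)

    def normalize(x):                      # rescale both signals to [0, 1]
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    score = alpha * normalize(uncertainty) + (1 - alpha) * normalize(diversity)
    return np.argsort(-score)[:budget]     # indices of the selected candidates
```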
arXiv Detail & Related papers (2023-12-26T08:14:46Z)
- DebCSE: Rethinking Unsupervised Contrastive Sentence Embedding Learning in the Debiasing Perspective [1.351603931922027]
We argue that effectively eliminating the influence of various biases is crucial for learning high-quality sentence embeddings.
We propose a novel contrastive framework for sentence embedding, termed DebCSE, which can eliminate the impact of these biases.
arXiv Detail & Related papers (2023-09-14T02:43:34Z)
- Advancing Incremental Few-shot Semantic Segmentation via Semantic-guided Relation Alignment and Adaptation [98.51938442785179]
Incremental few-shot semantic segmentation aims to incrementally extend a semantic segmentation model to novel classes.
This task faces a severe semantic-aliasing issue between base and novel classes due to data imbalance.
We propose the Semantic-guided Relation Alignment and Adaptation (SRAA) method that fully considers the guidance of prior semantic information.
arXiv Detail & Related papers (2023-05-18T10:40:52Z)
- Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm for further exploring the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- InfoCSE: Information-aggregated Contrastive Learning of Sentence Embeddings [61.77760317554826]
This paper proposes an information-aggregated contrastive learning framework for learning unsupervised sentence embeddings, termed InfoCSE.
We evaluate the proposed InfoCSE on several benchmark datasets w.r.t. the semantic textual similarity (STS) task.
Experimental results show that InfoCSE outperforms SimCSE by an average Spearman correlation of 2.60% on BERT-base, and 1.77% on BERT-large.
arXiv Detail & Related papers (2022-10-08T15:53:19Z)
- Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives [17.90820242798732]
Unsupervised contrastive learning methods still lag far behind their supervised counterparts.
We propose switch-case augmentation to flip the case of the first letter of randomly selected words in a sentence.
For negative samples, we sample hard negatives from the whole dataset based on a pre-trained language model.
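The switch-case augmentation lends itself to a short sketch; the flip probability `p` below is an illustrative choice, not the paper's setting.
```python
import random

def switch_case_augment(sentence, p=0.15, seed=None):
    """Flip the case of the first letter of randomly selected words."""
    rng = random.Random(seed)
    out = []
    for w in sentence.split():
        if w and w[0].isalpha() and rng.random() < p:
            first = w[0].lower() if w[0].isupper() else w[0].upper()
            w = first + w[1:]
        out.append(w)
    return " ".join(out)
```
Per the paper's title, the case-flipped sentence presumably serves as an extra positive view in contrastive training, while the retrieved hard negatives populate the denominator of the contrastive objective.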
arXiv Detail & Related papers (2022-06-06T09:46:12Z)
- SimCSE: Simple Contrastive Learning of Sentence Embeddings [10.33373737281907]
This paper presents SimCSE, a simple contrastive learning framework for sentence embeddings.
We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective.
We then incorporate annotated pairs from NLI datasets into contrastive learning by using "entailment" pairs as positives and "contradiction" pairs as hard negatives.
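A minimal sketch of the unsupervised objective described above, assuming the encoder keeps dropout active so that two forward passes over the same batch yield two slightly different views; the `encoder` interface and temperature `tau` are illustrative assumptions.
```python
import torch
import torch.nn.functional as F

def unsup_simcse_loss(encoder, input_ids, attention_mask, tau=0.05):
    """InfoNCE over two dropout-perturbed encodings of the same sentences,
    using the other sentences in the batch as negatives."""
    z1 = encoder(input_ids, attention_mask)   # (B, d), first dropout mask
    z2 = encoder(input_ids, attention_mask)   # (B, d), second dropout mask

    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / tau  # (B, B)
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```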
arXiv Detail & Related papers (2021-04-18T11:27:08Z)
- On the Sentence Embeddings from Pre-trained Language Models [78.45172445684126]
In this paper, we argue that the semantic information in the BERT embeddings is not fully exploited.
We find that BERT always induces a non-smooth anisotropic semantic space of sentences, which harms its performance of semantic similarity.
We propose to transform the anisotropic sentence embedding distribution to a smooth and isotropic Gaussian distribution through normalizing flows that are learned with an unsupervised objective.
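Read this way, the proposal fits an invertible transform that maps sentence embeddings toward a standard Gaussian by maximizing the change-of-variables log-likelihood. The sketch below uses a generic RealNVP-style coupling layer as a stand-in; the actual flow architecture and training details in the paper differ.
```python
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One coupling layer: transform half of the dimensions conditioned on the
    other half; in practice, layers alternate which half is transformed."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                        # keep the scales bounded
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=-1), s.sum(dim=-1)  # output, log|det|

def flow_nll(flows, x):
    """Negative log-likelihood of embeddings x (B, d) under a standard Gaussian
    prior after passing through a list of coupling layers."""
    z, log_det = x, torch.zeros(x.size(0), device=x.device)
    for f in flows:
        z, ld = f(z)
        log_det = log_det + ld
    prior_ll = -0.5 * (z ** 2).sum(dim=-1) - 0.5 * z.size(-1) * math.log(2 * math.pi)
    return -(prior_ll + log_det).mean()
```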
arXiv Detail & Related papers (2020-11-02T13:14:57Z)
- Contrastive Clustering [57.71729650297379]
We propose Contrastive Clustering (CC) which explicitly performs the instance- and cluster-level contrastive learning.
In particular, CC achieves an NMI of 0.705 (0.431) on the CIFAR-10 (CIFAR-100) dataset, which is an up to 19% (39%) performance improvement compared with the best baseline.
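A simplified sketch of the cluster-level half of that idea: each column of the soft assignment matrix is treated as one cluster's representation and matched with the same cluster's column from a second augmented view. The full CC objective additionally includes an instance-level contrastive loss and an entropy term; `tau` is illustrative.
```python
import torch
import torch.nn.functional as F

def cluster_contrastive_loss(p1, p2, tau=1.0):
    """p1, p2: (B, K) soft cluster assignments for two augmented views of the
    same batch; columns (clusters) are contrasted across the two views."""
    c1 = F.normalize(p1.t(), dim=-1)   # (K, B) cluster representations, view 1
    c2 = F.normalize(p2.t(), dim=-1)   # (K, B) cluster representations, view 2
    sim = c1 @ c2.t() / tau            # (K, K) cluster-to-cluster similarities
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```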
arXiv Detail & Related papers (2020-09-21T08:54:40Z)