Contrastive Learning of Sentence Embeddings from Scratch
- URL: http://arxiv.org/abs/2305.15077v2
- Date: Tue, 24 Oct 2023 09:56:46 GMT
- Title: Contrastive Learning of Sentence Embeddings from Scratch
- Authors: Junlei Zhang, Zhenzhong Lan, Junxian He
- Abstract summary: We present SynCSE, a contrastive learning framework that trains sentence embeddings with synthesized data.
Specifically, we explore utilizing large language models to synthesize the required data samples for contrastive learning.
Experimental results on sentence similarity and reranking tasks indicate that both SynCSE-partial and SynCSE-scratch greatly outperform unsupervised baselines.
- Score: 26.002876719243464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning has been the dominant approach to train state-of-the-art
sentence embeddings. Previous studies have typically learned sentence
embeddings either through the use of human-annotated natural language inference
(NLI) data or via large-scale unlabeled sentences in an unsupervised manner.
However, even unlabeled data can be difficult to acquire in certain domains for
various reasons. To address these issues,
we present SynCSE, a contrastive learning framework that trains sentence
embeddings with synthesized data. Specifically, we explore utilizing large
language models to synthesize the required data samples for contrastive
learning, including (1) producing positive and negative annotations given
unlabeled sentences (SynCSE-partial), and (2) generating sentences along with
their corresponding annotations from scratch (SynCSE-scratch). Experimental
results on sentence similarity and reranking tasks indicate that both
SynCSE-partial and SynCSE-scratch greatly outperform unsupervised baselines,
and SynCSE-partial even achieves comparable performance to the supervised
models in most settings.
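As a rough illustration of the SynCSE-partial idea (not the paper's actual prompts or pipeline), the data-synthesis step can be pictured as prompting an LLM for a paraphrase (positive) and a contradiction (hard negative) of each unlabeled sentence. The `call_llm` function below is a hypothetical placeholder for whatever LLM API is used, and the prompt wording is illustrative:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: query an instruction-following LLM and return its reply."""
    raise NotImplementedError

def synthesize_pair(sentence: str) -> dict:
    """Sketch of SynCSE-partial-style annotation of one unlabeled sentence."""
    positive = call_llm(
        f"Rewrite the following sentence so that it keeps the same meaning:\n{sentence}"
    )
    hard_negative = call_llm(
        f"Write a sentence that contradicts the following sentence:\n{sentence}"
    )
    return {"anchor": sentence, "positive": positive, "hard_negative": hard_negative}
```

SynCSE-scratch would additionally ask the model to invent the anchor sentences themselves; the resulting triplets can then replace human-annotated NLI pairs in a standard supervised-style contrastive objective.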
Related papers
- DenoSent: A Denoising Objective for Self-Supervised Sentence Representation Learning [59.4644086610381]
We propose a novel denoising objective that works from a complementary, intra-sentence perspective.
By introducing both discrete and continuous noise, we generate noisy sentences and then train our model to restore them to their original form.
Our empirical evaluations demonstrate that this approach delivers competitive results on both semantic textual similarity (STS) and a wide range of transfer tasks.
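A minimal sketch of the corruption side of such a denoising objective, assuming discrete noise means token deletion/masking and continuous noise means Gaussian perturbation of embeddings (the noise types and rates here are illustrative, not DenoSent's exact choices); the model is then trained to reconstruct the original sentence from the corrupted input:

```python
import random
import torch

def add_discrete_noise(tokens, drop_prob=0.1, mask_prob=0.1, mask_token="[MASK]"):
    """Illustrative discrete corruption: randomly delete or mask tokens."""
    noisy = []
    for tok in tokens:
        r = random.random()
        if r < drop_prob:
            continue                      # token deletion
        elif r < drop_prob + mask_prob:
            noisy.append(mask_token)      # token masking
        else:
            noisy.append(tok)
    return noisy

def add_continuous_noise(embeddings: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Illustrative continuous corruption: add Gaussian noise to token embeddings."""
    return embeddings + sigma * torch.randn_like(embeddings)
```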
arXiv Detail & Related papers (2024-01-24T17:48:45Z)
- Improving Contrastive Learning of Sentence Embeddings with Focal-InfoNCE [13.494159547236425]
This study introduces an unsupervised contrastive learning framework that combines SimCSE with hard negative mining.
The proposed Focal-InfoNCE function introduces self-paced modulation terms into the contrastive objective, downweighting the loss associated with easy negatives and encouraging the model to focus on hard negatives.
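A minimal sketch of what such focal-style modulation of InfoNCE could look like, assuming in-batch negatives and cosine similarity; the modulation factor (1 - p)^gamma below is the standard focal-loss form and may differ from the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def focal_infonce(z1, z2, temperature=0.05, gamma=2.0):
    """Focal-style InfoNCE over in-batch negatives (illustrative, not the paper's exact loss).

    z1, z2: (batch, dim) embeddings of two views of the same sentences.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                 # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    p_correct = F.softmax(logits, dim=-1).diagonal()   # probability assigned to the true positive
    modulation = (1.0 - p_correct) ** gamma            # downweight pairs the model already gets right
    per_example = F.cross_entropy(logits, labels, reduction="none")
    return (modulation * per_example).mean()
```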
arXiv Detail & Related papers (2023-10-10T18:15:24Z)
- DebCSE: Rethinking Unsupervised Contrastive Sentence Embedding Learning in the Debiasing Perspective [1.351603931922027]
We argue that effectively eliminating the influence of various biases is crucial for learning high-quality sentence embeddings.
We propose a novel contrastive framework for sentence embedding, termed DebCSE, which can eliminate the impact of these biases.
arXiv Detail & Related papers (2023-09-14T02:43:34Z)
- Generate, Discriminate and Contrast: A Semi-Supervised Sentence Representation Learning Framework [68.04940365847543]
We propose a semi-supervised sentence embedding framework, GenSE, that effectively leverages large-scale unlabeled data.
Our method consists of three parts: 1) Generate: a generator/discriminator model is jointly trained to synthesize sentence pairs from an open-domain unlabeled corpus; 2) Discriminate: noisy sentence pairs are filtered out by the discriminator to acquire high-quality positive and negative sentence pairs; 3) Contrast: a prompt-based contrastive approach is presented for sentence representation learning with both annotated and synthesized data.
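The three stages can be pictured roughly as the pipeline below; `generate_pair`, `score_pair`, and `train_encoder` are hypothetical callables standing in for GenSE's generator, discriminator, and prompt-based contrastive trainer, and the filtering threshold is an assumed hyperparameter:

```python
def gense_pipeline(unlabeled_sentences, generate_pair, score_pair, train_encoder,
                   threshold=0.5):
    """Illustrative sketch of the three GenSE stages, not the paper's implementation."""
    # 1) Generate: synthesize candidate sentence pairs from unlabeled text.
    candidates = [generate_pair(s) for s in unlabeled_sentences]
    # 2) Discriminate: keep only pairs the discriminator considers reliable.
    filtered = [pair for pair in candidates if score_pair(pair) >= threshold]
    # 3) Contrast: train the sentence encoder on annotated plus synthesized pairs.
    return train_encoder(filtered)
```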
arXiv Detail & Related papers (2022-10-30T10:15:21Z)
- InfoCSE: Information-aggregated Contrastive Learning of Sentence Embeddings [61.77760317554826]
This paper proposes an information-aggregated contrastive learning framework for learning unsupervised sentence embeddings, termed InfoCSE.
We evaluate the proposed InfoCSE on several benchmark datasets with respect to the semantic textual similarity (STS) task.
Experimental results show that InfoCSE outperforms SimCSE by an average Spearman correlation of 2.60% on BERT-base, and 1.77% on BERT-large.
arXiv Detail & Related papers (2022-10-08T15:53:19Z)
- Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives [17.90820242798732]
Unsupervised contrastive learning methods still lag far behind their supervised counterparts.
We propose switch-case augmentation to flip the case of the first letter of randomly selected words in a sentence.
For negative samples, we sample hard negatives from the whole dataset based on a pre-trained language model.
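Switch-case augmentation itself is simple to sketch; in the illustrative version below, each word's first letter has its case flipped with an assumed probability `prob` (the selection scheme and rate in the paper may differ):

```python
import random

def switch_case_augment(sentence: str, prob: float = 0.15) -> str:
    """Flip the case of the first letter of randomly selected words (illustrative)."""
    words = []
    for w in sentence.split():
        if w and random.random() < prob:
            w = w[0].swapcase() + w[1:]
        words.append(w)
    return " ".join(words)

# e.g. "The cat sat on the mat" -> "the Cat sat on the mat" (varies with the random draw)
```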
arXiv Detail & Related papers (2022-06-06T09:46:12Z)
- DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings [51.274478128525686]
DiffCSE is an unsupervised contrastive learning framework for learning sentence embeddings.
Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods.
arXiv Detail & Related papers (2022-04-21T17:32:01Z)
- Does BERT really agree? Fine-grained Analysis of Lexical Dependence on a Syntactic Task [70.29624135819884]
We study the extent to which BERT is able to perform lexically-independent subject-verb number agreement (NA) on targeted syntactic templates.
Our results on nonce sentences suggest that the model generalizes well for simple templates, but fails to perform lexically-independent syntactic generalization when as little as one attractor is present.
arXiv Detail & Related papers (2022-04-14T11:33:15Z)
- SimCSE: Simple Contrastive Learning of Sentence Embeddings [10.33373737281907]
This paper presents SimCSE, a simple contrastive learning framework for sentence embeddings.
We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective.
We then incorporate annotated pairs from NLI datasets into contrastive learning by using "entailment" pairs as positives and "contradiction" pairs as hard negatives.
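A minimal sketch of the unsupervised SimCSE objective, assuming `encoder` is a sentence encoder kept in training mode so that two forward passes over the same batch differ only by their dropout masks; the supervised variant would instead pair each sentence with its NLI entailment (positive) and contradiction (hard negative):

```python
import torch
import torch.nn.functional as F

def unsup_simcse_loss(encoder, batch_sentences, temperature=0.05):
    """In-batch contrastive loss where the positive is the same sentence re-encoded
    under a different dropout mask (illustrative sketch)."""
    z1 = encoder(batch_sentences)               # first dropout "view"
    z2 = encoder(batch_sentences)               # second dropout "view"
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature          # similarities against all in-batch candidates
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)      # the matching index is the positive
```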
arXiv Detail & Related papers (2021-04-18T11:27:08Z)
- Syntactic Structure Distillation Pretraining For Bidirectional Encoders [49.483357228441434]
We introduce a knowledge distillation strategy for injecting syntactic biases into BERT pretraining.
We distill the approximate marginal distribution over words in context from the syntactic LM.
Our findings demonstrate the benefits of syntactic biases, even in representation learners that exploit large amounts of data.
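The distillation term can be sketched as a cross-entropy/KL objective that pulls the student's masked-word distribution toward the syntactic LM's distribution at each prediction position (shapes and weighting below are assumptions, not the paper's exact setup):

```python
import torch.nn.functional as F

def syntactic_distillation_loss(student_logits, teacher_probs):
    """Cross-entropy between the teacher's word distribution and the student's
    predictions, averaged over positions; equal to KL(teacher || student) up to
    a constant (illustrative sketch). Shapes assumed: (batch, positions, vocab)."""
    log_p_student = F.log_softmax(student_logits, dim=-1)
    return -(teacher_probs * log_p_student).sum(dim=-1).mean()
```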
arXiv Detail & Related papers (2020-05-27T16:44:01Z)