Sentence Embeddings using Supervised Contrastive Learning
- URL: http://arxiv.org/abs/2106.04791v1
- Date: Wed, 9 Jun 2021 03:30:29 GMT
- Title: Sentence Embeddings using Supervised Contrastive Learning
- Authors: Danqi Liao
- Abstract summary: We propose a new method to build sentence embeddings via supervised contrastive learning.
Our method fine-tunes pretrained BERT on SNLI data, incorporating both a supervised cross-entropy loss and a supervised contrastive loss.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sentence embeddings encode sentences in fixed dense vectors and have played
an important role in various NLP tasks and systems. Methods for building
sentence embeddings include unsupervised learning such as Quick-Thoughts and
supervised learning such as InferSent. With the success of pretrained NLP
models, recent research shows that fine-tuning pretrained BERT on SNLI and
Multi-NLI data creates state-of-the-art sentence embeddings, outperforming
previous sentence embedding methods on various evaluation benchmarks. In this
paper, we propose a new method to build sentence embeddings via supervised
contrastive learning. Specifically, our method fine-tunes pretrained BERT on
SNLI data, incorporating both a supervised cross-entropy loss and a supervised
contrastive loss. Compared with a baseline fine-tuned only with the supervised
cross-entropy loss, similar to the current state-of-the-art method SBERT, our
supervised contrastive method improves by 2.8% on average on Semantic Textual
Similarity (STS) benchmarks and by 1.05% on average on various sentence
transfer tasks.
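As a rough illustration of the objective described in the abstract, a supervised contrastive term treats sentences with the same label in a batch as positives and everything else as negatives. The snippet below is a minimal NumPy sketch, not the paper's implementation; the temperature value, batching, and the skipping of anchors without positives are assumptions.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over a batch: for each anchor sentence,
    same-label sentences are positives and all others are negatives.
    `embeddings` is an (N, d) array; rows are L2-normalized internally."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    n = len(labels)
    sim = emb @ emb.T / temperature        # pairwise scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)         # exclude self-comparisons
    # log-softmax over each anchor's similarities to all other sentences
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    total, anchors = 0.0, 0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if pos:                            # anchors with no positive are skipped
            total += -sum(log_prob[i, j] for j in pos) / len(pos)
            anchors += 1
    return total / max(anchors, 1)
```

During fine-tuning, this term would be combined with the cross-entropy classification loss on the NLI labels, e.g. `loss = ce_loss + lam * supcon_loss` with `lam` a tunable weight; the paper's exact combination is not reproduced here.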
Related papers
- Advancing Semantic Textual Similarity Modeling: A Regression Framework with Translated ReLU and Smooth K2 Loss [3.435381469869212]
This paper presents an innovative regression framework for Sentence-BERT STS tasks.
It proposes two simple yet effective loss functions: Translated ReLU and Smooth K2 Loss.
Experimental results demonstrate that our method achieves convincing performance across seven established STS benchmarks.
arXiv Detail & Related papers (2024-06-08T02:52:43Z)
- DenoSent: A Denoising Objective for Self-Supervised Sentence Representation Learning [59.4644086610381]
We propose a novel denoising objective that takes a different perspective, namely the intra-sentence perspective.
By introducing both discrete and continuous noise, we generate noisy sentences and then train our model to restore them to their original form.
Our empirical evaluations demonstrate that this approach delivers competitive results on both semantic textual similarity (STS) and a wide range of transfer tasks.
arXiv Detail & Related papers (2024-01-24T17:48:45Z)
- DebCSE: Rethinking Unsupervised Contrastive Sentence Embedding Learning in the Debiasing Perspective [1.351603931922027]
We argue that effectively eliminating the influence of various biases is crucial for learning high-quality sentence embeddings.
We propose a novel contrastive framework for sentence embedding, termed DebCSE, which can eliminate the impact of these biases.
arXiv Detail & Related papers (2023-09-14T02:43:34Z)
- RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank [54.854714257687334]
We propose a novel approach, RankCSE, for unsupervised sentence representation learning.
It incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
An extensive set of experiments is conducted on both semantic textual similarity (STS) and transfer (TR) tasks.
arXiv Detail & Related papers (2023-05-26T08:27:07Z)
- Alleviating Over-smoothing for Unsupervised Sentence Representation [96.19497378628594]
We present a Simple method named Self-Contrastive Learning (SSCL) to alleviate the over-smoothing issue.
Our proposed method is quite simple and can be easily extended to various state-of-the-art models for performance boosting.
arXiv Detail & Related papers (2023-05-09T11:00:02Z)
- A Novel Plagiarism Detection Approach Combining BERT-based Word Embedding, Attention-based LSTMs and an Improved Differential Evolution Algorithm [11.142354615369273]
We propose a novel method for detecting plagiarism based on attention mechanism-based long short-term memory (LSTM) and bidirectional encoder representations from transformers (BERT) word embedding.
BERT could be included in a downstream task and fine-tuned as a task-specific structure, while the trained BERT model is capable of detecting various linguistic characteristics.
arXiv Detail & Related papers (2023-05-03T18:26:47Z)
- Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives [17.90820242798732]
Unsupervised contrastive learning methods still lag far behind their supervised counterparts.
We propose switch-case augmentation to flip the case of the first letter of randomly selected words in a sentence.
For negative samples, we sample hard negatives from the whole dataset based on a pre-trained language model.
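The switch-case augmentation described here might be sketched as follows; the selection probability, whitespace tokenization, and handling of non-alphabetic words are assumptions, not details from the paper.

```python
import random

def switch_case_augment(sentence, p=0.15, rng=None):
    """Flip the case of the first letter of randomly selected words,
    producing a surface-level positive view of the same sentence."""
    rng = rng or random.Random()
    out = []
    for w in sentence.split():
        if w and w[0].isalpha() and rng.random() < p:
            first = w[0].lower() if w[0].isupper() else w[0].upper()
            w = first + w[1:]
        out.append(w)
    return " ".join(out)
```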
arXiv Detail & Related papers (2022-06-06T09:46:12Z)
- PromptBERT: Improving BERT Sentence Embeddings with Prompts [95.45347849834765]
We propose a prompt based sentence embeddings method which can reduce token embeddings biases and make the original BERT layers more effective.
We also propose a novel unsupervised training objective based on template denoising, which substantially narrows the performance gap between the supervised and unsupervised settings.
Our fine-tuned method outperforms the state-of-the-art method SimCSE in both unsupervised and supervised settings.
arXiv Detail & Related papers (2022-01-12T06:54:21Z)
- Phrase-level Active Learning for Neural Machine Translation [107.28450614074002]
We propose an active learning setting where we can spend a given budget on translating in-domain data.
We select both full sentences and individual phrases from unlabelled data in the new domain for routing to human translators.
In a German-English translation task, our active learning approach achieves consistent improvements over uncertainty-based sentence selection methods.
arXiv Detail & Related papers (2021-06-21T19:20:42Z)
- Unsupervised Bitext Mining and Translation via Self-trained Contextual Embeddings [51.47607125262885]
We describe an unsupervised method to create pseudo-parallel corpora for machine translation (MT) from unaligned text.
We use multilingual BERT to create source and target sentence embeddings for nearest-neighbor search and adapt the model via self-training.
We validate our technique by extracting parallel sentence pairs on the BUCC 2017 bitext mining task and observe up to a 24.5 point increase (absolute) in F1 scores over previous unsupervised methods.
arXiv Detail & Related papers (2020-10-15T14:04:03Z)
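The nearest-neighbor mining step in the last entry can be sketched with cosine similarity over sentence embeddings. This is a toy simplification: the actual method uses multilingual BERT embeddings and self-training, and the fixed similarity threshold here is an assumption standing in for its pair-selection criterion.

```python
import numpy as np

def mine_pairs(src_emb, tgt_emb, threshold=0.8):
    """Pair each source sentence with its nearest target sentence by
    cosine similarity, keeping only pairs above a similarity threshold."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                      # all-pairs cosine similarities
    pairs = []
    for i, row in enumerate(sim):
        j = int(np.argmax(row))            # nearest target for source i
        if row[j] >= threshold:
            pairs.append((i, j, float(row[j])))
    return pairs
```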
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.