TNCSE: Tensor's Norm Constraints for Unsupervised Contrastive Learning of Sentence Embeddings
- URL: http://arxiv.org/abs/2503.12739v1
- Date: Mon, 17 Mar 2025 02:14:42 GMT
- Title: TNCSE: Tensor's Norm Constraints for Unsupervised Contrastive Learning of Sentence Embeddings
- Authors: Tianyu Zong, Bingkang Shi, Hongzhu Yi, Jungang Xu
- Abstract summary: We propose a new sentence embedding representation framework, TNCSE. We evaluate it on seven semantic textual similarity tasks, and the results show that TNCSE and its derived models are the current state-of-the-art approach.
- Score: 4.62170384991303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised sentence embedding representation has become a hot research topic in natural language processing. As a tensor, a sentence embedding has two critical properties: direction and norm. Existing works have been limited to constraining only the direction of the samples' representations while ignoring the features of their norms (module lengths). To address this issue, we propose a new training objective that optimizes unsupervised contrastive learning by constraining the norm features between positive samples. We combine this Tensor's Norm Constraints training objective with ensemble learning to propose a new sentence embedding representation framework, TNCSE. We evaluate it on seven semantic textual similarity (STS) tasks, and the results show that TNCSE and its derived models are the current state-of-the-art approach; in addition, we conduct extensive zero-shot evaluations, and the results show that TNCSE outperforms the other baselines.
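To make the norm-constraint idea concrete, below is a minimal PyTorch sketch of a contrastive objective with an added norm term. The specific penalty (the gap between the L2 norms of a positive pair), the weight `lam`, and all function names are illustrative assumptions, not TNCSE's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.05):
    """Standard unsupervised InfoNCE loss: each sentence's second view
    is its positive; all other in-batch sentences are negatives."""
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)

def norm_constraint(z1, z2):
    """Illustrative norm term: cosine similarity is invariant to vector
    length, so we additionally pull the L2 norms of positive pairs
    together. (An assumption; the exact TNCSE objective is in the paper.)"""
    return (z1.norm(dim=-1) - z2.norm(dim=-1)).abs().mean()

def tncse_style_loss(z1, z2, lam=0.1, tau=0.05):
    # Direction is constrained by InfoNCE, norm by the extra term.
    return info_nce(z1, z2, tau) + lam * norm_constraint(z1, z2)
```

The point of the second term is that cosine-based InfoNCE is norm-invariant, so without it the norms of positive pairs are left entirely unconstrained.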
Related papers
- Tuning-Free Personalized Alignment via Trial-Error-Explain In-Context Learning [74.56097953187994]
We present Trial-Error-Explain In-Context Learning (TICL), a tuning-free method that personalizes language models for text generation tasks. TICL iteratively expands an in-context learning prompt via a trial-error-explain process, adding model-generated negative samples and explanations. TICL achieves up to 91.5% against the previous state-of-the-art and outperforms competitive tuning-free baselines for personalized alignment tasks.
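As a rough illustration of the trial-error-explain loop, here is a hypothetical Python sketch; `generate`, `judge`, and `explain` are invented placeholders for a model call, an acceptance check, and an explanation step, not TICL's actual API.

```python
def ticl_personalize(base_prompt, task_input, generate, judge, explain, max_rounds=3):
    """Hypothetical trial-error-explain loop: each round appends the
    model's failed attempt plus an explanation of the failure to the
    in-context prompt; no parameters are ever updated."""
    prompt = base_prompt
    for _ in range(max_rounds):
        attempt = generate(prompt, task_input)        # trial
        if judge(attempt):                            # good enough: stop
            return attempt, prompt
        reason = explain(attempt)                     # error -> explanation
        prompt += ("\nNegative example:\n" + attempt +
                   "\nWhy it misses the target style:\n" + reason + "\n")
    return generate(prompt, task_input), prompt
```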
arXiv Detail & Related papers (2025-02-13T05:20:21Z)
- ACTRESS: Active Retraining for Semi-supervised Visual Grounding [52.08834188447851]
A previous study, RefTeacher, makes the first attempt to tackle this task by adopting the teacher-student framework to provide pseudo confidence supervision and attention-based supervision.
This approach is incompatible with current state-of-the-art visual grounding models, which follow the Transformer-based pipeline.
Our paper proposes the ACTive REtraining approach for Semi-Supervised Visual Grounding, abbreviated as ACTRESS.
arXiv Detail & Related papers (2024-07-03T16:33:31Z)
- DenoSent: A Denoising Objective for Self-Supervised Sentence Representation Learning [59.4644086610381]
We propose a novel denoising objective that approaches the problem from a different angle, namely the intra-sentence perspective.
By introducing both discrete and continuous noise, we generate noisy sentences and then train our model to restore them to their original form.
Our empirical evaluations demonstrate that this approach delivers competitive results on both semantic textual similarity (STS) and a wide range of transfer tasks.
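As a sketch of what "discrete and continuous noise" might look like in practice, the snippet below corrupts token sequences by deletion/replacement and perturbs embeddings with Gaussian noise; the probabilities and the corruption scheme are assumptions, not DenoSent's exact recipe.

```python
import random
import torch

def add_discrete_noise(tokens, vocab, p=0.15):
    """Discrete noise (assumed scheme): randomly delete or replace tokens."""
    noisy = []
    for tok in tokens:
        r = random.random()
        if r < p / 2:
            continue                                # deletion
        elif r < p:
            noisy.append(random.choice(vocab))      # replacement
        else:
            noisy.append(tok)
    return noisy

def add_continuous_noise(embeddings, sigma=0.01):
    """Continuous noise: small Gaussian perturbation of token embeddings."""
    return embeddings + sigma * torch.randn_like(embeddings)
```

The model is then trained to restore the original sentence from either kind of corrupted input.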
arXiv Detail & Related papers (2024-01-24T17:48:45Z)
- CoT-BERT: Enhancing Unsupervised Sentence Representation through Chain-of-Thought [3.0566617373924325]
This paper presents CoT-BERT, an innovative method that harnesses the progressive thinking of Chain-of-Thought reasoning.
We develop an advanced contrastive learning loss function and propose a novel template denoising strategy.
arXiv Detail & Related papers (2023-09-20T08:42:06Z)
- Alleviating Over-smoothing for Unsupervised Sentence Representation [96.19497378628594]
We present a Simple method named Self-Contrastive Learning (SSCL) to alleviate the over-smoothing issue.
Our proposed method is quite simple and can be easily extended to various state-of-the-art models for performance boosting.
arXiv Detail & Related papers (2023-05-09T11:00:02Z)
- Sentence Representation Learning with Generative Objective rather than Contrastive Objective [86.01683892956144]
We propose a novel generative self-supervised learning objective based on phrase reconstruction.
Our generative learning achieves a sufficiently strong performance improvement and outperforms the current state-of-the-art contrastive methods.
arXiv Detail & Related papers (2022-10-16T07:47:46Z)
- Supporting Context Monotonicity Abstractions in Neural NLI Models [2.624902795082451]
In certain NLI problems, the entailment label depends only on the context monotonicity and the relation between the substituted concepts.
We introduce a sound and complete simplified monotonicity logic formalism which describes our treatment of contexts as abstract units.
Using the notions in our formalism, we adapt targeted challenge sets to investigate whether an intermediate context monotonicity classification task can aid NLI models' performance.
arXiv Detail & Related papers (2021-05-17T16:43:43Z)
- SimCSE: Simple Contrastive Learning of Sentence Embeddings [10.33373737281907]
This paper presents SimCSE, a simple contrastive learning framework for sentence embeddings.
We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective.
We then incorporate annotated pairs from NLI datasets into contrastive learning by using "entailment" pairs as positives and "contradiction" pairs as hard negatives.
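The unsupervised objective is compact enough to sketch: encoding the same batch twice lets the encoder's own dropout produce two slightly different views, which serve as positive pairs. A minimal PyTorch sketch, assuming `encoder` returns pooled sentence embeddings (illustrative, not the reference implementation):

```python
import torch
import torch.nn.functional as F

def unsup_simcse_loss(encoder, input_ids, attention_mask, tau=0.05):
    """Unsupervised SimCSE-style loss: two forward passes of the same
    batch differ only in their dropout masks, and each sentence's
    second view is its positive in an InfoNCE objective."""
    z1 = encoder(input_ids, attention_mask)  # first dropout mask
    z2 = encoder(input_ids, attention_mask)  # second dropout mask
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)
```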
arXiv Detail & Related papers (2021-04-18T11:27:08Z)
- SLM: Learning a Discourse Language Representation with Sentence Unshuffling [53.42814722621715]
We introduce Sentence-level Language Modeling, a new pre-training objective for learning a discourse language representation.
We show that this pre-training objective improves the performance of the original BERT by large margins.
arXiv Detail & Related papers (2020-10-30T13:33:41Z)
- Self-Supervised Contrastive Learning for Unsupervised Phoneme Segmentation [37.054709598792165]
The model is a convolutional neural network that operates directly on the raw waveform.
It is optimized to identify spectral changes in the signal using the Noise-Contrastive Estimation principle.
At test time, a peak detection algorithm is applied over the model outputs to produce the final boundaries.
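That test-time step is generic enough to sketch; assuming the model emits a per-frame boundary score, SciPy's peak detector recovers boundary times (the frame rate and prominence threshold here are placeholder values, not the paper's settings):

```python
import numpy as np
from scipy.signal import find_peaks

def boundaries_from_scores(frame_scores, frame_rate_hz=100, prominence=0.05):
    """Pick local maxima in per-frame scores as phoneme boundaries
    and convert frame indices to seconds."""
    peaks, _ = find_peaks(np.asarray(frame_scores), prominence=prominence)
    return peaks / frame_rate_hz
```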
arXiv Detail & Related papers (2020-07-27T12:10:21Z)