CLLD: Contrastive Learning with Label Distance for Text Classification
- URL: http://arxiv.org/abs/2110.13656v2
- Date: Thu, 28 Oct 2021 09:42:01 GMT
- Title: CLLD: Contrastive Learning with Label Distance for Text Classification
- Authors: Jinhe Lan, Qingyuan Zhan, Chenhao Jiang, Kunping Yuan, Desheng Wang
- Abstract summary: We propose Contrastive Learning with Label Distance (CLLD) for learning contrastive classes.
CLLD preserves flexibility over the subtle differences that lead to different label assignments.
Our experiments suggest that the learned label distance relieves the adversarial nature of closely related classes.
- Score: 0.6299766708197883
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Existing pre-trained models have achieved state-of-the-art performance on
various text classification tasks and have proven useful for learning universal
language representations. However, even advanced pre-trained models cannot
effectively distinguish the subtle semantic discrepancies between similar texts,
which greatly degrades performance on hard-to-distinguish classes. To address
this problem, we propose a novel Contrastive Learning with Label Distance (CLLD)
method. Inspired by recent advances in contrastive learning, we design a
classification method that incorporates label distance for learning contrastive
classes. CLLD preserves flexibility over the subtle differences that lead to
different label assignments, while generating distinct representations for
classes that are similar to one another. Extensive experiments on public
benchmarks and internal datasets demonstrate that our method improves the
performance of pre-trained models on classification tasks. Importantly, our
experiments suggest that the learned label distance relieves the adversarial
nature of closely related classes.
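The abstract does not spell out the CLLD objective. As a rough illustration only, the sketch below implements a supervised contrastive loss whose negative pairs are re-weighted by a precomputed label-distance matrix, so that easily confused class pairs are pushed apart more strongly. The function name, the `label_dist` input, and the weighting scheme are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F


def label_distance_contrastive_loss(embeddings, labels, label_dist, temperature=0.1):
    """Supervised contrastive loss with label-distance-weighted negatives (sketch).

    embeddings: (batch, dim) sentence representations, e.g. BERT [CLS] vectors
    labels:     (batch,) integer class ids
    label_dist: (num_classes, num_classes) non-negative weights; larger values
                push the corresponding class pair further apart (hypothetical input)
    """
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.t() / temperature                       # pairwise similarities
    batch = labels.size(0)

    eye = torch.eye(batch, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    pos_f = pos_mask.float()

    # Weight each negative pair by the distance between its two labels, so that
    # confusable ("adversarial") class pairs contribute more repulsion.
    neg_weight = label_dist[labels][:, labels].float()
    neg_weight = neg_weight.masked_fill(pos_mask | eye, 0.0)

    exp_sim = torch.exp(sim)
    denom = (exp_sim * neg_weight).sum(1) + (exp_sim * pos_f).sum(1)

    log_prob = sim - torch.log(denom + 1e-12).unsqueeze(1)
    pos_count = pos_f.sum(1).clamp(min=1.0)
    return -((log_prob * pos_f).sum(1) / pos_count).mean()
```

In this framing, the distance matrix would encode how confusable two classes are (for instance, derived from a confusion matrix or from label embeddings); here it is simply taken as a given input.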
Related papers
- Enhancing Visual Classification using Comparative Descriptors [13.094102298155736] (2024-11-08T06:28:02Z)
  We introduce a novel concept of comparative descriptors. These descriptors emphasize the unique features of a target class against its most similar classes, enhancing differentiation. An additional filtering process ensures that these descriptors are closer to the image embeddings in the CLIP space.
- Simple-Sampling and Hard-Mixup with Prototypes to Rebalance Contrastive Learning for Text Classification [11.072083437769093] (2024-05-19T11:33:49Z)
  We propose a novel model named SharpReCL for imbalanced text classification tasks. Our model even outperforms popular large language models across several datasets.
- Improving Self-training for Cross-lingual Named Entity Recognition with Contrastive and Prototype Learning [80.08139343603956] (2023-05-23T02:52:16Z)
  In cross-lingual named entity recognition, self-training is commonly used to bridge the linguistic gap. In this work, we aim to improve self-training for cross-lingual NER by combining representation learning and pseudo-label refinement. Our proposed method, ContProto, mainly comprises two components: (1) contrastive self-training and (2) prototype-based pseudo-labeling.
- Learning Context-aware Classifier for Semantic Segmentation [88.88198210948426] (2023-03-21T07:00:35Z)
  In this paper, contextual hints are exploited via learning a context-aware classifier. Our method is model-agnostic and can be easily applied to generic segmentation models. With only negligible additional parameters and +2% inference time, a decent performance gain is achieved on both small and large models.
- Class Enhancement Losses with Pseudo Labels for Zero-shot Semantic Segmentation [40.09476732999614] (2023-01-18T06:55:02Z)
  Mask proposal models have significantly improved the performance of zero-shot semantic segmentation. The use of a 'background' embedding during training in these methods is problematic, as the resulting model tends to over-learn and assign all unseen classes to the background class instead of their correct labels. This paper proposes novel class enhancement losses that bypass the background embedding during training and simultaneously exploit the semantic relationship between text embeddings and mask proposals by ranking the similarity scores.
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243] (2021-09-24T07:20:13Z)
  Several multimodal representation learning approaches have been proposed that jointly represent image and text. These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining. We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace region regression and classification with cross-modality region contrastive learning.
- Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification [0.0] (2021-09-12T04:19:17Z)
  We analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks. We adaptively embed class relationships into a contrastive objective function to weigh positives and negatives differently. We find that Label-aware Contrastive Loss outperforms previous contrastive methods.
- Multi-Label Image Classification with Contrastive Learning [57.47567461616912] (2021-07-24T15:00:47Z)
  We show that a direct application of contrastive learning hardly improves performance in multi-label cases. We propose a novel framework for multi-label classification with contrastive learning in a fully supervised setting.
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection [69.2370349274216] (2020-10-06T05:16:38Z)
  Few-shot intent detection is challenging due to the scarcity of available annotated utterances. Semantic components are distilled from utterances via multi-head self-attention. Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
- Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification [52.69730591919885] (2020-07-29T19:38:35Z)
  We present a semi-supervised adversarial training process that minimizes the maximal loss for label-preserving input perturbations. We observe significant gains in effectiveness on document and intent classification for a diverse set of languages.