ContrastNet: A Contrastive Learning Framework for Few-Shot Text
Classification
- URL: http://arxiv.org/abs/2305.09269v1
- Date: Tue, 16 May 2023 08:22:17 GMT
- Title: ContrastNet: A Contrastive Learning Framework for Few-Shot Text
Classification
- Authors: Junfan Chen, Richong Zhang, Yongyi Mao, Jie Xu
- Abstract summary: We propose ContrastNet to tackle both discriminative representation and overfitting problems in few-shot text classification.
Experiments on 8 few-shot text classification datasets show that ContrastNet outperforms the current state-of-the-art models.
- Score: 40.808421462004866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot text classification has recently been promoted by the meta-learning
paradigm which aims to identify target classes with knowledge transferred from
source classes with sets of small tasks named episodes. Despite their success,
existing works building their meta-learner based on Prototypical Networks are
unsatisfactory in learning discriminative text representations between similar
classes, which may lead to contradictions during label prediction. In addition,
the task-level and instance-level overfitting problems in few-shot text
classification caused by a few training examples are not sufficiently tackled.
In this work, we propose a contrastive learning framework named ContrastNet to
tackle both discriminative representation and overfitting problems in few-shot
text classification. ContrastNet learns to pull closer text representations
belonging to the same class and push away text representations belonging to
different classes, while simultaneously introducing unsupervised contrastive
regularization at both task-level and instance-level to prevent overfitting.
Experiments on 8 few-shot text classification datasets show that ContrastNet
outperforms the current state-of-the-art models.
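The abstract describes two ingredients: a supervised contrastive objective that pulls same-class text representations together and pushes different-class ones apart, and unsupervised contrastive regularization between augmented views. The PyTorch sketch below illustrates generic versions of these two losses; it is not the authors' released implementation, the temperature, loss weighting, and `encode` function are assumptions, and the task-level regularizer (which contrasts representations across episodes in an analogous way) is omitted.

```python
# Minimal PyTorch sketch of the two objectives described in the abstract:
# (1) a supervised contrastive loss that pulls same-class text embeddings
#     together and pushes different-class embeddings apart, and
# (2) an unsupervised instance-level contrastive regularizer built from two
#     augmented views of each text.
# Illustration only -- not the authors' code; temperature, weighting, and the
# `encode` function are assumed.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(z, labels, tau=0.1):
    """z: (N, d) text embeddings, labels: (N,) class ids (each class should
    appear at least twice in the batch so every anchor has a positive)."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau                                   # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))         # drop self-pairs
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)     # row-wise log-softmax
    has_pos = pos.any(dim=1)
    mean_pos = log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return -mean_pos[has_pos].mean()                        # pull positives, push the rest


def instance_level_contrastive_loss(z1, z2, tau=0.1):
    """Unsupervised regularizer: the two augmented views of a text are positives."""
    n = len(z1)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)     # (2N, d)
    sim = z @ z.t() / tau
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                    # InfoNCE over both views


# Hypothetical usage inside an episode, with `encode` mapping texts to embeddings:
# loss = supervised_contrastive_loss(encode(texts), labels) \
#        + 0.1 * instance_level_contrastive_loss(encode(views_a), encode(views_b))
```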
Related papers
- Class-Aware Contrastive Optimization for Imbalanced Text Classification [19.537124894139833]
We show that leveraging class-aware contrastive optimization combined with denoising autoencoders can successfully tackle imbalanced text classification tasks.
Our proposal demonstrates a notable increase in performance across a wide variety of text datasets.
arXiv Detail & Related papers (2024-10-29T16:34:08Z)
- DualCoOp++: Fast and Effective Adaptation to Multi-Label Recognition with Limited Annotations [79.433122872973]
Multi-label image recognition in the low-label regime is a task of great challenge and practical significance.
We leverage the powerful alignment between textual and visual features pretrained with millions of auxiliary image-text pairs.
We introduce an efficient and effective framework called Evidence-guided Dual Context Optimization (DualCoOp++).
arXiv Detail & Related papers (2023-08-03T17:33:20Z)
- An Effective Deployment of Contrastive Learning in Multi-label Text Classification [6.697876965452054]
We propose five novel contrastive losses for multi-label text classification tasks.
These are Strict Contrastive Loss (SCL), Intra-label Contrastive Loss (ICL), Jaccard Similarity Contrastive Loss (JSCL), Jaccard Similarity Probability Contrastive Loss (JSPCL), and Stepwise Label Contrastive Loss (SLCL).
arXiv Detail & Related papers (2022-12-01T15:00:16Z)
- Keywords and Instances: A Hierarchical Contrastive Learning Framework Unifying Hybrid Granularities for Text Generation [59.01297461453444]
We propose a hierarchical contrastive learning mechanism, which can unify semantic meanings at hybrid granularities in the input text.
Experiments demonstrate that our model outperforms competitive baselines on paraphrasing, dialogue generation, and storytelling tasks.
arXiv Detail & Related papers (2022-05-26T13:26:03Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- CLLD: Contrastive Learning with Label Distance for Text Classificatioin [0.6299766708197883]
We propose Contrastive Learning with Label Distance (CLLD) for learning contrastive classes.
CLLD preserves flexibility within the subtle differences that lead to different label assignments.
Our experiments suggest that the learned label distance relieves the adversarial relations between similar classes.
arXiv Detail & Related papers (2021-10-25T07:07:14Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL).
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection [69.2370349274216]
Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances.
Semantic components are distilled from utterances via multi-head self-attention (see the generic sketch after this list).
Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
arXiv Detail & Related papers (2020-10-06T05:16:38Z)
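For the intent-detection entry above, which distills semantic components from utterances via multi-head self-attention, the following is a minimal, generic PyTorch sketch of structured self-attention pooling; the `SemanticComponents` module, its layer sizes, and the number of components are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal, generic sketch of distilling K "semantic component" vectors from an
# utterance's token representations with multi-head (structured) self-attention.
# The module name, layer sizes, and number of components are assumptions for
# illustration, not the paper's exact architecture.
import torch
import torch.nn as nn


class SemanticComponents(nn.Module):
    def __init__(self, hidden_dim=768, attn_dim=256, num_components=4):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, attn_dim)
        self.heads = nn.Linear(attn_dim, num_components)     # one attention head per component

    def forward(self, tokens, mask=None):
        # tokens: (batch, seq_len, hidden_dim); mask: (batch, seq_len), True = real token
        scores = self.heads(torch.tanh(self.proj(tokens)))   # (batch, seq_len, K)
        if mask is not None:
            scores = scores.masked_fill(~mask.unsqueeze(-1), float("-inf"))
        attn = scores.softmax(dim=1)                          # attention over tokens per head
        return attn.transpose(1, 2) @ tokens                  # (batch, K, hidden_dim) components


# Hypothetical usage with any encoder output, e.g. BERT last hidden states:
# components = SemanticComponents()(encoder_states, attention_mask.bool())
```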