A Soft Contrastive Learning-based Prompt Model for Few-shot Sentiment
Analysis
- URL: http://arxiv.org/abs/2312.10479v1
- Date: Sat, 16 Dec 2023 15:17:28 GMT
- Title: A Soft Contrastive Learning-based Prompt Model for Few-shot Sentiment
Analysis
- Authors: Jingyi Zhou, Jie Zhou, Jiabao Zhao, Siyin Wang, Haijun Shan, Tao Gui,
Qi Zhang, Xuanjing Huang
- Abstract summary: We propose a Soft Contrastive learning-based Prompt (SCP) model for few-shot sentiment analysis.
First, we design a sentiment-aware chain-of-thought prompt module that guides the model to predict the sentiment from coarse-grained to fine-grained.
Then, we propose a soft contrastive learning algorithm that takes the correlation of the labels into account.
- Score: 38.17825180485807
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Few-shot text classification has attracted great interest in both academia
and industry due to the lack of labeled data in many fields. Compared with
general text classification (e.g., topic classification), few-shot sentiment
classification is more challenging because the semantic distances among the
classes are more subtle. For instance, the semantic distances between
sentiment labels of the same polarity (e.g., "love" and "joy", "remorse" and
"sadness") are small, while the distances between labels of opposite
polarities (e.g., "love" and "sadness") are large. To address this problem,
we propose a Soft Contrastive learning-based Prompt (SCP) model for few-shot
sentiment analysis. First, we design a sentiment-aware chain-of-thought
prompt module that guides the model to predict the sentiment from
coarse-grained to fine-grained via a series of intermediate reasoning steps.
Then, we propose a soft contrastive learning algorithm that takes the
correlation of the labels into account. Experiments on several sentiment
analysis datasets demonstrate the advantages of SCP over SOTA baselines
(e.g., ChatGPT).
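To make the two components concrete, here is a minimal sketch, assuming a PyTorch setup, of a coarse-to-fine sentiment prompt and a label-correlation-weighted ("soft") contrastive loss. This is not the authors' released implementation: the prompt wording, the label-similarity matrix label_sim, the temperature tau, and all names are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the authors' released code):
# a coarse-to-fine sentiment prompt and a "soft" contrastive loss in which
# the target distribution over pairs is weighted by label correlation.
import torch
import torch.nn.functional as F

# Hypothetical coarse-to-fine chain-of-thought prompt template: the model
# first resolves the coarse polarity, then the fine-grained label.
COT_PROMPT = (
    "Review: {text}\n"
    "Step 1: The overall polarity (positive/negative) is [MASK].\n"
    "Step 2: Given that polarity, the fine-grained sentiment is [MASK]."
)

def soft_contrastive_loss(z, labels, label_sim, tau=0.1):
    """z: (B, d) sentence embeddings; labels: (B,) label ids;
    label_sim: (C, C) label-correlation matrix in [0, 1], e.g. high for
    "love"/"joy", low for "love"/"sadness" (an assumed input here)."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau                                # (B, B) similarities
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, -1e9)                     # exclude self-pairs
    # Soft targets: each pair contributes in proportion to how correlated
    # the two labels are, instead of a hard 0/1 positive/negative split.
    targets = label_sim[labels][:, labels].masked_fill(eye, 0.0)
    targets = targets / targets.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    log_prob = F.log_softmax(sim, dim=-1)
    return -(targets * log_prob).sum(dim=-1).mean()
```

With a one-hot label_sim (the identity matrix), the soft targets reduce to ordinary supervised-contrastive positives; the soft matrix is what encodes that same-polarity labels such as "love" and "joy" should remain closer than opposite-polarity ones.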
Related papers
- What is Sentiment Meant to Mean to Language Models? [0.0]
"sentiment" entails a wide variety of concepts depending on the domain and tools used.
"sentiment" has been used to mean emotion, opinions, market movements, or simply a general good-bad'' dimension.
arXiv Detail & Related papers (2024-05-03T19:37:37Z) - SoftMCL: Soft Momentum Contrastive Learning for Fine-grained Sentiment-aware Pre-training [8.148261580909425]
This study proposes a soft momentum contrastive learning (SoftMCL) for fine-grained sentiment-aware pre-training.
The proposed SoftMCL is applied at both the word and sentence levels to enhance the model's ability to learn affective information.
arXiv Detail & Related papers (2024-05-03T03:15:38Z) - Linguistic features for sentence difficulty prediction in ABSA [0.3172761915061083]
We study the impact of domain diversity and syntactic diversity on difficulty.
We employ two ways of defining sentence difficulty.
We also define 9 linguistic features that, combined, aim at estimating the difficulty at sentence level.
arXiv Detail & Related papers (2024-02-05T16:31:03Z) - Bridging the Gap between Model Explanations in Partially Annotated
Multi-label Classification [85.76130799062379]
We study how false negative labels affect the model's explanation.
We propose to boost the attribution scores of the model trained with partial labels to make its explanation resemble that of the model trained with full labels.
arXiv Detail & Related papers (2023-04-04T14:00:59Z) - Label-enhanced Prototypical Network with Contrastive Learning for
Multi-label Few-shot Aspect Category Detection [17.228616743739412]
Multi-label aspect category detection allows a given review sentence to contain multiple aspect categories.
We propose a novel label-enhanced prototypical network (LPN) for multi-label few-shot aspect category detection (a generic prototypical-network sketch follows after this list).
arXiv Detail & Related papers (2022-06-14T02:37:44Z) - Not All Negatives are Equal: Label-Aware Contrastive Loss for
Fine-grained Text Classification [0.0]
We analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks.
We adaptively embed class relationships into a contrastive objective function to help differently weigh the positives and negatives.
We find that Label-aware Contrastive Loss outperforms previous contrastive methods.
arXiv Detail & Related papers (2021-09-12T04:19:17Z) - A Theory-Driven Self-Labeling Refinement Method for Contrastive
Representation Learning [111.05365744744437]
Unsupervised contrastive learning labels crops of the same image as positives, and other image crops as negatives.
In this work, we first prove that for contrastive learning, inaccurate label assignment heavily impairs its generalization for semantic instance discrimination.
Inspired by this theory, we propose a novel self-labeling refinement approach for contrastive learning.
arXiv Detail & Related papers (2021-06-28T14:24:52Z) - Weakly-Supervised Aspect-Based Sentiment Analysis via Joint
Aspect-Sentiment Topic Embedding [71.2260967797055]
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn <sentiment, aspect> joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
arXiv Detail & Related papers (2020-10-13T21:33:24Z) - Dynamic Semantic Matching and Aggregation Network for Few-shot Intent
Detection [69.2370349274216]
Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances.
Semantic components are distilled from utterances via multi-head self-attention.
Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
arXiv Detail & Related papers (2020-10-06T05:16:38Z) - Learning to Compare Relation: Semantic Alignment for Few-Shot Learning [48.463122399494175]
We present a novel semantic alignment model to compare relations, which is robust to content misalignment.
We conduct extensive experiments on several few-shot learning datasets.
arXiv Detail & Related papers (2020-02-29T08:37:02Z)