Supervised Adversarial Contrastive Learning for Emotion Recognition in
Conversations
- URL: http://arxiv.org/abs/2306.01505v2
- Date: Sun, 9 Jul 2023 23:27:27 GMT
- Title: Supervised Adversarial Contrastive Learning for Emotion Recognition in
Conversations
- Authors: Dou Hu, Yinan Bao, Lingwei Wei, Wei Zhou, Songlin Hu
- Abstract summary: We propose a framework for learning class-spread structured representations in a supervised manner.
It can effectively utilize label-level feature consistency and retain fine-grained intra-class features.
Under the framework with CAT, we develop a sequence-based SACL-LSTM to learn label-consistent and context-robust features.
- Score: 24.542445315345464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extracting generalized and robust representations is a major challenge in
emotion recognition in conversations (ERC). To address this, we propose a
supervised adversarial contrastive learning (SACL) framework for learning
class-spread structured representations in a supervised manner. SACL applies
contrast-aware adversarial training to generate worst-case samples and uses
joint class-spread contrastive learning to extract structured representations.
It can effectively utilize label-level feature consistency and retain
fine-grained intra-class features. To avoid the negative impact of adversarial
perturbations on context-dependent data, we design a contextual adversarial
training (CAT) strategy to learn more diverse features from context and enhance
the model's context robustness. Under the framework with CAT, we develop a
sequence-based SACL-LSTM to learn label-consistent and context-robust features
for ERC. Experiments on three datasets show that SACL-LSTM achieves
state-of-the-art performance on ERC. Extended experiments prove the
effectiveness of SACL and CAT.
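The abstract gives no implementation details, so the sketch below only illustrates the general recipe it describes: a supervised contrastive loss over utterance representations combined with an FGSM-style perturbation of the input embeddings to generate worst-case samples. The encoder interface, temperature, and perturbation size are illustrative assumptions, not the authors' SACL-LSTM code, and the contextual adversarial training (CAT) strategy is not reproduced here.
```python
# Minimal sketch (not the authors' implementation) of supervised contrastive
# learning with an adversarial perturbation of the input embeddings.
import torch
import torch.nn.functional as F


def sup_contrastive_loss(features, labels, temperature=0.1):
    """Pull same-label representations together, push different labels apart."""
    z = F.normalize(features, dim=-1)                         # (N, d) unit vectors
    sim = z @ z.t() / temperature                             # pairwise similarities
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    denom = sim.exp().masked_fill(self_mask, 0.0).sum(dim=1, keepdim=True)
    log_prob = sim - denom.log()                              # log-softmax over non-self pairs
    per_anchor = -(log_prob * pos_mask.float()).sum(1) / pos_mask.sum(1).clamp(min=1)
    return per_anchor[pos_mask.any(dim=1)].mean()             # anchors with >=1 positive


def adversarial_contrastive_loss(encoder, embeds, labels, epsilon=1e-2):
    """One FGSM-style step on the input embeddings, then re-apply the loss."""
    embeds = embeds.detach().requires_grad_(True)
    clean_loss = sup_contrastive_loss(encoder(embeds), labels)
    (grad,) = torch.autograd.grad(clean_loss, embeds)
    adv_embeds = (embeds + epsilon * grad.sign()).detach()    # worst-case direction
    return sup_contrastive_loss(encoder(adv_embeds), labels)
```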
Related papers
- Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning [99.05401042153214]
In-context learning (ICL) potentially arises from two major abilities: task recognition (TR) and task learning (TL).
We take the first step by examining the pre-training dynamics of the emergence of ICL.
We propose a simple yet effective method to better integrate these two abilities for ICL at inference time.
arXiv Detail & Related papers (2024-06-20T06:37:47Z)
- On the Effectiveness of Supervision in Asymmetric Non-Contrastive Learning [5.123232962822044]
Asymmetric non-contrastive learning (ANCL) often outperforms its contrastive learning counterpart in self-supervised representation learning.
We study ANCL for supervised representation learning, coined SupSiam and SupBYOL, leveraging labels in ANCL to achieve better representations.
Our analysis reveals that providing supervision to ANCL reduces intra-class variance, and the contribution of supervision should be adjusted to achieve the best performance.
arXiv Detail & Related papers (2024-06-16T06:43:15Z)
- Data Poisoning for In-context Learning [49.77204165250528]
In-context learning (ICL) has been recognized for its innovative ability to adapt to new tasks.
This paper delves into the critical issue of ICL's susceptibility to data poisoning attacks.
We introduce ICLPoison, a specialized attacking framework conceived to exploit the learning mechanisms of ICL.
arXiv Detail & Related papers (2024-02-03T14:20:20Z)
- SSLCL: An Efficient Model-Agnostic Supervised Contrastive Learning Framework for Emotion Recognition in Conversations [20.856739541819056]
Emotion recognition in conversations (ERC) is a rapidly evolving task within the natural language processing community.
We propose an efficient and model-agnostic SCL framework named Supervised Sample-Label Contrastive Learning with Soft-HGR Maximal Correlation (SSLCL).
We introduce a novel perspective on utilizing label representations by projecting discrete labels into dense embeddings through a shallow multilayer perceptron.
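As a rough illustration of the sample-label idea summarized above, the sketch below projects discrete emotion labels into dense embeddings with a shallow MLP and contrasts utterance features against the class embeddings. It omits the Soft-HGR maximal-correlation objective, and the names, dimensions, and temperature are assumptions rather than the SSLCL release.
```python
# Rough illustration only: label embeddings from a shallow MLP, contrasted
# against sample features; hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelEmbedder(nn.Module):
    """Shallow MLP mapping one-hot emotion labels to dense label embeddings."""

    def __init__(self, num_classes: int, dim: int):
        super().__init__()
        self.num_classes = num_classes
        self.mlp = nn.Sequential(nn.Linear(num_classes, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, labels: torch.Tensor) -> torch.Tensor:
        one_hot = F.one_hot(labels, self.num_classes).float()
        return self.mlp(one_hot)                              # (N, dim) label embeddings


def sample_label_contrastive_loss(features, labels, embedder, temperature=0.1):
    """Contrast each utterance feature against all class embeddings; its own class is the positive."""
    class_ids = torch.arange(embedder.num_classes, device=features.device)
    class_emb = F.normalize(embedder(class_ids), dim=-1)      # (C, dim)
    logits = F.normalize(features, dim=-1) @ class_emb.t() / temperature
    return F.cross_entropy(logits, labels)                    # InfoNCE over the classes
```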
arXiv Detail & Related papers (2023-10-25T14:41:14Z)
- Improving Representation Learning for Histopathologic Images with Cluster Constraints [31.426157660880673]
Self-supervised learning (SSL) pretraining strategies are emerging as a viable alternative.
We introduce an SSL framework for transferable representation learning and semantically meaningful clustering.
Our approach outperforms common SSL methods in downstream classification and clustering tasks.
arXiv Detail & Related papers (2023-10-18T21:20:44Z)
- Cluster-Level Contrastive Learning for Emotion Recognition in Conversations [13.570186295041644]
A key challenge for Emotion Recognition in Conversations (ERC) is to distinguish semantically similar emotions.
Some works utilise Supervised Contrastive Learning (SCL), which uses categorical emotion labels as supervision signals and performs contrastive learning in a high-dimensional semantic space.
We propose a novel low-dimensional Supervised Cluster-level Contrastive Learning (SCCL) method, which first reduces the high-dimensional SCL space to a three-dimensional affect representation space.
arXiv Detail & Related papers (2023-02-07T14:49:20Z)
- Non-Contrastive Learning Meets Language-Image Pre-Training [145.6671909437841]
We study the validity of non-contrastive language-image pre-training (nCLIP).
We introduce xCLIP, a multi-tasking framework combining CLIP and nCLIP, and show that nCLIP aids CLIP in enhancing feature semantics.
arXiv Detail & Related papers (2022-10-17T17:57:46Z)
- On Higher Adversarial Susceptibility of Contrastive Self-Supervised Learning [104.00264962878956]
Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of supervised learning in image and video classification.
It is still largely unknown if the nature of the representation induced by the two learning paradigms is similar.
We identify the uniform distribution of data representations over a unit hypersphere in the CSL representation space as the key contributor to this higher adversarial susceptibility.
We devise strategies that are simple, yet effective in improving model robustness with CSL training.
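For intuition, the "uniform distribution over a unit hypersphere" property is commonly quantified with the uniformity measure of Wang and Isola; the snippet below computes that measure, though the cited paper may use a different formulation.
```python
# Illustration only: one common uniformity measure for unit-normalized features.
import torch


def uniformity(features: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Lower values mean the representations are spread more uniformly on the sphere."""
    z = torch.nn.functional.normalize(features, dim=-1)  # project onto the unit sphere
    sq_dists = torch.pdist(z, p=2).pow(2)                # pairwise squared distances
    return sq_dists.mul(-t).exp().mean().log()           # log of mean Gaussian potential
```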
arXiv Detail & Related papers (2022-07-22T03:49:50Z)
- Using Representation Expressiveness and Learnability to Evaluate Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess the learnability of a representation.
CL is measured in terms of the performance of a KNN classifier trained to predict labels obtained by clustering the representations with K-means.
We find that CL better correlates with in-distribution model performance than other competing recent evaluation schemes.
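A minimal sketch of that recipe with scikit-learn follows; the number of clusters, the neighbourhood size, and the 50/50 split are illustrative choices, not the authors' exact protocol.
```python
# Minimal sketch: K-means pseudo-labels, then KNN accuracy on a held-out split.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def cluster_learnability(reps: np.ndarray, n_clusters: int = 10, k: int = 5, seed: int = 0) -> float:
    """Accuracy of a KNN at predicting K-means cluster assignments on held-out points."""
    pseudo_labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(reps)
    x_tr, x_te, y_tr, y_te = train_test_split(reps, pseudo_labels, test_size=0.5, random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=k).fit(x_tr, y_tr)
    return knn.score(x_te, y_te)  # higher = more learnable cluster structure
```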
arXiv Detail & Related papers (2022-06-02T19:05:13Z)
- HiCLRE: A Hierarchical Contrastive Learning Framework for Distantly Supervised Relation Extraction [24.853265244512954]
We propose a hierarchical contrastive learning framework for Distantly Supervised relation extraction (HiCLRE) to reduce noisy sentences.
Specifically, we propose a three-level hierarchical learning framework whose levels interact with each other to generate de-noised, context-aware representations.
Experiments demonstrate that HiCLRE significantly outperforms strong baselines on various mainstream DSRE datasets.
arXiv Detail & Related papers (2022-02-27T12:48:26Z)
- Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.