Improving the Generalizability of Text-Based Emotion Detection by
Leveraging Transformers with Psycholinguistic Features
- URL: http://arxiv.org/abs/2212.09465v1
- Date: Mon, 19 Dec 2022 13:58:48 GMT
- Title: Improving the Generalizability of Text-Based Emotion Detection by
Leveraging Transformers with Psycholinguistic Features
- Authors: Sourabh Zanwar, Daniel Wiechmann, Yu Qiao, Elma Kerz
- Abstract summary: We propose approaches for text-based emotion detection that leverage transformer models (BERT and RoBERTa) in combination with Bidirectional Long Short-Term Memory (BiLSTM) networks trained on a comprehensive set of psycholinguistic features.
We find that the proposed hybrid models improve the ability to generalize to out-of-distribution data compared to a standard transformer-based approach.
- Score: 27.799032561722893
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, there has been increased interest in building predictive
models that harness natural language processing and machine learning techniques
to detect emotions from various text sources, including social media posts,
micro-blogs or news articles. Yet, deployment of such models in real-world
sentiment and emotion applications faces challenges, in particular poor
out-of-domain generalizability. This is likely due to domain-specific
differences (e.g., topics, communicative goals, and annotation schemes) that
make transfer between different emotion recognition models difficult. In
this work, we propose approaches for text-based emotion detection that leverage
transformer models (BERT and RoBERTa) in combination with Bidirectional Long
Short-Term Memory (BiLSTM) networks trained on a comprehensive set of
psycholinguistic features. First, we evaluate the performance of our models
within-domain on two benchmark datasets: GoEmotions and ISEAR. Second, we
conduct transfer learning experiments on six datasets from the Unified Emotion
Dataset to evaluate their out-of-domain robustness. We find that the proposed
hybrid models improve the ability to generalize to out-of-distribution data
compared to a standard transformer-based approach. Moreover, we observe that
these models perform competitively on in-domain data.
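The hybrid design described in the abstract pairs a transformer text encoder with a BiLSTM over psycholinguistic features. Below is a minimal PyTorch sketch of that pairing; the feature dimensionality, the concatenation-based fusion, and the name HybridEmotionClassifier are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the hybrid design: transformer [CLS] representation
# concatenated with a BiLSTM summary of psycholinguistic feature vectors.
# Dimensions and concatenation-based fusion are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel

class HybridEmotionClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased",
                 psycho_dim=32, lstm_hidden=64, num_emotions=7):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        # BiLSTM over the sequence of psycholinguistic feature vectors.
        self.bilstm = nn.LSTM(psycho_dim, lstm_hidden,
                              batch_first=True, bidirectional=True)
        fused_dim = self.encoder.config.hidden_size + 2 * lstm_hidden
        self.classifier = nn.Linear(fused_dim, num_emotions)

    def forward(self, input_ids, attention_mask, psycho_feats):
        # [CLS] token embedding summarizes the text.
        text_repr = self.encoder(input_ids=input_ids,
                                 attention_mask=attention_mask
                                 ).last_hidden_state[:, 0]
        # Final hidden state of each LSTM direction summarizes the features.
        _, (h_n, _) = self.bilstm(psycho_feats)
        psycho_repr = torch.cat([h_n[0], h_n[1]], dim=-1)
        return self.classifier(torch.cat([text_repr, psycho_repr], dim=-1))
```

A batch would be passed as model(input_ids, attention_mask, psycho_feats), with psycho_feats of shape (batch, seq_len, psycho_dim).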
Related papers
- Detecting Machine-Generated Long-Form Content with Latent-Space Variables [54.07946647012579]
Existing zero-shot detectors primarily focus on token-level distributions, which are vulnerable to real-world domain shifts.
We propose a more robust method that incorporates abstract elements, such as event transitions, as key deciding factors to detect machine versus human texts.
arXiv Detail & Related papers (2024-10-04T18:42:09Z)
- Evaluating the Efficacy of AI Techniques in Textual Anonymization: A Comparative Study [5.962542204378336]
This research examines text anonymisation methods, focusing on Conditional Random Fields (CRF), Long Short-Term Memory (LSTM), Embeddings from Language Models (ELMo), and Transformer architectures.
Preliminary results indicate that CRF, LSTM, and ELMo individually outperform traditional methods.
arXiv Detail & Related papers (2024-05-09T11:29:25Z)
- ASEM: Enhancing Empathy in Chatbot through Attention-based Sentiment and Emotion Modeling [0.0]
We present a novel solution that employs a mixture of experts (multiple encoders) to offer distinct perspectives on the emotional state of the user's utterance.
We propose an end-to-end model architecture called ASEM that performs emotion analysis on top of sentiment analysis for open-domain chatbots.
arXiv Detail & Related papers (2024-02-25T20:36:51Z)
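The mixture-of-experts idea in the ASEM summary, several encoders each giving its own view of the utterance and combined by a learned gate, can be sketched as follows. The encoder design, dimensions, and the name EncoderMixture are assumptions for illustration, not ASEM's actual architecture.

```python
# Sketch of a mixture of experts over multiple utterance encoders, as the
# ASEM summary describes at a high level. All shapes are assumptions.
import torch
import torch.nn as nn

class EncoderMixture(nn.Module):
    def __init__(self, input_dim=300, hidden_dim=128, num_experts=4):
        super().__init__()
        # Each "expert" is an independent encoder offering its own view.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.Tanh())
             for _ in range(num_experts)])
        # The gate scores each expert from the raw utterance representation.
        self.gate = nn.Linear(input_dim, num_experts)

    def forward(self, utterance_vec):
        views = torch.stack([e(utterance_vec) for e in self.experts], dim=1)
        weights = torch.softmax(self.gate(utterance_vec), dim=-1)
        # Convex combination of the expert views.
        return (weights.unsqueeze(-1) * views).sum(dim=1)
```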
- Cross-Language Speech Emotion Recognition Using Multimodal Dual Attention Transformers [5.538923337818467]
State-of-the-art systems are unable to achieve improved performance in cross-language settings.
We propose a Multimodal Dual Attention Transformer model to improve cross-language SER.
arXiv Detail & Related papers (2023-06-23T22:38:32Z)
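A dual attention block of the kind this title suggests lets each modality attend over the other with standard multi-head attention. The sketch below is a generic construction under that reading, not the paper's exact model.

```python
# Generic dual cross-modal attention block: text queries audio and audio
# queries text, then both attended sequences are pooled and fused.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.text_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_seq, audio_seq):
        # Each modality attends over the other modality's sequence.
        t, _ = self.text_to_audio(text_seq, audio_seq, audio_seq)
        a, _ = self.audio_to_text(audio_seq, text_seq, text_seq)
        return torch.cat([t.mean(dim=1), a.mean(dim=1)], dim=-1)
```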
- A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks [60.38369406877899]
The Transformer is a deep neural network that employs a self-attention mechanism to capture contextual relationships within sequential data.
Transformer models excel at handling long-range dependencies between input sequence elements and enable parallel processing.
The survey identifies the top five application domains for transformer-based models.
arXiv Detail & Related papers (2023-06-11T23:13:51Z)
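The self-attention mechanism the survey centers on is standard scaled dot-product attention: every position attends to every other position in a single parallelizable step, which is what handles long-range dependencies. A compact reference implementation:

```python
# Scaled dot-product self-attention over a batch of sequences.
import math
import torch

def self_attention(x, w_q, w_k, w_v):
    """x: (batch, seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Similarity of every position with every other, scaled by sqrt(d_k).
    scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))
    # Attention weights mix the value vectors of all positions at once.
    return torch.softmax(scores, dim=-1) @ v
```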
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation masks generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
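Late fusion, as named in the summary above, combines modality-specific predictions only at the decision level. A minimal sketch, assuming each pre-trained branch already outputs per-emotion logits; the LateFusion wrapper and its learned fusion layer are illustrative, not the paper's exact networks.

```python
# Decision-level (late) fusion: speech and text models each emit their own
# emotion logits, which are combined only at the end.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, speech_model: nn.Module, text_model: nn.Module,
                 num_emotions=4):
        super().__init__()
        self.speech_model = speech_model   # e.g. a fine-tuned speaker encoder
        self.text_model = text_model       # e.g. a fine-tuned BERT classifier
        # Learned weighting of the two per-class score vectors.
        self.fuse = nn.Linear(2 * num_emotions, num_emotions)

    def forward(self, speech_input, text_input):
        s_logits = self.speech_model(speech_input)
        t_logits = self.text_model(text_input)
        return self.fuse(torch.cat([s_logits, t_logits], dim=-1))
```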
- Improving Generation and Evaluation of Visual Stories via Semantic Consistency [72.00815192668193]
Given a series of natural language captions, an agent must generate a sequence of images that correspond to the captions.
Prior work has introduced recurrent generative models which outperform text-to-image synthesis models on this task.
We present a number of improvements to prior modeling approaches, including the addition of a dual learning framework.
arXiv Detail & Related papers (2021-05-20T20:42:42Z)
- Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition [7.799182201815763]
This paper aims to bring a new lightweight yet powerful solution for the task of Emotion Recognition and Sentiment Analysis.
Our motivation is to propose two architectures based on Transformers and modulation that combine linguistic and acoustic inputs from a wide range of datasets to challenge, and sometimes surpass, the state of the art in the field.
arXiv Detail & Related papers (2020-10-05T14:46:20Z)
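One generic reading of "modulation" in this setting is a FiLM-style block in which an acoustic summary produces a scale and shift applied to the linguistic features. The sketch below follows that assumption only; it is not the paper's exact mechanism.

```python
# FiLM-style modulation: an acoustic summary vector produces a per-feature
# scale (gamma) and shift (beta) applied to the linguistic sequence.
import torch
import torch.nn as nn

class AcousticModulation(nn.Module):
    def __init__(self, text_dim=256, audio_dim=128):
        super().__init__()
        self.scale = nn.Linear(audio_dim, text_dim)
        self.shift = nn.Linear(audio_dim, text_dim)

    def forward(self, text_feats, audio_summary):
        # text_feats: (batch, seq_len, text_dim); audio_summary: (batch, audio_dim)
        gamma = self.scale(audio_summary).unsqueeze(1)
        beta = self.shift(audio_summary).unsqueeze(1)
        return gamma * text_feats + beta
```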
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
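Zero-shot handling of unseen emotions via shared embeddings, as the last entry describes, can be illustrated by scoring an utterance representation against embeddings of emotion label names. This is a generic sketch of the idea, with all names hypothetical, not the paper's model.

```python
# Zero-shot emotion scoring in a shared embedding space: an utterance vector
# is compared with embeddings of emotion label names, so emotions unseen in
# training can still be ranked.
import torch
import torch.nn.functional as F

def zero_shot_emotion(utterance_emb, label_embs, label_names):
    """utterance_emb: (dim,); label_embs: (num_labels, dim)."""
    sims = F.cosine_similarity(utterance_emb.unsqueeze(0), label_embs, dim=-1)
    return label_names[int(sims.argmax())]
```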
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.