REDAffectiveLM: Leveraging Affect Enriched Embedding and
Transformer-based Neural Language Model for Readers' Emotion Detection
- URL: http://arxiv.org/abs/2301.08995v1
- Date: Sat, 21 Jan 2023 19:28:25 GMT
- Title: REDAffectiveLM: Leveraging Affect Enriched Embedding and
Transformer-based Neural Language Model for Readers' Emotion Detection
- Authors: Anoop Kadan, Deepak P., Manjary P. Gangan, Savitha Sam Abraham, Lajish V. L.
- Abstract summary: We propose a novel approach for Readers' Emotion Detection from short-text documents using a deep learning model called REDAffectiveLM.
We leverage context-specific and affect enriched representations by using a transformer-based pre-trained language model in tandem with affect enriched Bi-LSTM+Attention.
- Score: 3.6678641723285446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Technological advancements in web platforms allow people to express and share
emotions towards textual write-ups written and shared by others. This brings
about two distinct domains of analysis: the emotion expressed by the writer
and the emotion elicited from the readers. In this paper, we propose a novel
approach for Readers' Emotion Detection from short-text documents using a deep
learning model called REDAffectiveLM. Within state-of-the-art NLP tasks, it is
well understood that utilizing context-specific representations from
transformer-based pre-trained language models helps achieve improved
performance. Within this affective computing task, we explore how incorporating
affective information can further enhance performance. Towards this, we
leverage context-specific and affect enriched representations by using a
transformer-based pre-trained language model in tandem with affect enriched
Bi-LSTM+Attention. For empirical evaluation, we procure a new dataset REN-20k,
besides using RENh-4k and SemEval-2007. We evaluate the performance of our
REDAffectiveLM rigorously across these datasets, against a vast set of
state-of-the-art baselines, where our model consistently outperforms baselines
and obtains statistically significant results. Our results establish that
utilizing affect enriched representation along with context-specific
representation within a neural architecture can considerably enhance readers'
emotion detection. Since the impact of affect enrichment specifically on
readers' emotion detection is not well explored, we conduct a detailed analysis
of the affect enriched Bi-LSTM+Attention component using qualitative and
quantitative model behavior evaluation techniques. We observe that, compared to
conventional semantic embedding, affect enriched embedding increases the
ability of the network to identify and assign weight to the key terms
responsible for readers' emotion detection.
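To make the dual-channel design described above concrete, the sketch below pairs a pre-trained transformer branch (context-specific representations) with a Bi-LSTM+Attention branch over affect enriched word embeddings and fuses the two by concatenation before classification. It is a minimal PyTorch illustration under assumed choices: the class name DualChannelEmotionModel, BERT as the transformer, additive attention pooling, and concatenation-based fusion are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal PyTorch sketch of a dual-channel readers' emotion classifier:
# channel 1 is a pre-trained transformer (context-specific representations),
# channel 2 is a Bi-LSTM with additive attention over affect enriched
# word embeddings. Names, dimensions, and the concatenation-based fusion
# are illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
from transformers import AutoModel


class DualChannelEmotionModel(nn.Module):
    def __init__(self, affect_embeddings, num_emotions,
                 transformer_name="bert-base-uncased", lstm_hidden=128):
        super().__init__()
        # Channel 1: transformer encoder for context-specific representations.
        self.transformer = AutoModel.from_pretrained(transformer_name)
        # Channel 2: affect enriched embedding matrix (e.g. emotion-aware
        # word vectors) feeding a bidirectional LSTM.
        self.affect_emb = nn.Embedding.from_pretrained(affect_embeddings,
                                                       freeze=False)
        self.bilstm = nn.LSTM(affect_embeddings.size(1), lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * lstm_hidden, 1)  # additive attention scorer
        fused_dim = self.transformer.config.hidden_size + 2 * lstm_hidden
        self.classifier = nn.Linear(fused_dim, num_emotions)

    def forward(self, input_ids, attention_mask, affect_token_ids):
        # [CLS]-position vector as the pooled transformer representation.
        ctx = self.transformer(input_ids=input_ids,
                               attention_mask=attention_mask
                               ).last_hidden_state[:, 0]
        # Bi-LSTM over affect enriched embeddings, then attention pooling.
        h, _ = self.bilstm(self.affect_emb(affect_token_ids))
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (B, T)
        affect = torch.bmm(weights.unsqueeze(1), h).squeeze(1)    # (B, 2H)
        # Concatenate both channels and score the readers' emotion labels;
        # apply softmax or a suitable loss (e.g. cross-entropy) outside.
        return self.classifier(torch.cat([ctx, affect], dim=-1))
```

The per-token attention weights from the affect channel (the weights tensor) can be inspected directly, which is one way to probe the observation above that affect enriched embeddings help the network assign weight to emotion-bearing terms.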
Related papers
- Advancing Aspect-Based Sentiment Analysis through Deep Learning Models [4.0064131990718606]
This study introduces an innovative edge-enhanced GCN, named SentiSys, to navigate the syntactic graph while preserving intact feature information.
The experimental results demonstrate enhanced performance in aspect-based sentiment analysis with the use of SentiSys.
arXiv Detail & Related papers (2024-04-04T07:31:56Z)
- ASEM: Enhancing Empathy in Chatbot through Attention-based Sentiment and Emotion Modeling [0.0]
We present a novel solution that employs a mixture of experts (multiple encoders) to offer distinct perspectives on the emotional state of the user's utterance.
We propose an end-to-end model architecture called ASEM that performs emotion analysis on top of sentiment analysis for open-domain chatbots.
arXiv Detail & Related papers (2024-02-25T20:36:51Z)
- Analysis of the Evolution of Advanced Transformer-Based Language Models: Experiments on Opinion Mining [0.5735035463793008]
This paper studies the behaviour of the cutting-edge Transformer-based language models on opinion mining.
Our comparative study offers guidance and paves the way for production engineers in choosing which approach to focus on.
arXiv Detail & Related papers (2023-08-07T01:10:50Z)
- Improving the Generalizability of Text-Based Emotion Detection by Leveraging Transformers with Psycholinguistic Features [27.799032561722893]
We propose approaches for text-based emotion detection that leverage transformer models (BERT and RoBERTa) in combination with Bidirectional Long Short-Term Memory (BiLSTM) networks trained on a comprehensive set of psycholinguistic features.
We find that the proposed hybrid models improve the ability to generalize to out-of-distribution data compared to a standard transformer-based approach.
arXiv Detail & Related papers (2022-12-19T13:58:48Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Analyzing the Influence of Dataset Composition for Emotion Recognition [0.0]
We analyze the influence data collection methodology has on two multimodal emotion recognition datasets.
Experiments with the full IEMOCAP dataset indicate that the composition negatively influences generalization performance when compared to the OMG-Emotion Behavior dataset.
arXiv Detail & Related papers (2021-03-05T14:20:59Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction [80.38130122127882]
We introduce 14 probing tasks targeting linguistic properties relevant to neural relation extraction (RE).
We use them to study representations learned by more than 40 different encoder architecture and linguistic feature combinations trained on two datasets.
We find that the bias induced by the architecture and the inclusion of linguistic features are clearly expressed in the probing task performance.
arXiv Detail & Related papers (2020-04-17T09:17:40Z)
- A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)