CEFER: A Four Facets Framework based on Context and Emotion embedded
features for Implicit and Explicit Emotion Recognition
- URL: http://arxiv.org/abs/2209.13999v1
- Date: Wed, 28 Sep 2022 11:16:32 GMT
- Title: CEFER: A Four Facets Framework based on Context and Emotion embedded
features for Implicit and Explicit Emotion Recognition
- Authors: Fereshteh Khoshnam, Ahmad Baraani-Dastjerdi, M.J. Liaghatdar
- Abstract summary: We propose a framework that analyses text at both the sentence and word levels.
We name it CEFER (Context and Emotion embedded Framework for Emotion Recognition).
CEFER combines the emotional vector of each word, including explicit and implicit emotions, with the feature vector of each word based on context.
- Score: 2.5137859989323537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: People's conduct and reactions are driven by their emotions. Online social
media is becoming a great instrument for expressing emotions in written form.
Paying attention to the context and the entire sentence helps us detect
emotion in text. However, this perspective can prevent us from noticing some
emotional words or phrases in the text, particularly when the words express an
emotion implicitly rather than explicitly. On the other hand, focusing only on
the words and ignoring the context results in a distorted understanding of the
sentence meaning and feeling. In this paper, we propose a framework that
analyses text at both the sentence and word levels. We name it CEFER (Context
and Emotion embedded Framework for Emotion Recognition). Our approach has four
facets: extracting data by considering the entire sentence and each individual
word simultaneously, and covering both implicit and explicit emotions. The
knowledge gained from these data not only mitigates the impact of flaws in the
preceding approaches but also strengthens the feature vector. We evaluate
several feature spaces using the BERT family and design CEFER based on them.
CEFER combines the emotional vector of each word, including explicit and
implicit emotions, with the feature vector of each word based on context. CEFER
performs better than the BERT family. The experimental results demonstrate
that identifying implicit emotions is more challenging than detecting explicit
emotions. CEFER improves the accuracy of implicit emotion recognition.
According to the results, CEFER performs 5% better than the BERT family in
recognizing explicit emotions and 3% better in recognizing implicit emotions.
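A minimal sketch of the fusion the abstract describes, assuming a BERT-family encoder and a toy explicit-emotion lexicon. The lexicon, emotion set, and the fused_features helper below are illustrative assumptions, not CEFER's actual components (the paper's emotion vectors also capture implicit emotions):

```python
# Sketch: concatenate each token's contextual (BERT) embedding with a
# per-word emotion vector, in the spirit of the fusion the abstract describes.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Toy explicit-emotion lexicon (assumption): word -> [joy, sadness, anger, fear].
EMOTION_LEXICON = {
    "happy": [1.0, 0.0, 0.0, 0.0],
    "grief": [0.0, 1.0, 0.0, 0.0],
}
ZERO = [0.0, 0.0, 0.0, 0.0]

def fused_features(sentence: str) -> torch.Tensor:
    """Return a (seq_len, 768 + 4) matrix: context vector ++ emotion vector."""
    words = sentence.lower().split()
    inputs = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        context = model(**inputs).last_hidden_state[0]    # (seq_len, 768)
    # Align each subword token (and special token) with its word's emotion vector.
    emotion = torch.tensor([
        EMOTION_LEXICON.get(words[i], ZERO) if i is not None else ZERO
        for i in inputs.word_ids(batch_index=0)
    ])                                                    # (seq_len, 4)
    return torch.cat([context, emotion], dim=-1)          # (seq_len, 772)

print(fused_features("she could not hide her grief").shape)
```

A classifier trained on such concatenated vectors sees both what a word means in context and what it tends to express emotionally, which is the intuition behind combining the two feature spaces.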
Related papers
- Exploring speech style spaces with language models: Emotional TTS without emotion labels [8.288443063900825]
We propose a novel approach that leverages text awareness to acquire emotional styles without the need for explicit emotion labels or text prompts.
We present TEMOTTS, a two-stage framework for E-TTS that is trained without emotion labels and is capable of inference without auxiliary inputs.
arXiv Detail & Related papers (2024-05-18T23:21:39Z)
- Emotion-Aware Prosodic Phrasing for Expressive Text-to-Speech [47.02518401347879]
We propose an emotion-aware prosodic phrasing model, termed EmoPP, to accurately mine the emotional cues of an utterance and predict appropriate phrase breaks.
We first conduct objective observations on the ESD dataset to validate the strong correlation between emotion and prosodic phrasing.
Objective and subjective evaluations show that EmoPP outperforms all baselines and achieves remarkable performance in terms of emotion expressiveness.
arXiv Detail & Related papers (2023-09-21T01:51:10Z)
- Emotion and Sentiment Guided Paraphrasing [3.5027291542274366]
We introduce a new task of fine-grained emotional paraphrasing along emotion gradients.
We reconstruct several widely used paraphrasing datasets by augmenting the input and target texts with their fine-grained emotion labels.
We propose a framework for emotion and sentiment guided paraphrasing by leveraging pre-trained language models for conditioned text generation.
arXiv Detail & Related papers (2023-06-08T20:59:40Z)
- Automatic Emotion Experiencer Recognition [12.447379545167642]
We show that experiencer detection in text is a challenging task, with a precision of .82 and a recall of .56 (F1 = .66).
arXiv Detail & Related papers (2023-05-26T08:33:28Z)
- Experiencer-Specific Emotion and Appraisal Prediction [13.324006587838523]
Emotion classification in NLP assigns emotions to texts, such as sentences or paragraphs.
We focus on the experiencers of events, and assign an emotion (if any holds) to each of them.
Our experiencer-aware models of emotions and appraisals outperform the experiencer-agnostic baselines.
arXiv Detail & Related papers (2022-10-21T16:04:27Z)
- Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector (a toy sketch of such a mixture follows this list).
arXiv Detail & Related papers (2022-08-11T15:45:58Z)
- Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning [70.30713251031052]
We propose a data-driven deep learning model, i.e. StrengthNet, to improve the generalization of emotion strength assessment for seen and unseen speech.
Experiments show that the predicted emotion strength of the proposed StrengthNet is highly correlated with ground truth scores for both seen and unseen speech.
arXiv Detail & Related papers (2022-06-15T01:25:32Z)
- Emotion Intensity and its Control for Emotional Voice Conversion [77.05097999561298]
Emotional voice conversion (EVC) seeks to convert the emotional state of an utterance while preserving the linguistic content and speaker identity.
In this paper, we aim to explicitly characterize and control the intensity of emotion.
We propose to disentangle the speaker style from linguistic content and encode the speaker style into a style embedding in a continuous space that forms the prototype of emotion embedding.
arXiv Detail & Related papers (2022-01-10T02:11:25Z)
- A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
- PO-EMO: Conceptualization, Annotation, and Modeling of Aesthetic Emotions in German and English Poetry [26.172030802168752]
We consider emotions in poetry as they are elicited in the reader, rather than what is expressed in the text or intended by the author.
We conceptualize a set of aesthetic emotions that are predictive of aesthetic appreciation in the reader, and allow the annotation of multiple labels per line to capture mixed emotions within their context.
arXiv Detail & Related papers (2020-03-17T13:54:48Z)
- A Deep Neural Framework for Contextual Affect Detection [51.378225388679425]
A short and simple text that carries no emotion on its own can convey strong emotions when read together with its context.
We propose a Contextual Affect Detection framework which learns the inter-dependence of words in a sentence.
arXiv Detail & Related papers (2020-01-28T05:03:15Z)
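The run-time control described in the Speech Synthesis with Mixed Emotions entry above can be pictured as a weighted blend of per-emotion style embeddings. The sketch below is a hypothetical illustration, not that paper's implementation; the emotion set, embedding size, and the mix_emotions helper are assumptions:

```python
# Toy illustration of an "emotion attribute vector": blend base emotion
# style embeddings with user-chosen weights to condition a TTS decoder.
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprise"]
rng = np.random.default_rng(0)
# One style embedding per base emotion (random placeholders here; a real
# system would learn these during training).
emotion_embeddings = {e: rng.normal(size=64) for e in EMOTIONS}

def mix_emotions(attribute_vector: dict) -> np.ndarray:
    """Blend base emotion embeddings with the given mixture weights."""
    weights = np.array([attribute_vector.get(e, 0.0) for e in EMOTIONS])
    weights = weights / weights.sum()          # normalize to a mixture
    basis = np.stack([emotion_embeddings[e] for e in EMOTIONS])
    return weights @ basis                     # (64,) conditioning vector

# e.g. mostly sad with a touch of surprise:
style = mix_emotions({"sad": 0.8, "surprise": 0.2})
print(style.shape)
```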