SpanEmo: Casting Multi-label Emotion Classification as Span-prediction
- URL: http://arxiv.org/abs/2101.10038v1
- Date: Mon, 25 Jan 2021 12:11:04 GMT
- Title: SpanEmo: Casting Multi-label Emotion Classification as Span-prediction
- Authors: Hassan Alhuzali, Sophia Ananiadou
- Abstract summary: We propose a new model "SpanEmo" casting multi-label emotion classification as span-prediction.
We introduce a loss function focused on modelling multiple co-existing emotions in the input sentence.
Experiments performed on the SemEval2018 multi-label emotion data over three language sets demonstrate our method's effectiveness.
- Score: 15.41237087996244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotion recognition (ER) is an important task in Natural Language Processing
(NLP), due to its high impact in real-world applications from health and
well-being to author profiling, consumer analysis and security. Current
approaches to ER mainly classify emotions independently, without considering
that emotions can co-exist. Such approaches overlook potential ambiguities in
which multiple emotions overlap. We propose a new model "SpanEmo" casting
multi-label emotion classification as span-prediction, which can aid ER models
to learn associations between labels and words in a sentence. Furthermore, we
introduce a loss function focused on modelling multiple co-existing emotions in
the input sentence. Experiments performed on the SemEval2018 multi-label
emotion data over three language sets (i.e., English, Arabic and Spanish)
demonstrate our method's effectiveness. Finally, we present different analyses
that illustrate the benefits of our method in terms of improving the model
performance and learning meaningful associations between emotion classes and
words in the sentence.
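For intuition, here is a minimal sketch of the span-prediction framing in PyTorch: the emotion label names are fed into a pretrained encoder together with the sentence, and each label's token position is scored for presence. The head architecture, the mixing weight `alpha`, and the exact form of the pairwise term are illustrative assumptions, not the authors' released implementation.

```python
# Sketch only: assumes a HuggingFace encoder and batches where the label words
# occupy known token positions at the start of each input sequence.
import torch
import torch.nn as nn
from transformers import AutoModel

class SpanEmoSketch(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.scorer = nn.Sequential(
            nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, input_ids, attention_mask, label_positions):
        # label_positions: (batch, n_labels) token indices of the label words
        states = self.encoder(
            input_ids, attention_mask=attention_mask).last_hidden_state
        idx = label_positions.unsqueeze(-1).expand(-1, -1, states.size(-1))
        label_states = states.gather(1, idx)       # (batch, n_labels, hidden)
        return self.scorer(label_states).squeeze(-1)   # per-label logits

def multi_label_loss(logits, targets, alpha=0.2):
    """BCE plus a pairwise term pushing every present emotion to outscore
    every absent one, so co-existing emotions are modelled jointly."""
    bce = nn.functional.binary_cross_entropy_with_logits(logits, targets)
    pair_terms = []
    for s, y in zip(logits, targets):
        pos, neg = s[y.bool()], s[~y.bool()]
        if pos.numel() and neg.numel():
            pair_terms.append(
                torch.exp(neg.unsqueeze(1) - pos.unsqueeze(0)).mean())
    pair = torch.stack(pair_terms).mean() if pair_terms else logits.new_zeros(())
    return (1 - alpha) * bce + alpha * pair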
Related papers
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z)
- AffectEcho: Speaker Independent and Language-Agnostic Emotion and Affect Transfer for Speech Synthesis [13.918119853846838]
Affect is an emotional characteristic encompassing valence, arousal, and intensity, and is a crucial attribute for enabling authentic conversations.
We propose AffectEcho, an emotion translation model, that uses a Vector Quantized codebook to model emotions within a quantized space.
We demonstrate the effectiveness of our approach in controlling the emotions of generated speech while preserving identity, style, and emotional cadence unique to each speaker.
arXiv Detail & Related papers (2023-08-16T06:28:29Z)
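The AffectEcho summary above mentions a vector-quantized codebook for emotions. The toy sketch below shows only the generic VQ step (nearest-codebook lookup with a straight-through gradient); the codebook size, embedding dimension, and how AffectEcho wires this into speech synthesis are assumptions not given in the summary.

```python
# Generic vector-quantization step, illustrative sizes only.
import torch
import torch.nn as nn

class EmotionVQ(nn.Module):
    def __init__(self, num_codes=64, dim=128):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                            # z: (batch, dim)
        d = torch.cdist(z, self.codebook.weight)     # distances to all codes
        idx = d.argmin(dim=1)                        # nearest code per example
        q = self.codebook(idx)
        # Straight-through estimator: gradients flow to z as if the
        # quantization step were the identity function.
        return z + (q - z).detach(), idx
```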
- Leveraging Label Correlations in a Multi-label Setting: A Case Study in Emotion [0.0]
We exploit label correlations in multi-label emotion recognition models to improve emotion detection.
We demonstrate state-of-the-art performance across Spanish, English, and Arabic in SemEval 2018 Task 1 E-c using monolingual BERT-based models.
arXiv Detail & Related papers (2022-10-28T02:27:18Z)
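The entry above exploits label correlations. One simple, generic way to expose such correlations is a conditional co-occurrence matrix estimated from the training labels, as sketched below; whether the cited paper uses exactly this statistic is an assumption.

```python
# Illustrative label-correlation statistic for a multi-label dataset.
import numpy as np

def label_cooccurrence(Y):
    """Y: (n_samples, n_labels) binary matrix -> matrix of P(label j | label i)."""
    counts = Y.T @ Y                           # counts[i, j]: samples with both i and j
    per_label = np.diag(counts).astype(float)  # counts[i, i]: samples with label i
    return counts / np.clip(per_label[:, None], 1, None)
```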
- Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector.
arXiv Detail & Related papers (2022-08-11T15:45:58Z)
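The run-time control described above relies on a manually defined emotion attribute vector. A toy reading of that idea is a convex combination of per-emotion embeddings, sketched below with made-up emotion names and sizes; the paper's actual attribute vector and conditioning mechanism may differ.

```python
# Toy emotion mixing via an attribute vector; the embedding table would be
# learned in practice, random values here are placeholders.
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprise"]
emotion_embeddings = np.random.randn(len(EMOTIONS), 64)

def mixed_emotion_embedding(weights):
    """weights: dict emotion -> relative intensity, e.g. {'happy': .7, 'sad': .3}."""
    w = np.array([weights.get(e, 0.0) for e in EMOTIONS])
    w = w / w.sum()                 # normalize to a convex combination
    return w @ emotion_embeddings   # blended conditioning vector
```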
- The Emotion is Not One-hot Encoding: Learning with Grayscale Label for Emotion Recognition in Conversation [0.0]
In emotion recognition in conversation (ERC), the emotion of the current utterance is predicted by considering the previous context.
We introduce several methods for constructing grayscale labels and confirm that each method improves the emotion recognition performance.
arXiv Detail & Related papers (2022-06-15T08:14:42Z)
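Grayscale (soft) labels can be constructed in several ways, and the summary above does not say which the paper uses. The sketch below shows one common construction as an assumption: blending the one-hot target with a teacher model's predicted distribution over emotions.

```python
# One conceivable grayscale-label construction; 'blend' and the teacher
# distribution are illustrative choices.
import numpy as np

def grayscale_label(one_hot, teacher_probs, blend=0.5):
    """Soften a hard one-hot label with a teacher distribution."""
    soft = (1 - blend) * one_hot + blend * teacher_probs
    return soft / soft.sum()   # renormalize to a valid distribution
```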
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
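As a rough illustration of the Emotion Circle idea above, the sketch below places an emotion state on a circle, with the angle standing in for emotion type and the radius for intensity; the paper's actual three attributes and its handling of polarity may be defined differently.

```python
# Toy rendering of an "emotion vector" on a unit circle; attribute
# definitions here are assumptions, not the paper's exact formulation.
import math

def emotion_vector(type_angle_deg, intensity):
    """Return (x, y) coordinates of an emotion state on the Emotion Circle."""
    theta = math.radians(type_angle_deg)
    return (intensity * math.cos(theta), intensity * math.sin(theta))
```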
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and shows the state-of-the-art result for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
- Multi-Classifier Interactive Learning for Ambiguous Speech Emotion Recognition [9.856709988128515]
We propose a novel multi-classifier interactive learning (MCIL) method to address the ambiguous speech emotions.
MCIL mimics several individuals who have inconsistent cognitions of ambiguous emotions, and constructs new ambiguous labels.
Experiments show that MCIL not only improves each classifier's performance, but also raises their recognition consistency from moderate to substantial.
arXiv Detail & Related papers (2020-12-10T02:58:34Z)
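The "inconsistent annotators" intuition in MCIL can be illustrated by turning disagreement among several classifiers into a soft label, as below; the iterative interaction and retraining loop is omitted, and the function is a hypothetical stand-in rather than the paper's procedure.

```python
# Turn several classifiers' votes into a soft (ambiguous) label.
import numpy as np

def ambiguous_label(votes, n_classes):
    """votes: list of predicted class ids from several classifiers."""
    counts = np.bincount(votes, minlength=n_classes).astype(float)
    return counts / counts.sum()   # e.g. votes [0, 0, 1] -> [0.67, 0.33, 0.0]
```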
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)