Using Emotion Embeddings to Transfer Knowledge Between Emotions,
Languages, and Annotation Formats
- URL: http://arxiv.org/abs/2211.00171v1
- Date: Mon, 31 Oct 2022 22:32:36 GMT
- Title: Using Emotion Embeddings to Transfer Knowledge Between Emotions,
Languages, and Annotation Formats
- Authors: Georgios Chochlakis (1 and 2), Gireesh Mahajan (3), Sabyasachee Baruah
(1 and 2), Keith Burghardt (2), Kristina Lerman (2), Shrikanth Narayanan (1
and 2) ((1) Signal Analysis and Interpretation Lab, University of Southern
California, (2) Information Sciences Institute, University of Southern
California, (3) Microsoft Cognitive Services)
- Abstract summary: We show how to build a single model that can transition between different configurations.
We show that Demux can simultaneously transfer knowledge in a zero-shot manner to a new language, to a novel annotation format, and to unseen emotions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The need for emotional inference from text continues to diversify as more and
more disciplines integrate emotions into their theories and applications. These
needs include inferring different emotion types, handling multiple languages,
and supporting different annotation formats. A shared model between different
configurations would enable the sharing of knowledge and a decrease in training
costs, and would simplify the process of deploying emotion recognition models
in novel environments. In this work, we study how we can build a single model
that can transition between these different configurations by leveraging
multilingual models and Demux, a transformer-based model whose input includes
the emotions of interest, enabling us to dynamically change the emotions
predicted by the model. Demux also produces emotion embeddings, and performing
operations on them allows us to transition to clusters of emotions by pooling
the embeddings of each cluster. We show that Demux can simultaneously transfer
knowledge in a zero-shot manner to a new language, to a novel annotation format
and to unseen emotions. Code is available at
https://github.com/gchochla/Demux-MEmo.
Related papers
- A Unified and Interpretable Emotion Representation and Expression Generation [38.321248253111776]
We propose an interpretable and unified emotion model, referred to as C2A2.
We show that our generated images are rich and capture subtle expressions.
arXiv Detail & Related papers (2024-04-01T17:03:29Z) - Emotion Rendering for Conversational Speech Synthesis with Heterogeneous
Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z) - Leveraging Label Correlations in a Multi-label Setting: A Case Study in
Emotion [0.0]
We exploit label correlations in multi-label emotion recognition models to improve emotion detection.
We demonstrate state-of-the-art performance across Spanish, English, and Arabic in SemEval 2018 Task 1 E-c using monolingual BERT-based models.
arXiv Detail & Related papers (2022-10-28T02:27:18Z) - Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector.
arXiv Detail & Related papers (2022-08-11T15:45:58Z) - Emotion Recognition from Multiple Modalities: Fundamentals and
Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z) - Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and achieves state-of-the-art results in classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z) - SpanEmo: Casting Multi-label Emotion Classification as Span-prediction [15.41237087996244]
We propose a new model, "SpanEmo", that casts multi-label emotion classification as span-prediction.
We introduce a loss function focused on modelling multiple co-existing emotions in the input sentence.
Experiments performed on the SemEval 2018 multi-label emotion data across three language subsets demonstrate our method's effectiveness.
arXiv Detail & Related papers (2021-01-25T12:11:04Z) - Seen and Unseen emotional style transfer for voice conversion with a new
emotional speech dataset [84.53659233967225]
Emotional voice conversion aims to transform emotional prosody in speech while preserving the linguistic content and speaker identity.
We propose a novel framework based on a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN).
We show that the proposed framework achieves remarkable performance by consistently outperforming the baseline framework.
arXiv Detail & Related papers (2020-10-28T07:16:18Z) - Modality-Transferable Emotion Embeddings for Low-Resource Multimodal
Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle low-resource multimodal emotion recognition.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)