Fine-Grained Emotion Prediction by Modeling Emotion Definitions
- URL: http://arxiv.org/abs/2107.12135v1
- Date: Mon, 26 Jul 2021 12:11:18 GMT
- Title: Fine-Grained Emotion Prediction by Modeling Emotion Definitions
- Authors: Gargi Singh and Dhanajit Brahma and Piyush Rai and Ashutosh Modi
- Abstract summary: We propose a new framework for fine-grained emotion prediction in text through emotion definition modeling.
Our models outperform the existing state-of-the-art on the fine-grained emotion dataset GoEmotions.
- Score: 26.098917459551167
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we propose a new framework for fine-grained emotion prediction
in text through emotion definition modeling. Our approach uses a
multi-task learning framework that models definitions of emotions as an
auxiliary task while being trained on the primary task of emotion prediction.
We model definitions using masked language modeling and class definition
prediction tasks. Our models outperform the existing state-of-the-art on the
fine-grained emotion dataset GoEmotions. We further show that the trained
model can be used for transfer learning to other benchmark emotion prediction
datasets with varying emotion label sets, domains, and sizes. The proposed
models outperform the baselines in transfer learning experiments, demonstrating
their generalization capability.
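The abstract only sketches the training objective, so the following is a minimal, hypothetical illustration of such a multi-task setup: a shared Transformer encoder trained on emotion classification as the primary task, with masked language modeling over emotion definitions and class-definition prediction as auxiliary tasks. The class name, head layout, bert-base-uncased backbone, multi-label BCE loss, and loss weights are assumptions for illustration, not details taken from the paper.

```python
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel


class DefinitionAwareEmotionModel(nn.Module):
    """Hypothetical multi-task model: one shared encoder, three task heads."""

    def __init__(self, model_name="bert-base-uncased", num_emotions=28):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        vocab = self.encoder.config.vocab_size
        self.emotion_head = nn.Linear(hidden, num_emotions)  # primary: emotion prediction
        self.mlm_head = nn.Linear(hidden, vocab)             # auxiliary: masked LM over definitions
        self.def_head = nn.Linear(hidden, num_emotions)      # auxiliary: class definition prediction

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return out.last_hidden_state  # [batch, seq_len, hidden]


def multitask_step(model, text_batch, defn_batch, w_mlm=0.5, w_def=0.5):
    """Weighted sum of the primary loss and the two auxiliary losses
    (the weights are assumptions, not values from the paper)."""
    # Primary task: multi-label emotion prediction on the input text
    # (GoEmotions-style taxonomy: 27 emotions plus neutral).
    text_hidden = model(text_batch["input_ids"], text_batch["attention_mask"])
    emo_loss = F.binary_cross_entropy_with_logits(
        model.emotion_head(text_hidden[:, 0]),
        text_batch["emotion_labels"].float())

    # Auxiliary tasks are computed on (partially masked) emotion-definition text.
    defn_hidden = model(defn_batch["input_ids"], defn_batch["attention_mask"])
    mlm_loss = F.cross_entropy(
        model.mlm_head(defn_hidden).transpose(1, 2),  # [batch, vocab, seq_len]
        defn_batch["mlm_labels"], ignore_index=-100)  # -100 marks unmasked tokens
    def_loss = F.cross_entropy(
        model.def_head(defn_hidden[:, 0]),            # which emotion does this definition describe?
        defn_batch["definition_class"])

    return emo_loss + w_mlm * mlm_loss + w_def * def_loss
```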
Related papers
- MEMO-Bench: A Multiple Benchmark for Text-to-Image and Multimodal Large Language Models on Human Emotion Analysis [53.012111671763776]
This study introduces MEMO-Bench, a comprehensive benchmark consisting of 7,145 portraits, each depicting one of six different emotions.
Results demonstrate that existing T2I models are more effective at generating positive emotions than negative ones.
Although MLLMs show a certain degree of effectiveness in distinguishing and recognizing human emotions, they fall short of human-level accuracy.
arXiv Detail & Related papers (2024-11-18T02:09:48Z)
- Emotion Detection in Reddit: Comparative Study of Machine Learning and Deep Learning Techniques [0.0]
This study concentrates on text-based emotion detection by leveraging the GoEmotions dataset.
We employed a range of models for this task, including six machine learning models, three ensemble models, and a Long Short-Term Memory (LSTM) model.
Results indicate that the Stacking classifier outperforms other models in accuracy and performance.
arXiv Detail & Related papers (2024-11-15T16:28:25Z)
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not treated as salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- Using Emotion Embeddings to Transfer Knowledge Between Emotions, Languages, and Annotation Formats [0.0]
We show how we can build a single model that can transition between different configurations.
We show that Demux can simultaneously transfer knowledge in a zero-shot manner to a new language.
arXiv Detail & Related papers (2022-10-31T22:32:36Z)
- MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition [118.73025093045652]
We propose a pre-training model MEmoBERT for multimodal emotion recognition.
Unlike the conventional "pre-train, finetune" paradigm, we propose a prompt-based method that reformulates the downstream emotion classification task as a masked text prediction.
Our proposed MEmoBERT significantly enhances emotion recognition performance.
arXiv Detail & Related papers (2021-10-27T09:57:00Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and achieves state-of-the-art results for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
- Facial Expression Editing with Continuous Emotion Labels [76.36392210528105]
Deep generative models have achieved impressive results in the field of automated facial expression editing.
We propose a model that can be used to manipulate facial expressions in facial images according to continuous two-dimensional emotion labels.
arXiv Detail & Related papers (2020-06-22T13:03:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.