Multitask Learning for Emotion and Personality Detection
- URL: http://arxiv.org/abs/2101.02346v1
- Date: Thu, 7 Jan 2021 03:09:55 GMT
- Title: Multitask Learning for Emotion and Personality Detection
- Authors: Yang Li, Amirmohammad Kazameini, Yash Mehta, Erik Cambria
- Abstract summary: We build on the known correlation between personality traits and emotional behaviors, and propose a novel multitask learning framework, SoGMTL.
Our more computationally efficient CNN-based multitask model achieves state-of-the-art performance across multiple well-known personality and emotion datasets.
- Score: 17.029426018676997
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In recent years, deep learning-based automated personality trait detection
has received considerable attention, especially given the massive digital footprint
individuals now leave online. Moreover, many researchers have demonstrated that
there is a strong link between personality traits and emotions. In this paper,
we build on the known correlation between personality traits and emotional
behaviors and propose a novel multitask learning framework, SoGMTL, that
simultaneously predicts both. We also empirically evaluate and discuss
different information-sharing mechanisms between the two tasks. To ensure a
high-quality learning process, we adopt a MAML-like framework for model
optimization. Our more computationally efficient CNN-based multitask model
achieves state-of-the-art performance across multiple well-known personality
and emotion datasets, even outperforming language-model-based approaches.
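To make the setup concrete, below is a minimal sketch, assuming a PyTorch implementation, of a CNN-based multitask text model with a shared convolutional encoder and separate personality and emotion heads. This is not the authors' code: SoGMTL's inter-task information-sharing mechanism and the MAML-like optimization loop are not reproduced here, and all layer sizes, class counts, and names are illustrative assumptions.

```python
# Minimal sketch of a CNN-based multitask model with a shared text encoder
# and separate personality / emotion heads. Layer sizes, names, and the
# simple shared-representation design are illustrative assumptions, not the
# SoGMTL information-sharing mechanism described in the paper.
import torch
import torch.nn as nn

class MultitaskCNN(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=300,
                 n_filters=128, kernel_sizes=(3, 4, 5),
                 n_traits=5, n_emotions=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared convolutional feature extractor over token embeddings.
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes
        )
        feat_dim = n_filters * len(kernel_sizes)
        # Task-specific heads: Big Five traits and emotion classes.
        self.personality_head = nn.Linear(feat_dim, n_traits)
        self.emotion_head = nn.Linear(feat_dim, n_emotions)

    def forward(self, token_ids):
        x = self.embed(token_ids).transpose(1, 2)  # (B, emb_dim, seq_len)
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        shared = torch.cat(feats, dim=1)            # shared representation
        return self.personality_head(shared), self.emotion_head(shared)

model = MultitaskCNN()
personality_logits, emotion_logits = model(torch.randint(0, 30000, (8, 100)))
# Joint loss over both tasks (binary traits + multi-class emotions).
loss = (nn.BCEWithLogitsLoss()(personality_logits, torch.rand(8, 5).round())
        + nn.CrossEntropyLoss()(emotion_logits, torch.randint(0, 6, (8,))))
loss.backward()
```

In the paper's framework, the single shared encoder above would presumably be replaced by the proposed information-sharing mechanism between the two task-specific networks, and the joint loss would be optimized with the MAML-like procedure rather than a plain gradient step.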
Related papers
- MEMO-Bench: A Multiple Benchmark for Text-to-Image and Multimodal Large Language Models on Human Emotion Analysis [53.012111671763776]
This study introduces MEMO-Bench, a comprehensive benchmark consisting of 7,145 portraits, each depicting one of six different emotions.
Results demonstrate that existing T2I models are more effective at generating positive emotions than negative ones.
Although MLLMs show a certain degree of effectiveness in distinguishing and recognizing human emotions, they fall short of human-level accuracy.
arXiv Detail & Related papers (2024-11-18T02:09:48Z) - PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z) - SEGAA: A Unified Approach to Predicting Age, Gender, and Emotion in
Speech [0.0]
This study ventures into predicting age, gender, and emotion from vocal cues, a field with vast applications.
This paper compares single, multi-output, and sequential deep learning models for these predictions.
The experiments suggest that multi-output models perform comparably to individual models, efficiently capturing the intricate relationships between the variables and speech inputs while achieving improved runtime.
arXiv Detail & Related papers (2024-03-01T11:28:37Z) - MMToM-QA: Multimodal Theory of Mind Question Answering [80.87550820953236]
Theory of Mind (ToM) is an essential ingredient for developing machines with human-level social intelligence.
Recent machine learning models, particularly large language models, seem to show some aspects of ToM understanding.
Human ToM, on the other hand, is more than video or text understanding.
People can flexibly reason about another person's mind based on conceptual representations extracted from any available data.
arXiv Detail & Related papers (2024-01-16T18:59:24Z) - A Multi-Task, Multi-Modal Approach for Predicting Categorical and
Dimensional Emotions [0.0]
We propose a multi-task, multi-modal system that predicts categorical and dimensional emotions.
Results emphasise the importance of cross-regularisation between the two types of emotions.
arXiv Detail & Related papers (2023-12-31T16:48:03Z) - Personality-aware Human-centric Multimodal Reasoning: A New Task,
Dataset and Baselines [32.82738983843281]
We introduce a new task called Personality-aware Human-centric Multimodal Reasoning (PHMR) (T1).
The goal of the task is to forecast the future behavior of a particular individual using multimodal information from past instances, while integrating personality factors.
The experimental results demonstrate that incorporating personality traits enhances human-centric multimodal reasoning performance.
arXiv Detail & Related papers (2023-04-05T09:09:10Z) - DIME: Fine-grained Interpretations of Multimodal Models via Disentangled
Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z) - MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal
Emotion Recognition [118.73025093045652]
We propose a pre-training model, MEmoBERT, for multimodal emotion recognition.
Unlike the conventional "pre-train, finetune" paradigm, we propose a prompt-based method that reformulates the downstream emotion classification task as a masked text prediction.
Our proposed MEmoBERT significantly enhances emotion recognition performance.
arXiv Detail & Related papers (2021-10-27T09:57:00Z) - M2Lens: Visualizing and Explaining Multimodal Models for Sentiment
Analysis [28.958168542624062]
We present an interactive visual analytics system, M2Lens, to visualize and explain multimodal models for sentiment analysis.
M2Lens provides explanations on intra- and inter-modal interactions at the global, subset, and local levels.
arXiv Detail & Related papers (2021-07-17T15:54:27Z) - Two-Faced Humans on Twitter and Facebook: Harvesting Social Multimedia
for Human Personality Profiling [74.83957286553924]
We infer Myers-Briggs Personality Type indicators by applying a novel multi-view fusion framework called "PERS".
Our experimental results demonstrate PERS's ability to learn from multi-view data for personality profiling by efficiently leveraging the significantly different data arriving from diverse social multimedia sources.
arXiv Detail & Related papers (2021-06-20T10:48:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.