MUSER: MUltimodal Stress Detection using Emotion Recognition as an
Auxiliary Task
- URL: http://arxiv.org/abs/2105.08146v1
- Date: Mon, 17 May 2021 20:24:46 GMT
- Title: MUSER: MUltimodal Stress Detection using Emotion Recognition as an
Auxiliary Task
- Authors: Yiqun Yao, Michalis Papakostas, Mihai Burzo, Mohamed Abouelenien, Rada
Mihalcea
- Abstract summary: Stress and emotion are both human affective states, and stress has proven to have important implications for the regulation and expression of emotion.
In this work, we investigate the value of emotion recognition as an auxiliary task to improve stress detection.
We propose MUSER -- a transformer-based model architecture and a novel multi-task learning algorithm with a speed-based dynamic sampling strategy.
- Score: 22.80682208862559
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The capability to automatically detect human stress can benefit
artificially intelligent agents involved in affective computing and
human-computer interaction. Stress and emotion are both human affective
states, and stress has proven to have important implications for the
regulation and expression of emotion. Although a series of methods have been
established for multimodal
stress detection, limited steps have been taken to explore the underlying
inter-dependence between stress and emotion. In this work, we investigate the
value of emotion recognition as an auxiliary task to improve stress detection.
We propose MUSER -- a transformer-based model architecture and a novel
multi-task learning algorithm with a speed-based dynamic sampling strategy.
Evaluations on the Multimodal Stressed Emotion (MuSE) dataset show that our
model is effective for stress detection with both internal and external
auxiliary tasks, and achieves state-of-the-art results.
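The speed-based dynamic sampling decides, at each training step, whether the next batch comes from the main stress-detection task or the auxiliary emotion-recognition task. The abstract does not spell out the exact rule, so the sketch below is only one plausible reading, assuming that tasks whose validation metric improves more slowly are sampled more often; the names (`speed_based_task_sampling`, `val_history`) are hypothetical.

```python
import random

def speed_based_task_sampling(tasks, val_history, window=2):
    """Weight each task inversely to how fast its validation metric has
    improved recently (an illustrative reading of speed-based dynamic
    sampling, not the authors' exact algorithm)."""
    weights = []
    for task in tasks:
        hist = val_history[task]
        if len(hist) < window + 1:
            speed = 1.0  # not enough history yet: neutral weight
        else:
            # average improvement per evaluation over the last `window` steps
            speed = max((hist[-1] - hist[-1 - window]) / window, 1e-6)
        weights.append(1.0 / speed)  # slower-improving tasks sampled more
    total = sum(weights)
    return [w / total for w in weights]

# usage: choose which task supplies the next training batch
tasks = ["stress", "emotion"]
val_history = {"stress": [0.60, 0.62, 0.63], "emotion": [0.50, 0.56, 0.61]}
probs = speed_based_task_sampling(tasks, val_history)
next_task = random.choices(tasks, weights=probs, k=1)[0]
```

Under this rule, whichever task is currently improving more slowly receives more gradient updates, keeping the main and auxiliary tasks progressing at comparable rates.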
Related papers
- StressPrompt: Does Stress Impact Large Language Models and Human Performance Similarly? [7.573284169975824]
This study explores whether Large Language Models (LLMs) exhibit stress responses similar to those of humans.
We developed a novel set of prompts, termed StressPrompt, designed to induce varying levels of stress.
The findings suggest that LLMs, like humans, perform optimally under moderate stress, consistent with the Yerkes-Dodson law.
arXiv Detail & Related papers (2024-09-14T08:32:31Z)
- Self-supervised Gait-based Emotion Representation Learning from Selective Strongly Augmented Skeleton Sequences [4.740624855896404]
We propose a contrastive learning framework utilizing selective strong augmentation for self-supervised gait-based emotion representation.
Our approach is validated on the Emotion-Gait (E-Gait) and Emilya datasets and outperforms the state-of-the-art methods under different evaluation protocols.
arXiv Detail & Related papers (2024-05-08T09:13:10Z)
- Dynamic Modality and View Selection for Multimodal Emotion Recognition with Missing Modalities [46.543216927386005]
Multiple channels, such as speech (voice) and facial expressions (image), are crucial in understanding human emotions.
One significant hurdle is how AI models manage the absence of a particular modality.
This study's central focus is assessing the performance and resilience of two strategies when one modality is missing.
arXiv Detail & Related papers (2024-04-18T15:18:14Z)
- Functional Graph Contrastive Learning of Hyperscanning EEG Reveals Emotional Contagion Evoked by Stereotype-Based Stressors [1.8925617030516924]
This study focuses on the context of stereotype-based stress (SBS) during collaborative problem-solving tasks among female pairs.
Through an exploration of emotional contagion, this study seeks to unveil its underlying mechanisms and effects.
arXiv Detail & Related papers (2023-08-22T09:04:14Z)
- Large Language Models Understand and Can be Enhanced by Emotional Stimuli [53.53886609012119]
We take the first step towards exploring the ability of Large Language Models to understand emotional stimuli.
Our experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts.
Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks.
arXiv Detail & Related papers (2023-07-14T00:57:12Z)
- Employing Multimodal Machine Learning for Stress Detection [8.430502131775722]
Mental wellness is one of the most neglected yet crucial concerns in today's world.
In this work, a multimodal AI-based framework is proposed to monitor a person's working behavior and stress levels.
arXiv Detail & Related papers (2023-06-15T14:34:16Z)
- Multimodal Feature Extraction and Fusion for Emotional Reaction Intensity Estimation and Expression Classification in Videos with Transformers [47.16005553291036]
We present our solutions to the two sub-challenges of Affective Behavior Analysis in the wild (ABAW) 2023.
For the Expression Classification Challenge, we propose a streamlined approach that handles the challenges of classification effectively.
By studying, analyzing, and combining these features, we significantly enhance the model's accuracy for sentiment prediction in a multimodal context.
arXiv Detail & Related papers (2023-03-16T09:03:17Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities; a minimal late-fusion sketch appears after this list.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Emotion-aware Chat Machine: Automatic Emotional Response Generation for Human-like Emotional Interaction [55.47134146639492]
This article proposes a unified end-to-end neural architecture, which is capable of simultaneously encoding the semantics and the emotions in a post.
Experiments on real-world data demonstrate that the proposed method outperforms the state-of-the-art methods in terms of both content coherence and emotion appropriateness.
arXiv Detail & Related papers (2021-06-06T06:26:15Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
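The late-fusion entry above (Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models) combines modality-specific models at the decision level. As a rough illustration, not the paper's implementation, the PyTorch sketch below gives each modality its own classifier and averages the per-class probabilities; the linear encoders and all dimensions are placeholders for the pretrained speech and BERT-based text models.

```python
import torch
import torch.nn as nn

class LateFusionEmotionClassifier(nn.Module):
    """Illustrative late-fusion sketch: each modality gets its own encoder
    and classification head, and the per-modality predictions are averaged.
    The small MLPs below stand in for the paper's pretrained models."""

    def __init__(self, speech_dim=512, text_dim=768, hidden=256, n_emotions=4):
        super().__init__()
        self.speech_head = nn.Sequential(
            nn.Linear(speech_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_emotions))
        self.text_head = nn.Sequential(
            nn.Linear(text_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_emotions))

    def forward(self, speech_feats, text_feats):
        # late fusion: combine class probabilities, not hidden features
        p_speech = torch.softmax(self.speech_head(speech_feats), dim=-1)
        p_text = torch.softmax(self.text_head(text_feats), dim=-1)
        return (p_speech + p_text) / 2

# usage with dummy pre-extracted features for a batch of 8 utterances
model = LateFusionEmotionClassifier()
probs = model(torch.randn(8, 512), torch.randn(8, 768))
```

Averaging class probabilities is only one late-fusion rule; weighted averaging or a stacked meta-classifier fit the same skeleton.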
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.