K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations
- URL: http://arxiv.org/abs/2005.04120v2
- Date: Tue, 19 May 2020 08:25:29 GMT
- Title: K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations
- Authors: Cheul Young Park, Narae Cha, Soowon Kang, Auk Kim, Ahsan Habib
Khandoker, Leontios Hadjileontiadis, Alice Oh, Yong Jeong, Uichin Lee
- Abstract summary: K-EmoCon is a novel dataset with comprehensive annotations of continuous emotions during naturalistic conversations.
The dataset contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals.
It includes emotion annotations from all three available perspectives: self, debate partner, and external observers.
- Score: 19.350031493515562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recognizing emotions during social interactions has many potential
applications with the popularization of low-cost mobile sensors, but a
challenge remains with the lack of naturalistic affective interaction data.
Most existing emotion datasets do not support studying idiosyncratic emotions
arising in the wild as they were collected in constrained environments.
Therefore, studying emotions in the context of social interactions requires a
novel dataset, and K-EmoCon is such a multimodal dataset with comprehensive
annotations of continuous emotions during naturalistic conversations. The
dataset contains multimodal measurements, including audiovisual recordings,
EEG, and peripheral physiological signals, acquired with off-the-shelf devices
from 16 sessions of approximately 10-minute-long paired debates on a social
issue. Distinct from previous datasets, it includes emotion annotations from
all three available perspectives: self, debate partner, and external observers.
Raters annotated emotional displays at 5-second intervals while
viewing the debate footage, in terms of arousal-valence and 18 additional
categorical emotions. The resulting K-EmoCon is the first publicly available
emotion dataset accommodating the multiperspective assessment of emotions
during social interactions.
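As a rough illustration of how interval-based annotations like these might be paired with a sensor stream, here is a minimal Python sketch; the file names and column names (time_s, value, window_start_s, rater, arousal, valence) are assumptions for illustration, not the schema of the actual K-EmoCon release.

```python
# Hypothetical sketch: align 5-second arousal-valence annotations with a
# peripheral physiological signal (e.g., heart rate). File names and column
# names are assumptions, not the dataset's published schema.
import pandas as pd

def align_annotations(signal_csv: str, annotation_csv: str) -> pd.DataFrame:
    signal = pd.read_csv(signal_csv)      # assumed columns: time_s, value
    annots = pd.read_csv(annotation_csv)  # assumed columns: window_start_s, rater, arousal, valence

    # Bucket each signal sample into the 5-second annotation window it falls into.
    signal["window_start_s"] = (signal["time_s"] // 5) * 5

    # Average the signal per window, then join the ratings so that each window
    # carries its self, partner, and external-observer annotations.
    per_window = signal.groupby("window_start_s")["value"].mean().reset_index()
    return per_window.merge(annots, on="window_start_s", how="inner")

# Example: merged = align_annotations("hr_participant01.csv", "annotations_participant01.csv")
```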
Related papers
- SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations [53.60993109543582]
SemEval-2024 Task 3, named Multimodal Emotion Cause Analysis in Conversations, aims at extracting all pairs of emotions and their corresponding causes from conversations.
Under different modality settings, it consists of two subtasks: Textual Emotion-Cause Pair Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE).
In this paper, we introduce the task, dataset and evaluation settings, summarize the systems of the top teams, and discuss the findings of the participants.
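To make the pair-extraction objective concrete, the following is a small illustrative sketch of an input conversation and a predicted output; the utterances, labels, and field names are invented for illustration and are not taken from the task data.

```python
# Illustrative only: a conversation as a list of utterances, with the target
# output expressed as (emotion utterance, emotion label, cause utterance) pairs.
conversation = [
    {"idx": 0, "speaker": "A", "text": "I failed the driving test again."},
    {"idx": 1, "speaker": "B", "text": "Oh no, that's rough."},
    {"idx": 2, "speaker": "A", "text": "I'm really frustrated with myself."},
]

# A system for the textual subtask (TECPE) might predict pairs like:
predicted_pairs = [
    {"emotion_utterance": 2, "emotion": "frustration", "cause_utterance": 0},
]
```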
arXiv Detail & Related papers (2024-05-19T09:59:00Z)
- Personality-affected Emotion Generation in Dialog Systems [67.40609683389947]
We propose a new task, Personality-affected Emotion Generation, to generate emotion based on the personality given to the dialog system.
We analyze the challenges in this task, i.e., (1) heterogeneously integrating personality and emotional factors and (2) extracting multi-granularity emotional information in the dialog context.
Results suggest that adopting our method improves emotion generation performance by 13% in macro-F1 and 5% in weighted-F1 over the BERT-base model.
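For reference, the two metrics quoted above differ only in how per-class F1 scores are averaged; the following minimal scikit-learn snippet shows the distinction on made-up labels.

```python
# Macro-F1 averages per-class F1 scores equally; weighted-F1 weights each
# class by its support, so frequent emotions count more. Labels are invented.
from sklearn.metrics import f1_score

y_true = ["joy", "anger", "joy", "sadness", "joy", "anger"]
y_pred = ["joy", "joy",   "joy", "sadness", "anger", "anger"]

print("macro-F1:   ", f1_score(y_true, y_pred, average="macro"))
print("weighted-F1:", f1_score(y_true, y_pred, average="weighted"))
```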
arXiv Detail & Related papers (2024-04-03T08:48:50Z)
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with the features that models deem salient when predicting emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion prediction models largely do not treat emotion triggers as salient features; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- Dynamic Causal Disentanglement Model for Dialogue Emotion Detection [77.96255121683011]
We propose a Dynamic Causal Disentanglement Model based on hidden variable separation.
This model effectively decomposes the content of dialogues and investigates the temporal accumulation of emotions.
Specifically, we propose a dynamic temporal disentanglement model to infer the propagation of utterances and hidden variables.
arXiv Detail & Related papers (2023-09-13T12:58:09Z)
- WEARS: Wearable Emotion AI with Real-time Sensor data [0.8740570557632509]
We propose a system to predict user emotion using smartwatch sensors.
We design a framework to collect ground truth in real-time utilizing a mix of English and regional language-based videos.
We also conducted an ablation study to understand the impact of features, including heart rate, accelerometer, and gyroscope sensor data, on mood prediction.
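The following is a hedged sketch of the kind of windowed feature extraction such a system might apply to smartwatch streams before mood classification; the feature set and array shapes are assumptions, not the paper's actual pipeline.

```python
# Hypothetical feature extraction over one fixed-length window of smartwatch
# data. Window length, sampling rates, and features are illustrative assumptions.
import numpy as np

def window_features(hr: np.ndarray, accel: np.ndarray, gyro: np.ndarray) -> np.ndarray:
    """hr: shape (n,); accel, gyro: shape (n, 3), all covering one window."""
    accel_mag = np.linalg.norm(accel, axis=1)
    gyro_mag = np.linalg.norm(gyro, axis=1)
    return np.array([
        hr.mean(), hr.std(),                # heart-rate level and variability
        accel_mag.mean(), accel_mag.std(),  # overall movement intensity
        gyro_mag.mean(), gyro_mag.std(),    # rotation intensity
    ])

# An ablation study would drop one sensor's features at a time and compare
# the resulting mood-classification performance.
```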
arXiv Detail & Related papers (2023-08-22T11:03:00Z)
- EmoSet: A Large-scale Visual Emotion Dataset with Rich Attributes [53.95428298229396]
We introduce EmoSet, the first large-scale visual emotion dataset annotated with rich attributes.
EmoSet comprises 3.3 million images in total, with 118,102 of these images carefully labeled by human annotators.
Motivated by psychological studies, in addition to emotion category, each image is also annotated with a set of describable emotion attributes.
arXiv Detail & Related papers (2023-07-16T06:42:46Z)
- The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional Reactions, and Stress [71.06453250061489]
The Multimodal Sentiment Analysis Challenge (MuSe) 2022 is dedicated to multimodal sentiment and emotion recognition.
For this year's challenge, we feature three datasets: (i) the Passau Spontaneous Football Coach Humor dataset that contains audio-visual recordings of German football coaches, labelled for the presence of humour; (ii) the Hume-Reaction dataset in which reactions of individuals to emotional stimuli have been annotated with respect to seven emotional expression intensities; and (iii) the Ulm-Trier Social Stress Test dataset comprising audio-visual data labelled with continuous emotion values of people in stressful dispositions.
arXiv Detail & Related papers (2022-06-23T13:34:33Z)
- EmoInHindi: A Multi-label Emotion and Intensity Annotated Dataset in Hindi for Emotion Recognition in Dialogues [44.79509115642278]
We create a large conversational dataset in Hindi named EmoInHindi for multi-label emotion and intensity recognition in conversations.
We prepare our dataset in a Wizard-of-Oz manner for mental health and legal counselling of crime victims.
arXiv Detail & Related papers (2022-05-27T11:23:50Z)
- Infusing Multi-Source Knowledge with Heterogeneous Graph Neural Network for Emotional Conversation Generation [25.808037796936766]
In a real-world conversation, we instinctively perceive emotions from multi-source information.
We propose a heterogeneous graph-based model for emotional conversation generation.
Experimental results show that our model can effectively perceive emotions from multi-source knowledge.
arXiv Detail & Related papers (2020-12-09T06:09:31Z)
- Temporal aggregation of audio-visual modalities for emotion recognition [0.5352699766206808]
We propose a multimodal fusion technique for emotion recognition based on combining audio-visual modalities from a temporal window with different temporal offsets for each modality.
Our proposed method outperforms other methods from the literature as well as human accuracy ratings.
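A minimal sketch of what per-modality temporal offsets could look like in practice, assuming precomputed per-frame feature arrays; the offsets, window length, and pooling choice are illustrative assumptions rather than the paper's method.

```python
# Illustrative fusion with a different temporal offset per modality: audio
# features are taken from a slightly earlier window than video features
# before being pooled and concatenated. Shapes and offsets are assumptions.
import numpy as np

def fuse_with_offsets(audio_feats: np.ndarray, video_feats: np.ndarray,
                      t: int, window: int = 8,
                      audio_offset: int = 2, video_offset: int = 0) -> np.ndarray:
    """audio_feats, video_feats: (num_frames, feat_dim) arrays aligned in time."""
    a = audio_feats[t - audio_offset - window : t - audio_offset]
    v = video_feats[t - video_offset - window : t - video_offset]
    # Pool each modality over its own window, then concatenate for a classifier.
    return np.concatenate([a.mean(axis=0), v.mean(axis=0)])
```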
arXiv Detail & Related papers (2020-07-08T18:44:15Z)
- Context Based Emotion Recognition using EMOTIC Dataset [22.631542327834595]
We present EMOTIC, a dataset of images of people annotated with their apparent emotion.
Using the EMOTIC dataset we train different CNN models for emotion recognition.
Our results show how scene context provides important information to automatically recognize emotional states.
arXiv Detail & Related papers (2020-03-30T12:38:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.