"Are you okay, honey?": Recognizing Emotions among Couples Managing
Diabetes in Daily Life using Multimodal Real-World Smartwatch Data
- URL: http://arxiv.org/abs/2208.08909v2
- Date: Mon, 22 Aug 2022 22:36:57 GMT
- Title: "Are you okay, honey?": Recognizing Emotions among Couples Managing
Diabetes in Daily Life using Multimodal Real-World Smartwatch Data
- Authors: George Boateng, Xiangyu Zhao, Malgorzata Speichert, Elgar Fleisch,
Janina L\"uscher, Theresa Pauly, Urte Scholz, Guy Bodenmann, Tobias Kowatsch
- Abstract summary: Couples generally manage chronic diseases together and the management takes an emotional toll on both patients and their romantic partners.
Recognizing the emotions of each partner in daily life could provide an insight into their emotional well-being in chronic disease management.
We extracted physiological, movement, acoustic, and linguistic features, and trained machine learning models to recognize each partner's self-reported emotions.
- Score: 8.355190969810305
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Couples generally manage chronic diseases together and the management takes
an emotional toll on both patients and their romantic partners. Consequently,
recognizing the emotions of each partner in daily life could provide an insight
into their emotional well-being in chronic disease management. Currently, the
process of assessing each partner's emotions is manual, time-intensive, and
costly. Despite the existence of works on emotion recognition among couples,
none of these works have used data collected from couples' interactions in
daily life. In this work, we collected 85 hours (1,021 5-minute samples) of
real-world multimodal smartwatch sensor data (speech, heart rate,
accelerometer, and gyroscope) and self-reported emotion data (n=612) from 26
partners (13 couples) managing diabetes mellitus type 2 in daily life. We
extracted physiological, movement, acoustic, and linguistic features, and
trained machine learning models (support vector machine and random forest) to
recognize each partner's self-reported emotions (valence and arousal). Our
results from the best models (balanced accuracies of 63.8% and 78.1% for
arousal and valence, respectively) are better than chance and better than our
prior work that also used data from German-speaking, Swiss-based couples,
albeit collected in the lab. This work contributes toward building automated emotion recognition
systems that would eventually enable partners to monitor their emotions in
daily life and enable the delivery of interventions to improve their emotional
well-being.
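To make the modeling pipeline in the abstract concrete, below is a minimal sketch (not the authors' code) of how support vector machine and random forest classifiers can be trained on per-sample multimodal feature vectors and scored with balanced accuracy, the metric reported above. The synthetic features, binary label, participant-grouped cross-validation, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the modeling setup described in the abstract: one feature
# vector per 5-minute sample, SVM and random-forest classifiers, and balanced
# accuracy as the metric. Data, feature width, and hyperparameters are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_features = 612, 40                  # ~one row per self-report; width is hypothetical
X = rng.normal(size=(n_samples, n_features))     # physiological, movement, acoustic, linguistic features
y = rng.integers(0, 2, size=n_samples)           # binary label, e.g. low vs. high valence
groups = rng.integers(0, 26, size=n_samples)     # partner ID, so folds never mix one person's samples

models = {
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced")),
    "random_forest": RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0),
}

for name, model in models.items():
    fold_scores = []
    for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
        model.fit(X[train_idx], y[train_idx])
        fold_scores.append(balanced_accuracy_score(y[test_idx], model.predict(X[test_idx])))
    print(f"{name}: balanced accuracy = {np.mean(fold_scores):.3f}")
```

Grouping folds by participant (rather than splitting samples at random) is a common precaution in wearable-sensing studies so a model is always evaluated on people it has not seen; the abstract does not state which split the authors used, so the GroupKFold choice here is an assumption.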
Related papers
- SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations [53.60993109543582]
SemEval-2024 Task 3, named Multimodal Emotion Cause Analysis in Conversations, aims at extracting all pairs of emotions and their corresponding causes from conversations.
Under different modality settings, it consists of two subtasks: Textual Emotion-Cause Pair Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE).
In this paper, we introduce the task, dataset and evaluation settings, summarize the systems of the top teams, and discuss the findings of the participants.
arXiv Detail & Related papers (2024-05-19T09:59:00Z)
- Personality-affected Emotion Generation in Dialog Systems [67.40609683389947]
We propose a new task, Personality-affected Emotion Generation, to generate emotion based on the personality given to the dialog system.
We analyze the challenges in this task, i.e., (1) heterogeneously integrating personality and emotional factors and (2) extracting multi-granularity emotional information in the dialog context.
Results suggest that our method improves emotion generation performance by 13% in macro-F1 and 5% in weighted-F1 over the BERT-base model.
arXiv Detail & Related papers (2024-04-03T08:48:50Z)
- WEARS: Wearable Emotion AI with Real-time Sensor data [0.8740570557632509]
We propose a system to predict user emotion using smartwatch sensors.
We design a framework to collect ground truth in real-time utilizing a mix of English and regional language-based videos.
We also conducted an ablation study to understand the impact of features, including heart rate, accelerometer, and gyroscope sensor data, on mood.
arXiv Detail & Related papers (2023-08-22T11:03:00Z)
- Large Language Models Understand and Can be Enhanced by Emotional
Stimuli [53.53886609012119]
We take the first step towards exploring the ability of Large Language Models to understand emotional stimuli.
Our experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts.
Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks.
arXiv Detail & Related papers (2023-07-14T00:57:12Z)
- Multimodal Emotion Recognition among Couples from Lab Settings to Daily
Life using Smartwatches [2.4366811507669124]
Recognizing the emotions of each partner in daily life could provide an insight into their emotional well-being in chronic disease management.
Currently, there exists no comprehensive overview of works on emotion recognition among couples.
This thesis contributes toward building automated emotion recognition systems that would eventually enable partners to monitor their emotions in daily life.
arXiv Detail & Related papers (2022-12-21T16:41:11Z)
- Face Emotion Recognization Using Dataset Augmentation Based on Neural
Network [0.0]
Facial expression is one of the most direct outward indications of a person's feelings and emotions.
It plays an important role in coordinating interpersonal relationships.
As a branch of sentiment analysis, facial expression recognition offers broad application prospects.
arXiv Detail & Related papers (2022-10-23T10:21:45Z)
- The MuSe 2022 Multimodal Sentiment Analysis Challenge: Humor, Emotional
Reactions, and Stress [71.06453250061489]
The Multimodal Sentiment Analysis Challenge (MuSe) 2022 is dedicated to multimodal sentiment and emotion recognition.
For this year's challenge, we feature three datasets: (i) the Passau Spontaneous Football Coach Humor dataset that contains audio-visual recordings of German football coaches, labelled for the presence of humour; (ii) the Hume-Reaction dataset in which reactions of individuals to emotional stimuli have been annotated with respect to seven emotional expression intensities; and (iii) the Ulm-Trier Social Stress Test dataset comprising audio-visual data labelled with continuous emotion values of people in stressful dispositions.
arXiv Detail & Related papers (2022-06-23T13:34:33Z)
- Development, Deployment, and Evaluation of DyMand -- An Open-Source
Smartwatch and Smartphone System for Capturing Couples' Dyadic Interactions
in Chronic Disease Management in Daily Life [4.269935075264936]
DyMand is a novel open-source smartwatch and smartphone system for collecting data from couples based on partners' interaction moments.
Our algorithm uses the Bluetooth signal strength between the two smartwatches, each worn by one partner, together with a voice activity detection machine-learning algorithm to infer that the partners are interacting (a minimal sketch of this trigger logic appears after this list).
Our system triggered 99.1% of the expected number of sensor and self-report data when the app was running, and 77.6% of algorithm-triggered recordings contained partners' conversation moments.
arXiv Detail & Related papers (2022-05-16T13:37:42Z)
- "You made me feel this way": Investigating Partners' Influence in
Predicting Emotions in Couples' Conflict Interactions using Speech Data [3.618388731766687]
How romantic partners interact with each other during a conflict influences how they feel at the end of the interaction.
In this work, we used BERT to extract linguistic features (i.e., what partners said) and openSMILE to extract paralinguistic features (i.e., how they said it) from a data set of 368 German-speaking Swiss couples.
Based on those features, we trained machine learning models to predict if partners feel positive or negative after the conflict interaction.
arXiv Detail & Related papers (2021-06-03T01:15:41Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscle movements.
We determine if there are time-related differences on expressions among emotional groups by using a functional F-test.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Emotion Recognition From Gait Analyses: Current Research and Future
Directions [48.93172413752614]
Gait conveys information about the walker's emotion.
The mapping between various emotions and gait patterns provides a new source for automated emotion recognition.
Gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject.
arXiv Detail & Related papers (2020-03-13T08:22:33Z)
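The DyMand entry above describes a trigger that records only when the partners' smartwatches are close to each other (strong Bluetooth signal) and someone is speaking. Below is a minimal sketch of that nearby-and-talking logic, not the DyMand implementation: the RSSI threshold, the energy-based voice-activity stand-in (the paper uses a machine-learning detector), and the fake audio frame are all illustrative assumptions.

```python
# Hedged sketch of a "partners are nearby and talking" trigger. Thresholds and
# the energy-based voice-activity check are placeholders, not DyMand's values.
import numpy as np

RSSI_NEAR_DBM = -70.0            # hypothetical "partner's watch is close" threshold
ENERGY_SPEECH_THRESHOLD = 0.01   # hypothetical RMS-energy threshold for speech

def voice_activity(frame: np.ndarray) -> bool:
    """Crude energy-based stand-in for a machine-learning voice activity detector."""
    return float(np.sqrt(np.mean(frame ** 2))) > ENERGY_SPEECH_THRESHOLD

def should_trigger_recording(rssi_dbm: float, audio_frame: np.ndarray) -> bool:
    """Start a recording only if the partner is nearby and speech is detected."""
    return rssi_dbm > RSSI_NEAR_DBM and voice_activity(audio_frame)

# Example: a nearby partner (-60 dBm) plus an audible frame triggers a recording.
frame = 0.1 * np.sin(np.linspace(0.0, 200.0 * np.pi, 16000))  # 1 s of fake audio
print(should_trigger_recording(-60.0, frame))   # True
print(should_trigger_recording(-90.0, frame))   # False: partner too far away
```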