"You made me feel this way": Investigating Partners' Influence in
Predicting Emotions in Couples' Conflict Interactions using Speech Data
- URL: http://arxiv.org/abs/2106.01526v1
- Date: Thu, 3 Jun 2021 01:15:41 GMT
- Title: "You made me feel this way": Investigating Partners' Influence in
Predicting Emotions in Couples' Conflict Interactions using Speech Data
- Authors: George Boateng, Peter Hilpert, Guy Bodenmann, Mona Neysari, Tobias
Kowatsch
- Abstract summary: How romantic partners interact with each other during a conflict influences how they feel at the end of the interaction.
In this work, we used BERT to extract linguistic features (i.e., what partners said) and openSMILE to extract paralinguistic features (i.e., how they said it) from a data set of 368 German-speaking Swiss couples.
Based on those features, we trained machine learning models to predict if partners feel positive or negative after the conflict interaction.
- Score: 3.618388731766687
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: How romantic partners interact with each other during a conflict influences
how they feel at the end of the interaction and is predictive of whether the
partners stay together in the long term. Hence, understanding each partner's
emotions is important. Yet the approaches currently in use, such as
self-reports, are burdensome and therefore limit how frequently such data can
be collected. Automatic emotion prediction could address this challenge. Insights
from psychology research indicate that partners' behaviors influence each
other's emotions in conflict interactions; hence, the behavior of both
partners could be considered to better predict each partner's emotions. However,
it has not yet been investigated how doing so compares, in terms of emotion
prediction performance, to using each partner's own behavior alone. In this work, we used
BERT to extract linguistic features (i.e., what partners said) and openSMILE to
extract paralinguistic features (i.e., how they said it) from a data set of 368
German-speaking Swiss couples (N = 736 individuals) who were videotaped
during an 8-minute conflict interaction in the laboratory. Based on those
features, we trained machine learning models to predict if partners feel
positive or negative after the conflict interaction. Our results show that
including the behavior of the other partner improves the prediction
performance. Furthermore, for men, considering how their female partners spoke
(paralinguistic features) is most important, whereas for women, considering
what their male partners said (linguistic features) is most important for
improving prediction performance. This work is a step
towards automatically recognizing each partner's emotions based on the behavior
of both, which would enable a better understanding of couples in research,
therapy, and the real world.
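The feature-extraction and prediction pipeline the abstract describes can be sketched in a few lines. The Python snippet below is a minimal, hypothetical illustration, not the authors' code: it assumes the Hugging Face transformers and audEERING opensmile packages, uses bert-base-german-cased and the eGeMAPS feature set as plausible stand-ins (the abstract names neither the exact BERT model nor the openSMILE configuration), and substitutes logistic regression where the paper says only "machine learning models".

import numpy as np
import opensmile
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

# Assumed German BERT checkpoint; the paper does not specify the exact model.
tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
bert = AutoModel.from_pretrained("bert-base-german-cased")

# openSMILE functionals (eGeMAPS assumed here) capture "how it was said".
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)

def linguistic_features(transcript: str) -> np.ndarray:
    # Mean-pooled BERT token embeddings represent "what was said".
    inputs = tokenizer(transcript, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

def paralinguistic_features(wav_path: str) -> np.ndarray:
    # One row of openSMILE functionals for the whole recording.
    return smile.process_file(wav_path).to_numpy().squeeze(0)

def partner_aware_features(own_text, own_wav, other_text, other_wav):
    # Concatenating a partner's own features with the other partner's
    # features implements the paper's key comparison: does adding the
    # other partner's behavior improve emotion prediction?
    own = np.concatenate([linguistic_features(own_text),
                          paralinguistic_features(own_wav)])
    other = np.concatenate([linguistic_features(other_text),
                            paralinguistic_features(other_wav)])
    return np.concatenate([own, other])

# Given a feature matrix X (one row per partner) and binary labels y
# (1 = feels positive after the conflict, 0 = negative), a baseline is:
# clf = LogisticRegression(max_iter=1000).fit(X, y)

Under these assumptions, comparing a model trained on the concatenated own-plus-partner features against one trained on the own features alone reproduces the paper's central comparison.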
Related papers
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a multimodal transcript.
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Do Large Language Models Understand Verbal Indicators of Romantic Attraction? [0.0]
We show that Large Language Models (LLMs) can detect romantic attraction during brief getting-to-know-you interactions: ChatGPT (and Claude 3) can predict both objective and subjective indicators of speed dating success.
arXiv Detail & Related papers (2024-06-23T17:50:30Z)
- SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations [53.60993109543582]
SemEval-2024 Task 3, named Multimodal Emotion Cause Analysis in Conversations, aims at extracting all pairs of emotions and their corresponding causes from conversations.
Under different modality settings, it consists of two subtasks: Textual Emotion-Cause Pair Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE)
In this paper, we introduce the task, dataset and evaluation settings, summarize the systems of the top teams, and discuss the findings of the participants.
arXiv Detail & Related papers (2024-05-19T09:59:00Z)
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- "Are you okay, honey?": Recognizing Emotions among Couples Managing Diabetes in Daily Life using Multimodal Real-World Smartwatch Data [8.355190969810305]
Couples generally manage chronic diseases together and the management takes an emotional toll on both patients and their romantic partners.
Recognizing each partner's emotions in daily life could provide insight into their emotional well-being in chronic disease management.
We extracted physiological, movement, acoustic, and linguistic features, and trained machine learning models to recognize each partner's self-reported emotions.
arXiv Detail & Related papers (2022-08-16T22:04:12Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue is speaking activity, the most common computational method is support vector machines, the typical interaction environment is meetings of 3-4 persons, and the usual sensing approach is microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z)
- CogIntAc: Modeling the Relationships between Intention, Emotion and Action in Interactive Process from Cognitive Perspective [15.797390372732973]
We propose a novel cognitive framework of individual interaction.
The core of the framework is that individuals achieve interaction through external action driven by their inner intention.
arXiv Detail & Related papers (2022-05-07T03:54:51Z)
- BERT meets LIWC: Exploring State-of-the-Art Language Models for Predicting Communication Behavior in Couples' Conflict Interactions [3.0309575462589122]
We train machine learning models to automatically predict communication codes of 368 German-speaking Swiss couples.
Results suggest it might be time to consider modern alternatives to LIWC, the de facto standard for linguistic features in psychology.
arXiv Detail & Related papers (2021-06-03T01:37:59Z)
- Disambiguating Affective Stimulus Associations for Robot Perception and Dialogue [67.89143112645556]
We provide a NICO robot with the ability to learn the associations between a perceived auditory stimulus and an emotional expression.
NICO is able to do this for both individual subjects and specific stimuli, with the aid of an emotion-driven dialogue system.
The robot is then able to use this information to determine a subject's enjoyment of perceived auditory stimuli in a real HRI scenario.
arXiv Detail & Related papers (2021-03-05T20:55:48Z)
- "where is this relationship going?": Understanding Relationship Trajectories in Narrative Text [28.14874371042193]
Given a narrative describing a social interaction, systems make inferences about the underlying relationship trajectory.
We construct a new dataset, Social Narrative Tree, which consists of 1250 stories documenting a variety of daily social interactions.
arXiv Detail & Related papers (2020-10-29T02:07:05Z)