Bridging the gap between emotion and joint action
- URL: http://arxiv.org/abs/2108.06264v1
- Date: Fri, 13 Aug 2021 14:21:37 GMT
- Title: Bridging the gap between emotion and joint action
- Authors: M. M. N. Bie\'nkiewicz (1), A. Smykovskyi (1), T. Olugbade (2), S.
Janaqi (1), A. Camurri (3), N. Bianchi-Berthouze (2), M. Bj\"orkman (4), B.
G. Bardy (1) ((1) EuroMov Digital Health in Motion Univ. Montpellier IMT
Mines Ales France, (2) UCL, University College of London UK, (3) UNIGE
InfoMus Casa Paganini Italy, (4) KTH Royal Institute of Technology Sweden)
- Abstract summary: Joint action brings individuals (and the embodiments of their emotions) together, in space and in time.
Yet little is known about how individual emotions propagate through embodied presence in a group, and how joint action changes individual emotion.
In this review, we first identify the gap and then stockpile evidence showing the strong entanglement between emotion and acting together across various branches of science.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Our daily human life is filled with a myriad of joint action moments, be it
children playing, adults working together (e.g., team sports), or strangers
navigating through a crowd. Joint action brings individuals (and the embodiments
of their emotions) together, in space and in time. Yet little is known about how
individual emotions propagate through embodied presence in a group, and how
joint action changes individual emotion. In fact, the multi-agent component is
largely missing from neuroscience-based approaches to emotion, and, conversely,
joint action research has not yet found a way to include emotion as one of the
key parameters for modeling socio-motor interaction. In this review, we first
identify the gap and then stockpile evidence showing the strong entanglement
between emotion and acting together across various branches of science. We
propose an integrative approach to bridge the gap, highlight five research
avenues for doing so in behavioral neuroscience and digital sciences, and address
some of the key challenges in the area faced by modern societies.
Related papers
- Think out Loud: Emotion Deducing Explanation in Dialogues [57.90554323226896]
We propose a new task, "Emotion Deducing Explanation in Dialogues" (EDEN).
EDEN recognizes emotions and their causes through explicit, step-by-step reasoning.
It can help Large Language Models (LLMs) achieve better recognition of emotions and causes.
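To make the "explicit reasoning" idea concrete, here is a minimal sketch of what such a deduce-then-label prompt could look like; the wording and the helper function are illustrative assumptions, not the actual EDEN task format.

```python
# Illustrative only: a prompt in the spirit of explicit emotion-cause
# reasoning for dialogues; not the official EDEN protocol.
def build_eden_style_prompt(dialogue: list[str], target_turn: int) -> str:
    history = "\n".join(f"Turn {i}: {u}" for i, u in enumerate(dialogue))
    return (
        f"{history}\n\n"
        f"For Turn {target_turn}, think step by step:\n"
        "1. Summarize the cause event mentioned in the context.\n"
        "2. Explain the speaker's inner thought about that event.\n"
        "3. Output the speaker's emotion label.\n"
    )

print(build_eden_style_prompt(["I lost my keys.", "Oh no, again?"], 0))
```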
arXiv Detail & Related papers (2024-06-07T08:58:29Z)
- SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations [53.60993109543582]
SemEval-2024 Task 3, named Multimodal Emotion Cause Analysis in Conversations, aims at extracting all pairs of emotions and their corresponding causes from conversations.
Under different modality settings, it consists of two subtasks: Textual Emotion-Cause Pair Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE).
In this paper, we introduce the task, dataset and evaluation settings, summarize the systems of the top teams, and discuss the findings of the participants.
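As a sketch of what such pair-extraction systems output, the container below is one plausible representation of an emotion-cause pair; the field names are hypothetical, not the task's official schema.

```python
# Hypothetical container for an emotion-cause pair in the style of
# TECPE/MECPE; field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EmotionCausePair:
    emotion_utterance_id: int   # utterance expressing the emotion
    emotion_label: str          # e.g. "anger", "joy"
    cause_utterance_id: int     # utterance containing the cause
    cause_span: tuple[int, int] | None = None  # char span for textual causes

pairs = [EmotionCausePair(3, "anger", 2, (0, 17))]
```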
arXiv Detail & Related papers (2024-05-19T09:59:00Z)
- Exploring Emotions in Multi-componential Space using Interactive VR Games [1.1510009152620668]
We operationalised a data-driven approach using interactive Virtual Reality (VR) games.
We used Machine Learning (ML) methods to identify the unique contributions of each component to emotion differentiation.
These findings also have implications for using VR environments in emotion research.
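The summary does not name the exact ML method, so the sketch below uses one standard way to estimate each component's unique contribution (a random forest with permutation importance, on toy data); it is an illustrative choice, not the paper's pipeline.

```python
# Estimate per-component contribution to emotion differentiation via
# permutation importance; toy data, illustrative method choice only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # 5 emotion components (toy)
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # toy emotion label

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"component {i}: importance {imp:.3f}")
```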
arXiv Detail & Related papers (2024-04-04T06:54:44Z)
- Dynamic Causal Disentanglement Model for Dialogue Emotion Detection [77.96255121683011]
We propose a Dynamic Causal Disentanglement Model based on hidden variable separation.
This model effectively decomposes the content of dialogues and investigates the temporal accumulation of emotions.
Specifically, we propose a dynamic temporal disentanglement model to infer the propagation of utterances and hidden variables.
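The summary gives no internals, so the sketch below only illustrates the generic idea of propagating a hidden emotion state across utterances with a recurrent update; it is not the paper's actual disentanglement architecture.

```python
# Generic latent-state propagation over utterances; a sketch of the
# idea of temporal emotion accumulation, not the paper's model.
import torch
import torch.nn as nn

class LatentPropagation(nn.Module):
    def __init__(self, utt_dim: int = 128, z_dim: int = 32):
        super().__init__()
        self.cell = nn.GRUCell(utt_dim, z_dim)

    def forward(self, utterances: torch.Tensor) -> torch.Tensor:
        # utterances: (T, utt_dim) embeddings, one per utterance
        z = torch.zeros(1, self.cell.hidden_size)
        states = []
        for u in utterances:            # propagate hidden emotion state
            z = self.cell(u.unsqueeze(0), z)
            states.append(z)
        return torch.cat(states)        # (T, z_dim) state per utterance

states = LatentPropagation()(torch.randn(5, 128))
```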
arXiv Detail & Related papers (2023-09-13T12:58:09Z)
- UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition [32.34485263348587]
Multimodal sentiment analysis (MSA) and emotion recognition in conversation (ERC) are key research topics for computers to understand human behaviors.
We propose a multimodal sentiment knowledge-sharing framework (UniMSE) that unifies MSA and ERC tasks from features, labels, and models.
We perform modality fusion at the syntactic and semantic levels and introduce contrastive learning between modalities and samples to better capture the difference and consistency between sentiments and emotions.
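A minimal sketch of contrastive learning between modalities in the spirit of this description, using a generic InfoNCE-style objective; UniMSE's actual loss and fusion details may differ.

```python
# Generic inter-modality contrastive loss: pull matched text/audio
# pairs together, push mismatched pairs apart. Not UniMSE's exact loss.
import torch
import torch.nn.functional as F

def inter_modality_contrastive(text_emb, audio_emb, temperature=0.07):
    t = F.normalize(text_emb, dim=-1)          # (B, D)
    a = F.normalize(audio_emb, dim=-1)         # (B, D)
    logits = t @ a.T / temperature             # (B, B) similarity matrix
    targets = torch.arange(t.size(0))          # diagonal = positive pairs
    return F.cross_entropy(logits, targets)

loss = inter_modality_contrastive(torch.randn(8, 256), torch.randn(8, 256))
```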
arXiv Detail & Related papers (2022-11-21T08:46:01Z)
- Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector.
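As a sketch of run-time control via an emotion attribute vector, the snippet below blends per-emotion embeddings with manually chosen weights; the embedding table and function names are hypothetical, not the paper's implementation.

```python
# Blend learned emotion embeddings with a manually defined attribute
# vector; names and the embedding table are illustrative assumptions.
import numpy as np

emotion_embeddings = {                  # hypothetical learned embeddings
    "happy": np.random.randn(64),
    "sad": np.random.randn(64),
    "surprise": np.random.randn(64),
}

def mix_emotions(attribute_vector: dict[str, float]) -> np.ndarray:
    """Weighted sum of emotion embeddings; the weights define the mixture."""
    return sum(w * emotion_embeddings[e] for e, w in attribute_vector.items())

style = mix_emotions({"happy": 0.7, "surprise": 0.3})  # mostly happy, a bit surprised
```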
arXiv Detail & Related papers (2022-08-11T15:45:58Z)
- CogIntAc: Modeling the Relationships between Intention, Emotion and Action in Interactive Process from Cognitive Perspective [15.797390372732973]
We propose a novel cognitive framework of individual interaction.
The core of the framework is that individuals achieve interaction through external action driven by their inner intention.
arXiv Detail & Related papers (2022-05-07T03:54:51Z)
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- A Circular-Structured Representation for Visual Emotion Distribution Learning [82.89776298753661]
We propose a well-grounded circular-structured representation to utilize the prior knowledge for visual emotion distribution learning.
To be specific, we first construct an Emotion Circle to unify any emotional state within it.
On the proposed Emotion Circle, each emotion distribution is represented with an emotion vector, which is defined with three attributes.
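An illustrative encoding of such an emotion vector on a circle; the particular three attributes used here (angle for emotion type, radius for intensity, sign for polarity) are an assumption based on this summary, not the paper's exact definition.

```python
# Hypothetical emotion vector on a circle; attribute semantics are
# assumptions inferred from the summary, not the paper's definition.
import math
from dataclasses import dataclass

@dataclass
class EmotionVector:
    angle_deg: float   # position on the circle (emotion type)
    intensity: float   # vector length in [0, 1]
    polarity: int      # +1 positive, -1 negative

    def to_xy(self) -> tuple[float, float]:
        r = math.radians(self.angle_deg)
        return (self.intensity * math.cos(r), self.intensity * math.sin(r))

joy = EmotionVector(angle_deg=30.0, intensity=0.8, polarity=+1)
```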
arXiv Detail & Related papers (2021-06-23T14:53:27Z)
- A Multi-Componential Approach to Emotion Recognition and the Effect of Personality [0.0]
This paper applies a componential framework with a data-driven approach to characterize emotional experiences evoked during movie watching.
The results suggest that differences between various emotions can be captured by a few (at least 6) latent dimensions, and that a componential model with a limited number of descriptors can still predict the level of experienced discrete emotion.
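A sketch of the pipeline this summary suggests: compress componential descriptors to a few latent dimensions, then predict the discrete emotion. Dimensions, labels, and model choices are toy assumptions, not the paper's setup.

```python
# Toy componential pipeline: reduce descriptors to 6 latent dimensions
# (as in the summary), then classify discrete emotion. Illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 39))     # e.g. 39 componential descriptors (toy)
y = rng.integers(0, 5, size=300)   # 5 toy discrete emotion labels

model = make_pipeline(PCA(n_components=6), LogisticRegression(max_iter=1000))
model.fit(X, y)
```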
arXiv Detail & Related papers (2020-10-22T01:27:23Z)