Investigating Role of Big Five Personality Traits in Audio-Visual Rapport Estimation
- URL: http://arxiv.org/abs/2410.11861v1
- Date: Mon, 07 Oct 2024 08:52:33 GMT
- Title: Investigating Role of Big Five Personality Traits in Audio-Visual Rapport Estimation
- Authors: Takato Hayashi, Ryusei Kimura, Ryo Ishii, Shogo Okada
- Abstract summary: We investigate whether the estimation performance of rapport can be improved by using the participant's personality traits as the model's input.
Our experimental results show that adding Big Five features (BFFs) to nonverbal features can improve the estimation performance of self-reported rapport.
Our study is the first step toward understanding why personality-aware estimation models of interpersonal perception accomplish high estimation performance.
- Abstract: Automatic rapport estimation in social interactions is a central component of affective computing. Recent reports have shown that the estimation performance of rapport in initial interactions can be improved by using the participant's personality traits as the model's input. In this study, we investigate whether this finding applies to interactions between friends by developing rapport estimation models that utilize nonverbal cues (audio and facial expressions) as inputs. Our experimental results show that adding Big Five features (BFFs) to nonverbal features can improve the estimation performance of self-reported rapport in dyadic interactions between friends. Next, we demystify how BFFs improve the estimation performance of rapport through a comparative analysis between models with and without BFFs. We decompose rapport ratings into perceiver effects (people's tendency to rate other people), target effects (people's tendency to be rated by other people), and relationship effects (people's unique ratings for a specific person) using the social relations model. We then analyze the extent to which BFFs contribute to capturing each effect. Our analysis demonstrates that the perceiver's and the target's BFFs lead estimation models to capture the perceiver and the target effects, respectively. Furthermore, our experimental results indicate that combinations of facial expression features and BFFs achieve the best estimation performance not only in estimating rapport ratings but also in estimating the three effects. Our study is the first step toward understanding why personality-aware estimation models of interpersonal perception accomplish high estimation performance.
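For orientation on the decomposition described in the abstract, the social relations model is conventionally written as an additive decomposition of each dyadic rating; the following is a standard textbook formulation (a sketch for context, not notation taken from the paper):

```latex
% Social relations model (SRM) decomposition of a dyadic rating,
% where y_{ij} is perceiver i's rapport rating of target j.
y_{ij} = \mu + p_i + t_j + r_{ij}
% \mu    : overall mean rating in the group
% p_i    : perceiver effect  (i's general tendency to rate other people)
% t_j    : target effect     (j's general tendency to be rated by other people)
% r_{ij} : relationship effect (i's unique rating of j), which also absorbs error
```

Likewise, a minimal sketch of the feature-fusion idea in the abstract, assuming hypothetical feature arrays and a generic off-the-shelf regressor in place of the paper's unspecified model architecture:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical per-dyad inputs: pooled nonverbal descriptors (audio and
# facial expressions) plus Big Five features (BFFs) for perceiver and target.
rng = np.random.default_rng(0)
n_dyads = 200
nonverbal = rng.normal(size=(n_dyads, 64))     # placeholder audio/facial features
bff_perceiver = rng.normal(size=(n_dyads, 5))  # perceiver's Big Five scores
bff_target = rng.normal(size=(n_dyads, 5))     # target's Big Five scores
rapport = rng.normal(size=n_dyads)             # self-reported rapport ratings

# Personality-aware variant: concatenate BFFs with the nonverbal features.
feature_sets = {
    "nonverbal only": nonverbal,
    "nonverbal + BFFs": np.hstack([nonverbal, bff_perceiver, bff_target]),
}

for name, X in feature_sets.items():
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    score = cross_val_score(model, X, rapport, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```

With real features, comparing the two cross-validated scores gives the kind of with/without-BFF comparison the abstract reports; on the random placeholders above, both scores are near zero.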
Related papers
- Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction [54.23208041792073]
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review.
A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods.
We propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels.
arXiv Detail & Related papers (2024-06-26T05:30:21Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals [91.59906995214209]
We propose a new evaluation method, the Counterfactual Attentiveness Test (CAT).
CAT uses counterfactuals by replacing part of the input with its counterpart from a different example, expecting an attentive model to change its prediction.
We show that GPT-3 becomes less attentive as the number of demonstrations increases, while its accuracy on the test data improves.
arXiv Detail & Related papers (2023-11-16T06:27:35Z) - Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z) - HIINT: Historical, Intra- and Inter-personal Dynamics Modeling with Cross-person Memory Transformer [38.92436852096451]
The Cross-person Memory Transformer (CPM-T) framework explicitly models affective dynamics.
The CPM-T framework maintains memory modules that store and update the contexts within the conversation window.
We evaluate the effectiveness and generalizability of our approach on three publicly available datasets for joint engagement, rapport, and human beliefs prediction tasks.
arXiv Detail & Related papers (2023-05-21T06:43:35Z) - Understanding Programmatic Weak Supervision via Source-aware Influence Function [76.74549130841383]
Programmatic Weak Supervision (PWS) aggregates the source votes of multiple weak supervision sources into probabilistic training labels.
We build on the Influence Function (IF) to decompose the end model's training objective and then calculate the influence associated with each (data, source, class) tuple.
These primitive influence scores can then be used to estimate the influence of individual PWS components, such as a source vote, a supervision source, or a training data point.
arXiv Detail & Related papers (2022-05-25T15:57:24Z) - Metaversal Learning Environments: Measuring, predicting and improving interpersonal effectiveness [2.6424064030995957]
We introduce a novel architecture that combines artificial intelligence and virtual reality to create a highly immersive learning experience using avatars.
The framework allows us to measure the interpersonal effectiveness of an individual interacting with the avatar.
Results reveal that individuals with deficits in their interpersonal effectiveness show a significant improvement in performance after multiple interactions with an avatar.
arXiv Detail & Related papers (2022-05-05T18:22:27Z) - Estimating Social Influence from Observational Data [5.156484100374057]
We consider the problem of estimating social influence, the effect that a person's behavior has on the future behavior of their peers.
A key challenge is that shared behavior between friends could be equally well explained by influence or by two other confounding factors.
This paper addresses the challenges of estimating social influence with three contributions.
arXiv Detail & Related papers (2022-03-24T20:21:24Z) - Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z) - Expertise and confidence explain how social influence evolves along intellective tasks [10.525352489242396]
We study interpersonal influence in small groups of individuals who collectively execute a sequence of intellective tasks.
We report empirical evidence on theories of transactive memory systems, social comparison, and confidence as origins of social influence.
We propose a cognitive dynamical model inspired by these theories to describe the process by which individuals adjust interpersonal influences over time.
arXiv Detail & Related papers (2020-11-13T23:48:25Z) - A Multi-term and Multi-task Analyzing Framework for Affective Analysis in-the-wild [0.2216657815393579]
We introduce the affective recognition method that was submitted to the Affective Behavior Analysis in-the-wild (ABAW) 2020 Contest.
Since affective behaviors have many observable features, each with its own time frame, we introduced multiple optimized time windows.
We generated an affective recognition model for each time window and ensembled these models.
arXiv Detail & Related papers (2020-09-29T09:24:29Z)