Context-Aware Personality Inference in Dyadic Scenarios: Introducing the
UDIVA Dataset
- URL: http://arxiv.org/abs/2012.14259v1
- Date: Mon, 28 Dec 2020 15:08:02 GMT
- Title: Context-Aware Personality Inference in Dyadic Scenarios: Introducing the
UDIVA Dataset
- Authors: Cristina Palmero, Javier Selva, Sorina Smeureanu, Julio C. S. Jacques
Junior, Albert Clapés, Alexa Moseguí, Zejian Zhang, David Gallardo,
Georgina Guilera, David Leiva, Sergio Escalera
- Abstract summary: This paper introduces UDIVA, a new non-acted dataset of face-to-face dyadic interactions.
The dataset consists of 90.5 hours of dyadic interactions among 147 participants distributed in 188 sessions.
It includes sociodemographic, self- and peer-reported personality, internal state, and relationship profiling from participants.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces UDIVA, a new non-acted dataset of face-to-face dyadic
interactions, where interlocutors perform competitive and collaborative tasks
with different behavior elicitation and cognitive workload. The dataset
consists of 90.5 hours of dyadic interactions among 147 participants
distributed in 188 sessions, recorded using multiple audiovisual and
physiological sensors. Currently, it includes sociodemographic, self- and
peer-reported personality, internal state, and relationship profiling from
participants. As an initial analysis on UDIVA, we propose a transformer-based
method for self-reported personality inference in dyadic scenarios, which uses
audiovisual data and different sources of context from both interlocutors to
regress a target person's personality traits. Preliminary results from an
incremental study show consistent improvements when using all available context
information.
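The context-aware regression described above can be sketched as follows. This is a hypothetical, minimal illustration in PyTorch, not the authors' implementation: all module names, feature dimensions, and the single-token treatment of context are assumptions, chosen only to show how audiovisual features from both interlocutors and context metadata might be fused by a transformer encoder to regress a target person's trait scores.

```python
# Hedged sketch of context-aware dyadic personality regression.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class DyadicPersonalityRegressor(nn.Module):
    """Fuses audiovisual features from both interlocutors plus context
    metadata, and regresses the target person's Big Five (OCEAN) traits."""
    def __init__(self, feat_dim=128, ctx_dim=16, d_model=128, n_traits=5):
        super().__init__()
        self.target_proj = nn.Linear(feat_dim, d_model)
        self.partner_proj = nn.Linear(feat_dim, d_model)
        self.ctx_proj = nn.Linear(ctx_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_traits)  # regress OCEAN scores

    def forward(self, target_av, partner_av, context):
        # target_av / partner_av: (B, T, feat_dim); context: (B, ctx_dim)
        tokens = torch.cat([
            self.target_proj(target_av),
            self.partner_proj(partner_av),
            self.ctx_proj(context).unsqueeze(1),  # one context token
        ], dim=1)
        h = self.encoder(tokens)                  # attend across both people
        return self.head(h.mean(dim=1))           # pool, then regress traits

model = DyadicPersonalityRegressor()
traits = model(torch.randn(2, 10, 128), torch.randn(2, 10, 128),
               torch.randn(2, 16))
print(traits.shape)  # (2, 5): one trait vector per target person
```

Concatenating both interlocutors' sequences before self-attention is one simple way to let the partner's behavior inform the target's prediction, which matches the paper's reported finding that adding context sources consistently helps.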
Related papers
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a multimodal transcript.
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- The MuSe 2024 Multimodal Sentiment Analysis Challenge: Social Perception and Humor Recognition [64.5207572897806]
The Multimodal Sentiment Analysis Challenge (MuSe) 2024 addresses two contemporary multimodal affect and sentiment analysis problems.
In the Social Perception Sub-Challenge (MuSe-Perception), participants will predict 16 different social attributes of individuals.
The Cross-Cultural Humor Detection Sub-Challenge (MuSe-Humor) dataset expands upon the Passau Spontaneous Football Coach Humor dataset.
arXiv Detail & Related papers (2024-06-11T22:26:20Z)
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised speech representation methods from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
- MedNgage: A Dataset for Understanding Engagement in Patient-Nurse Conversations [4.847266237348932]
Patients who effectively manage their symptoms often demonstrate higher levels of engagement in conversations and interventions with healthcare practitioners.
It is crucial for AI systems to understand engagement in natural conversations between patients and practitioners in order to better contribute to patient care.
We present a novel dataset (MedNgage) which consists of patient-nurse conversations about cancer symptom management.
arXiv Detail & Related papers (2023-05-31T16:06:07Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations: the most frequently used nonverbal cue is speaking activity; the most common computational method is the support vector machine; and the typical interaction environment and sensing setup are meetings of 3-4 persons equipped with microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Learning Graph Representation of Person-specific Cognitive Processes from Audio-visual Behaviours for Automatic Personality Recognition [17.428626029689653]
We propose to represent the target subject's person-specific cognition in the form of a person-specific CNN architecture.
Each person-specific CNN is discovered via Neural Architecture Search (NAS) and trained with a novel adaptive loss function.
Experimental results show that the produced graph representations are well associated with target subjects' personality traits.
arXiv Detail & Related papers (2021-10-26T11:04:23Z)
- Partner Matters! An Empirical Study on Fusing Personas for Personalized Response Selection in Retrieval-Based Chatbots [51.091235903442715]
This paper explores the impact of utilizing personas that describe either the self or the partner speaker on the task of response selection.
Four persona fusion strategies are designed, which assume personas interact with contexts or responses in different ways.
Empirical studies on the Persona-Chat dataset show that the partner personas can improve the accuracy of response selection.
arXiv Detail & Related papers (2021-05-19T10:32:30Z)
- Modeling Dyadic Conversations for Personality Inference [8.19277339277905]
We propose a novel augmented Gated Recurrent Unit (GRU) model for learning unsupervised Personal Conversational Embeddings (PCE) based on dyadic conversations between individuals.
We conduct experiments on the Movie Script dataset, which is collected from conversations between characters in movie scripts.
arXiv Detail & Related papers (2020-09-26T01:25:42Z)
- Vyaktitv: A Multimodal Peer-to-Peer Hindi Conversations based Dataset for Personality Assessment [50.15466026089435]
We present Vyaktitv, a novel peer-to-peer Hindi conversation dataset.
It consists of high-quality audio and video recordings of the participants, with Hinglish textual transcriptions for each conversation.
The dataset also contains a rich set of sociodemographic features, such as income and cultural orientation, among several others, for all the participants.
arXiv Detail & Related papers (2020-08-31T17:44:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.