A Probabilistic Model Of Interaction Dynamics for Dyadic Face-to-Face
Settings
- URL: http://arxiv.org/abs/2207.04566v1
- Date: Sun, 10 Jul 2022 23:31:27 GMT
- Title: A Probabilistic Model Of Interaction Dynamics for Dyadic Face-to-Face
Settings
- Authors: Renke Wang and Ifeoma Nwogu
- Abstract summary: We develop a probabilistic model to capture the interaction dynamics between pairs of participants in a face-to-face setting.
This interaction encoding is then used to influence the generation when predicting one agent's future dynamics.
We show that our model successfully distinguishes between the modes based on their interaction dynamics.
- Score: 1.9544213396776275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural conversations between humans often involve a large number of
non-verbal nuanced expressions, displayed at key times throughout the
conversation. Understanding and being able to model these complex interactions
is essential for creating realistic human-agent communication, whether in the
virtual or physical world. As social robots and intelligent avatars emerge in
popularity and utility, being able to realistically model and generate these
dynamic expressions throughout conversations is critical. We develop a
probabilistic model to capture the interaction dynamics between pairs of
participants in a face-to-face setting, allowing for the encoding of
synchronous expressions between the interlocutors. This interaction encoding is
then used to influence the generation when predicting one agent's future
dynamics, conditioned on the other's current dynamics. FLAME features are
extracted from videos containing natural conversations between subjects to
train our interaction model. We assess the efficacy of our proposed model via
quantitative and qualitative metrics, and show that it successfully captures
the dynamics of an interacting dyad. We also test the model on a
never-before-seen parent-infant dataset comprising two different modes of
communication between the dyads, and show that our model successfully
distinguishes between the modes based on their interaction dynamics.
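The core prediction step described in the abstract, generating one agent's future dynamics conditioned on the other's current dynamics, can be sketched as a minimal conditional autoregressive model. The feature dimension, linear transition matrices, and Gaussian noise below are illustrative assumptions standing in for the paper's actual probabilistic architecture, not a reproduction of it:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8                 # illustrative FLAME-like feature dimension (assumption)
A = 0.9 * np.eye(D)   # self-dynamics of the predicted agent (assumption)
B = 0.1 * np.eye(D)   # influence of the interlocutor's features (assumption)
SIGMA = 0.01          # Gaussian noise scale (assumption)

def predict_next(x_self, x_other):
    """One autoregressive step: the agent's next features depend on its own
    current features and the interlocutor's current features."""
    mean = A @ x_self + B @ x_other
    return mean + SIGMA * rng.normal(size=mean.shape)

def rollout(x_self, other_seq):
    """Predict a trajectory conditioned on the partner's observed sequence."""
    out = []
    for x_other in other_seq:
        x_self = predict_next(x_self, x_other)
        out.append(x_self)
    return np.stack(out)

# Condition a 20-step rollout on a (synthetic) partner trajectory.
traj = rollout(np.zeros(D), rng.normal(size=(20, D)))
print(traj.shape)  # (20, 8)
```

The coupling matrix B is what plays the role of the interaction encoding here: setting it to zero reduces the sketch to an unconditioned single-agent model.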
Related papers
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Dyadic Interaction Modeling for Social Behavior Generation [6.626277726145613]
We present an effective framework for creating 3D facial motions in dyadic interactions.
The heart of our framework is Dyadic Interaction Modeling (DIM), a pre-training approach.
Experiments demonstrate the superiority of our framework in generating listener motions.
arXiv Detail & Related papers (2024-03-14T03:21:33Z)
- From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations [107.88375243135579]
Given speech audio, we output multiple possibilities of gestural motion for an individual, including face, body, and hands.
We visualize the generated motion using highly photorealistic avatars that can express crucial nuances in gestures.
Experiments show our model generates appropriate and diverse gestures, outperforming both diffusion- and VQ-only methods.
arXiv Detail & Related papers (2024-01-03T18:55:16Z)
- Persistent-Transient Duality: A Multi-mechanism Approach for Modeling Human-Object Interaction [58.67761673662716]
Humans are highly adaptable, swiftly switching between different modes to handle different tasks, situations and contexts.
In human-object interaction (HOI) activities, these modes can be attributed to two mechanisms: (1) the large-scale consistent plan for the whole activity and (2) the small-scale child interactive actions that start and end along the timeline.
This work proposes to model two concurrent mechanisms that jointly control human motion.
arXiv Detail & Related papers (2023-07-24T12:21:33Z)
- HIINT: Historical, Intra- and Inter-personal Dynamics Modeling with Cross-person Memory Transformer [38.92436852096451]
Cross-person memory Transformer (CPM-T) framework is able to explicitly model affective dynamics.
CPM-T framework maintains memory modules to store and update the contexts within the conversation window.
We evaluate the effectiveness and generalizability of our approach on three publicly available datasets for joint engagement, rapport, and human beliefs prediction tasks.
arXiv Detail & Related papers (2023-05-21T06:43:35Z)
- Improving a sequence-to-sequence NLP model using a reinforcement learning policy algorithm [0.0]
Current neural network models of dialogue generation show great promise for generating answers for chatty agents.
But they are short-sighted in that they predict utterances one at a time while disregarding their impact on future outcomes.
This work marks a preliminary step toward developing a neural conversational model based on the long-term success of dialogues.
arXiv Detail & Related papers (2022-12-28T22:46:57Z)
- Learning Interacting Dynamical Systems with Latent Gaussian Process ODEs [13.436770170612295]
We study for the first time uncertainty-aware modeling of continuous-time dynamics of interacting objects.
Our model infers both independent dynamics and their interactions with reliable uncertainty estimates.
arXiv Detail & Related papers (2022-05-24T08:36:25Z)
- Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion [89.01668641930206]
We present a framework for modeling interactional communication in dyadic conversations.
We autoregressively output multiple possibilities of corresponding listener motion.
Our method organically captures the multimodal and non-deterministic nature of nonverbal dyadic interactions.
arXiv Detail & Related papers (2022-04-18T17:58:04Z)
- VIRT: Improving Representation-based Models for Text Matching through Virtual Interaction [50.986371459817256]
We propose a novel Virtual InteRacTion mechanism, termed VIRT, to enable full and deep interaction modeling in representation-based models.
VIRT asks representation-based encoders to conduct virtual interactions to mimic the behaviors as interaction-based models do.
arXiv Detail & Related papers (2021-12-08T09:49:28Z)
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step on dynamics modeling in hand-object interactions from dense tactile sensing.
arXiv Detail & Related papers (2021-09-09T16:04:14Z)
- Interactions in information spread: quantification and interpretation using stochastic block models [3.5450828190071655]
In social networks, users' behavior results from the people they interact with, news in their feed, or trending topics.
Here, we propose a new model, the Interactive Mixed Membership Block Model (IMMSBM), which investigates the role of interactions between entities.
In inference tasks, taking them into account leads to average relative changes with respect to non-interactive models of up to 150% in the probability of an outcome.
arXiv Detail & Related papers (2020-04-09T14:22:10Z)
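The "up to 150%" figure above is a relative change in outcome probability between the interactive and non-interactive models. As a hedged arithmetic illustration only, with invented probabilities that are not taken from the paper:

```python
# Hypothetical outcome probabilities for one entity, with and without
# modeling interactions; the numbers are invented for illustration.
p_non_interactive = 0.02   # baseline (non-interactive) model's probability
p_interactive = 0.05       # interaction-aware (IMMSBM-style) probability

# Relative change with respect to the non-interactive baseline.
relative_change = (p_interactive - p_non_interactive) / p_non_interactive
print(f"{relative_change:.0%}")  # 150%
```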
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.