Forecasting Nonverbal Social Signals during Dyadic Interactions with
Generative Adversarial Neural Networks
- URL: http://arxiv.org/abs/2110.09378v1
- Date: Mon, 18 Oct 2021 15:01:32 GMT
- Title: Forecasting Nonverbal Social Signals during Dyadic Interactions with
Generative Adversarial Neural Networks
- Authors: Nguyen Tan Viet Tuyen, Oya Celiktutan
- Abstract summary: Successful social interaction is closely coupled with the interplay between nonverbal perception and action mechanisms.
Nonverbal gestures are expected to endow social robots with the capability of emphasizing their speech, or showing their intentions.
Our research sheds light on modeling human behaviors in social interactions, specifically, forecasting human nonverbal social signals during dyadic interactions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We are approaching a future where social robots will progressively become
widespread in many aspects of our daily lives, including education, healthcare,
work, and personal use. All such practical applications require that humans
and robots collaborate in human environments, where social interaction is
unavoidable. Along with verbal communication, successful social interaction is
closely coupled with the interplay between nonverbal perception and action
mechanisms, such as observing gaze behaviour, following a partner's attention,
and coordinating the form and function of hand gestures. Humans perform
nonverbal communication instinctively, adaptively, and effortlessly.
For robots to be successful in our social landscape, they should therefore
engage in social interactions in a humanlike way, with increasing levels of
autonomy. In particular, nonverbal gestures are expected to endow social robots
with the capability of emphasizing their speech, or showing their intentions.
Motivated by this, our research sheds light on modeling human behaviors in
social interactions, specifically, forecasting human nonverbal social signals
during dyadic interactions, with the overarching goal of developing robotic
interfaces that can learn to imitate human dyadic interactions. Such an
approach will ensure that the messages encoded in the robot's gestures can be
perceived by interacting partners in an easy and transparent manner, which
could improve the partners' perception of the robot and enhance the outcomes
of the social interaction.
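Although the abstract gives no implementation details, the approach named in the title (a generative adversarial network that forecasts one partner's future nonverbal signals from the observed behaviour of the dyad) can be illustrated with a minimal sketch. The module layout, feature dimensions, and use of GRU encoders below are assumptions for illustration, not the authors' architecture:

```python
# Minimal illustrative sketch (not the paper's model): a conditional GAN that
# forecasts the target person's future nonverbal features (e.g., pose/gesture
# vectors) from the observed behaviour of both partners. Dimensions are assumed.
import torch
import torch.nn as nn

FEAT_DIM = 36           # assumed per-frame nonverbal feature size
OBS_LEN, PRED_LEN = 30, 15
NOISE_DIM, HID = 32, 128

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Encode the observed sequences of both interactants (concatenated per frame).
        self.encoder = nn.GRU(2 * FEAT_DIM, HID, batch_first=True)
        # Decode a future sequence for the target person, conditioned on noise.
        self.decoder = nn.GRU(NOISE_DIM, HID, batch_first=True)
        self.out = nn.Linear(HID, FEAT_DIM)

    def forward(self, obs, z):
        # obs: (B, OBS_LEN, 2*FEAT_DIM), z: (B, PRED_LEN, NOISE_DIM)
        _, h = self.encoder(obs)        # summary of the interaction so far
        dec, _ = self.decoder(z, h)     # roll out PRED_LEN future steps
        return self.out(dec)            # (B, PRED_LEN, FEAT_DIM)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FEAT_DIM, HID, batch_first=True)
        self.cls = nn.Linear(HID, 1)

    def forward(self, seq):
        # seq: (B, PRED_LEN, FEAT_DIM), either real or generated future motion
        _, h = self.rnn(seq)
        return self.cls(h[-1])          # real/fake logit

# Standard adversarial training step (generic GAN losses, not paper-specific):
G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
obs = torch.randn(8, OBS_LEN, 2 * FEAT_DIM)       # placeholder dyadic observation
real_future = torch.randn(8, PRED_LEN, FEAT_DIM)  # placeholder ground-truth motion
z = torch.randn(8, PRED_LEN, NOISE_DIM)
fake_future = G(obs, z)
d_loss = bce(D(real_future), torch.ones(8, 1)) + bce(D(fake_future.detach()), torch.zeros(8, 1))
g_loss = bce(D(fake_future), torch.ones(8, 1))
```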
Related papers
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Socially Integrated Navigation: A Social Acting Robot with Deep Reinforcement Learning [0.7864304771129751]
Mobile robots are being used on a large scale in various crowded situations and are becoming part of our society.
Socially acceptable navigation behavior of a mobile robot with individual human consideration is an essential requirement for scalable applications and human acceptance.
We propose a novel socially integrated navigation approach where the robot's social behavior is adaptive and emerges from the interaction with humans.
arXiv Detail & Related papers (2024-03-14T18:25:40Z)
- Developing Social Robots with Empathetic Non-Verbal Cues Using Large Language Models [2.5489046505746704]
We design and label four types of empathetic non-verbal cues, abbreviated as SAFE: Speech, Action (gesture), Facial expression, and Emotion, in a social robot.
Preliminary results show distinct patterns in the robot's responses, such as a preference for calm and positive social emotions like 'joy' and 'lively', and frequent nodding gestures.
Our work lays the groundwork for future studies on human-robot interactions, emphasizing the essential role of both verbal and non-verbal cues in creating social and empathetic robots.
arXiv Detail & Related papers (2023-08-31T08:20:04Z)
- SACSoN: Scalable Autonomous Control for Social Navigation [62.59274275261392]
We develop methods for training policies for socially unobtrusive navigation.
By minimizing this counterfactual perturbation, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
We collect a large dataset where an indoor mobile robot interacts with human bystanders.
arXiv Detail & Related papers (2023-06-02T19:07:52Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are that the most frequently used nonverbal cue, computational method, interaction environment, and sensing approach are speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones and cameras, respectively.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- A MultiModal Social Robot Toward Personalized Emotion Interaction [1.2183405753834562]
This study demonstrates a multimodal human-robot interaction (HRI) framework with reinforcement learning to enhance the robotic interaction policy.
The goal is to apply this framework in social scenarios so that robots can generate more natural and engaging interactions.
arXiv Detail & Related papers (2021-10-08T00:35:44Z)
- Disambiguating Affective Stimulus Associations for Robot Perception and Dialogue [67.89143112645556]
We provide a NICO robot with the ability to learn the associations between a perceived auditory stimulus and an emotional expression.
NICO is able to do this for both individual subjects and specific stimuli, with the aid of an emotion-driven dialogue system.
The robot is then able to use this information to determine a subject's enjoyment of perceived auditory stimuli in a real HRI scenario.
arXiv Detail & Related papers (2021-03-05T20:55:48Z)
- PHASE: PHysically-grounded Abstract Social Events for Machine Social Perception [50.551003004553806]
We create a dataset of physically-grounded abstract social events, PHASE, that resemble a wide range of real-life social interactions.
PHASE is validated with human experiments demonstrating that humans perceive rich interactions in the social events.
As a baseline model, we introduce a Bayesian inverse planning approach, SIMPLE, which outperforms state-of-the-art feed-forward neural networks.
arXiv Detail & Related papers (2021-03-02T18:44:57Z)
- From Learning to Relearning: A Framework for Diminishing Bias in Social Robot Navigation [3.3511723893430476]
We argue that social navigation models can replicate, promote, and amplify societal unfairness such as discrimination and segregation.
Our proposed framework consists of two components: learning, which incorporates social context into the learning process to account for safety and comfort, and relearning, which detects and corrects potentially harmful outcomes before they occur.
arXiv Detail & Related papers (2021-01-07T17:42:35Z)
- Affect-Driven Modelling of Robot Personality for Collaborative Human-Robot Interactions [16.40684407420441]
Collaborative interactions require social robots to adapt to the dynamics of human affective behaviour.
We propose a novel framework for personality-driven behaviour generation in social robots.
arXiv Detail & Related papers (2020-10-14T16:34:14Z)