Anticipation through Head Pose Estimation: a preliminary study
- URL: http://arxiv.org/abs/2408.05516v1
- Date: Sat, 10 Aug 2024 10:58:33 GMT
- Title: Anticipation through Head Pose Estimation: a preliminary study
- Authors: Federico Figari Tomenotti, Nicoletta Noceti
- Abstract summary: We discuss a preliminary experiment on the use of head pose as a visual cue to understand and anticipate action goals.
We will show that short-range anticipation is possible, laying the foundations for future applications to human-robot interaction.
- Score: 0.2209921757303168
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The ability to anticipate others' goals and intentions is at the basis of human-human social interaction. Such ability, largely based on non-verbal communication, is also a key to having natural and pleasant interactions with artificial agents, like robots. In this work, we discuss a preliminary experiment on the use of head pose as a visual cue to understand and anticipate action goals, particularly reaching and transporting movements. By reasoning on the spatio-temporal connections between the head, hands and objects in the scene, we will show that short-range anticipation is possible, laying the foundations for future applications to human-robot interaction.
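The abstract describes the cue but not a concrete mechanism. As a rough illustration of how head pose could drive short-range target anticipation, the sketch below converts estimated head yaw/pitch into a viewing ray and scores candidate objects by their angular distance from that ray; the pose-to-ray convention, object list, and scoring rule are all illustrative assumptions, not the authors' pipeline, which also reasons over hands and temporal dynamics.

```python
import numpy as np

def head_ray(yaw: float, pitch: float) -> np.ndarray:
    """Unit viewing direction from head yaw/pitch in radians
    (camera-centric convention assumed; a real system would take
    these angles from an off-the-shelf head pose estimator)."""
    return np.array([
        np.cos(pitch) * np.sin(yaw),
        -np.sin(pitch),
        np.cos(pitch) * np.cos(yaw),
    ])

def anticipate_target(head_pos, yaw, pitch, objects):
    """Score each candidate object by the angle between the viewing
    ray and the head-to-object direction; return the closest match.
    A hypothetical stand-in for the paper's spatio-temporal reasoning."""
    ray = head_ray(yaw, pitch)
    best, best_angle = None, np.inf
    for name, pos in objects.items():
        direction = np.asarray(pos, dtype=float) - np.asarray(head_pos, dtype=float)
        direction /= np.linalg.norm(direction)
        angle = np.arccos(np.clip(ray @ direction, -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = name, angle
    return best, best_angle

# Toy scene: the head is turned slightly left, toward the cup.
objects = {"cup": [-0.3, 0.0, 1.0], "phone": [0.4, 0.0, 1.0]}
target, angle = anticipate_target([0.0, 0.0, 0.0], yaw=-0.3, pitch=0.0, objects=objects)
print(f"anticipated target: {target} ({np.degrees(angle):.1f} deg off-axis)")
```

In practice one would smooth such per-frame scores over a short window before committing to a prediction, which is where the short-range, temporal character of the anticipation comes in.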
Related papers
- An Epistemic Human-Aware Task Planner which Anticipates Human Beliefs and Decisions [8.309981857034902]
The aim is to build a robot policy that accounts for uncontrollable human behaviors.
We propose a novel planning framework and build a solver based on AND-OR search.
Preliminary experiments in two domains, one novel and one adapted, demonstrate the effectiveness of the framework.
arXiv Detail & Related papers (2024-09-27T08:27:36Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the implementation procedure and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- HandMeThat: Human-Robot Communication in Physical and Social Environments [73.91355172754717]
HandMeThat is a benchmark for a holistic evaluation of instruction understanding and following in physical and social environments.
HandMeThat contains 10,000 episodes of human-robot interactions.
We show that both offline and online reinforcement learning algorithms perform poorly on HandMeThat.
arXiv Detail & Related papers (2023-10-05T16:14:46Z)
- Proactive Human-Robot Interaction using Visuo-Lingual Transformers [0.0]
Humans possess the innate ability to extract latent visuo-lingual cues to infer context through human interaction.
We propose a learning-based method that uses visual cues from the scene, lingual commands from a user and knowledge of prior object-object interaction to identify and proactively predict the underlying goal the user intends to achieve.
arXiv Detail & Related papers (2023-10-04T00:50:21Z)
- Gaze-based intention estimation: principles, methodologies, and applications in HRI [0.0]
This review aims to draw a line from insights in the psychological literature on visuomotor control to relevant applications of gaze-based intention recognition.
The use of eye tracking and gaze-based models for intent recognition in Human-Robot Interaction is considered.
arXiv Detail & Related papers (2023-02-09T09:44:13Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate either carefulness or its absence during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- GIMO: Gaze-Informed Human Motion Prediction in Context [75.52839760700833]
We propose a large-scale human motion dataset that delivers high-quality body pose sequences, scene scans, and ego-centric views with eye gaze.
Our data collection is not tied to specific scenes, which broadens the range of motion dynamics observed from our subjects.
To realize the full potential of gaze, we propose a novel network architecture that enables bidirectional communication between the gaze and motion branches.
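The summary names a bidirectional gaze-motion design without detail; below is a minimal sketch of one plausible reading, two-way cross-attention in which each branch queries the other. The module, dimensions, and use of `nn.MultiheadAttention` are assumptions for illustration, not GIMO's actual architecture.

```python
import torch
import torch.nn as nn

class BidirectionalGazeMotion(nn.Module):
    """Two-way cross-attention between gaze and motion feature streams:
    gaze features attend over motion features and vice versa, with
    residual updates. An illustrative sketch, not GIMO's code."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.gaze_from_motion = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.motion_from_gaze = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, gaze: torch.Tensor, motion: torch.Tensor):
        # gaze: (B, T_gaze, dim), motion: (B, T_motion, dim)
        gaze_upd, _ = self.gaze_from_motion(gaze, motion, motion)    # gaze queries motion
        motion_upd, _ = self.motion_from_gaze(motion, gaze, gaze)    # motion queries gaze
        return gaze + gaze_upd, motion + motion_upd

# Toy shapes: batch of 2, 10 gaze frames, 30 pose frames.
fuse = BidirectionalGazeMotion()
g, m = fuse(torch.randn(2, 10, 128), torch.randn(2, 30, 128))
```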
arXiv Detail & Related papers (2022-04-20T13:17:39Z)
- Forecasting Nonverbal Social Signals during Dyadic Interactions with Generative Adversarial Neural Networks [0.0]
Successful social interaction is closely coupled with the interplay between nonverbal perception and action mechanisms.
Nonverbal gestures are expected to endow social robots with the capability of emphasizing their speech, or showing their intentions.
Our research sheds light on modeling human behavior in social interactions, specifically on forecasting nonverbal social signals during dyadic interactions.
arXiv Detail & Related papers (2021-10-18T15:01:32Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
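The summary states the goal without naming the kinematic features that signal required care; the toy heuristic below shows the general shape of such an estimator, mapping peak speed and jerk of a wrist trajectory to a care score. The features, thresholds, and the score itself are hypothetical; the paper derives its cues from observed human motor actions rather than hand-tuned rules.

```python
import numpy as np

def care_score(wrist_xyz: np.ndarray, fps: float = 30.0) -> float:
    """Map a wrist trajectory of shape (T, 3), in meters, to a care
    score in [0, 1]: slow, smooth transport (low peak speed, low jerk)
    reads as careful handling. Illustrative constants only."""
    speed = np.linalg.norm(np.diff(wrist_xyz, axis=0), axis=1) * fps  # m/s
    accel = np.diff(speed) * fps                                      # m/s^2
    jerk = np.abs(np.diff(accel)) * fps                               # m/s^3
    peak_speed = speed.max()
    mean_jerk = jerk.mean() if jerk.size else 0.0
    # Hypothetical normalization constants (1.5 m/s, 50 m/s^3).
    penalty = 0.5 * min(peak_speed / 1.5, 1.0) + 0.5 * min(mean_jerk / 50.0, 1.0)
    return 1.0 - penalty

# A slow, gentle 2-second transport along x, sampled at 30 fps.
t = np.linspace(0.0, 2.0, 61)
trajectory = np.stack(
    [0.3 * np.sin(np.pi * t / 2) ** 2, np.zeros_like(t), np.zeros_like(t)], axis=1
)
print(f"care score: {care_score(trajectory):.2f}")
```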
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Towards hybrid primary intersubjectivity: a neural robotics library for human science [4.232614032390374]
We study primary intersubjectivity as a second-person perspective experience characterized by predictive engagement.
We propose an open-source methodology named neural robotics library (NRL) for experimental human-robot interaction.
We discuss some ways human-robot (hybrid) intersubjectivity can contribute to human science research.
arXiv Detail & Related papers (2020-06-29T11:35:46Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.