Gaze-based intention estimation: principles, methodologies, and applications in HRI
- URL: http://arxiv.org/abs/2302.04530v1
- Date: Thu, 9 Feb 2023 09:44:13 GMT
- Title: Gaze-based intention estimation: principles, methodologies, and applications in HRI
- Authors: Anna Belardinelli
- Abstract summary: This review aims to draw a connecting line between insights in the psychological literature on visuomotor control and relevant applications of gaze-based intention recognition.
The use of eye tracking and gaze-based models for intent recognition in Human-Robot Interaction is considered.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Intention prediction has become a relevant field of research in Human-Machine
and Human-Robot Interaction. Indeed, any artificial system (co)-operating with
and alongside humans, designed to assist and coordinate its actions with a human
partner, would benefit from first inferring the human's current intention. To
spare the user the cognitive burden of explicitly uttering their goals, this
inference relies mostly on behavioral cues deemed indicative of the current
action. It has long been known that eye movements are highly anticipatory of
the single steps unfolding during a task, hence they can serve as a very early
and reliable behavioural cue for intention recognition. This review aims to
draw a connecting line between insights in the psychological literature on visuomotor
control and relevant applications of gaze-based intention recognition in
technical domains, with a focus on teleoperated and assistive robotic systems.
Starting from the cognitive principles underlying the relationship between
intentions, eye movements, and action, the use of eye tracking and gaze-based
models for intent recognition in Human-Robot Interaction is considered, with
prevalent methodologies and their diverse applications. Finally, special
consideration is given to relevant human factors issues and current limitations
to be factored in when designing such systems.
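To make the surveyed idea concrete, here is a minimal, purely illustrative sketch (not taken from the paper) of one simple family of gaze-based intent models: each fixation is treated as naive-Bayes evidence for candidate target objects, so the intended target can be predicted well before the reaching movement ends. The object layout, the Gaussian dispersion parameter sigma, the uniform prior, and the function name update_intent_posterior are all assumptions made for the example.

```python
import math

def update_intent_posterior(prior, fixation, objects, sigma=50.0):
    """One naive-Bayes update of P(intended target | fixations so far).

    prior     -- dict: object name -> current probability
    fixation  -- (x, y) gaze point in pixels
    objects   -- dict: object name -> (x, y) centroid in pixels
    sigma     -- assumed gaze dispersion in pixels (illustrative value)
    """
    posterior = {}
    for name, (ox, oy) in objects.items():
        dist2 = (fixation[0] - ox) ** 2 + (fixation[1] - oy) ** 2
        likelihood = math.exp(-dist2 / (2.0 * sigma ** 2))
        posterior[name] = prior[name] * likelihood
    norm = sum(posterior.values()) or 1.0
    return {name: p / norm for name, p in posterior.items()}

# Toy run: three candidate objects, uniform prior, and a fixation
# sequence drifting toward the cup before any reach is completed.
objects = {"cup": (100, 200), "phone": (400, 220), "book": (250, 400)}
belief = {name: 1.0 / len(objects) for name in objects}
for fix in [(260, 350), (180, 260), (120, 210), (105, 205)]:
    belief = update_intent_posterior(belief, fix, objects)
print(max(belief, key=belief.get), belief)  # "cup" dominates
```

Gaze-based models in the literature are typically richer (dwell times, scan paths, learned likelihoods rather than a fixed Gaussian), but the anticipatory use of fixations sketched here is the core idea.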
Related papers
- Anticipation through Head Pose Estimation: a preliminary study [0.2209921757303168]
We discuss a preliminary experiment on the use of head pose as a visual cue to understand and anticipate action goals.
We will show that short-range anticipation is possible, laying the foundations for future applications to human-robot interaction.
arXiv Detail & Related papers (2024-08-10T10:58:33Z)
- CoNav: A Benchmark for Human-Centered Collaborative Navigation [66.6268966718022]
We propose a collaborative navigation (CoNav) benchmark.
Our CoNav tackles the critical challenge of constructing a 3D navigation environment with realistic and diverse human activities.
We propose an intention-aware agent for reasoning both long-term and short-term human intention.
arXiv Detail & Related papers (2024-06-04T15:44:25Z)
- Human Goal Recognition as Bayesian Inference: Investigating the Impact of Actions, Timing, and Goal Solvability [7.044125601403849]
We use a Bayesian framework to explore the role of actions, timing, and goal solvability in goal recognition.
Our work provides new insight into human goal recognition and takes a step towards more human-like AI models.
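As a purely illustrative sketch of such a Bayesian treatment (assumed, not taken from this paper): the posterior over goals is the prior reweighted by how likely each observed action is under each goal, P(g | a_1..a_t) ∝ P(g) ∏ P(a_i | g). The goal names, action labels, and probabilities below are invented for the example.

```python
def goal_posterior(prior, likelihoods, observed_actions):
    """Posterior over goals given an observed action sequence (sketch).

    prior            -- dict: goal -> prior probability
    likelihoods      -- dict: goal -> {action: P(action | goal)}
    observed_actions -- list of observed action labels
    """
    scores = {}
    for goal, p in prior.items():
        for action in observed_actions:
            p *= likelihoods[goal].get(action, 1e-6)  # small floor for actions unlikely under this goal
        scores[goal] = p
    norm = sum(scores.values()) or 1.0
    return {goal: s / norm for goal, s in scores.items()}

# Toy run with two hypothetical goals and two observed actions.
prior = {"make_tea": 0.5, "make_coffee": 0.5}
likelihoods = {
    "make_tea": {"grab_kettle": 0.6, "grab_mug": 0.3, "grab_teabag": 0.5},
    "make_coffee": {"grab_kettle": 0.4, "grab_mug": 0.3, "grab_filter": 0.5},
}
print(goal_posterior(prior, likelihoods, ["grab_kettle", "grab_teabag"]))
# -> make_tea close to 1.0, since grab_teabag is essentially ruled out under make_coffee
```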
arXiv Detail & Related papers (2024-02-16T08:55:23Z)
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- Perception for Humanoid Robots [10.560498559084449]
This review summarizes the recent developments and trends in the field of perception in humanoid robots.
Three main areas of application are identified, namely, internal state estimation, external environment estimation, and human-robot interaction.
arXiv Detail & Related papers (2023-09-27T12:32:11Z)
- Towards Objective Evaluation of Socially-Situated Conversational Robots: Assessing Human-Likeness through Multimodal User Behaviors [26.003947740875482]
This paper focuses on assessing the human-likeness of the robot as the primary evaluation metric.
Our approach aims to evaluate the robot's human-likeness indirectly, based on observable user behaviors, thus enhancing objectivity.
arXiv Detail & Related papers (2023-08-21T20:21:07Z)
- Didn't see that coming: a survey on non-verbal social human behavior forecasting [47.99589136455976]
Non-verbal social human behavior forecasting has increasingly attracted the interest of the research community in recent years.
Its direct applications to human-robot interaction and socially-aware human motion generation make it a very attractive field.
We define the behavior forecasting problem for multiple interactive agents in a generic way that aims at unifying the fields of social signals prediction and human motion forecasting.
arXiv Detail & Related papers (2022-03-04T18:25:30Z)
- Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the field that studies the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)