Gaze Detection and Analysis for Initiating Joint Activity in Industrial
Human-Robot Collaboration
- URL: http://arxiv.org/abs/2312.06643v3
- Date: Thu, 1 Feb 2024 18:05:36 GMT
- Title: Gaze Detection and Analysis for Initiating Joint Activity in Industrial
Human-Robot Collaboration
- Authors: Pooja Prajod, Matteo Lavit Nicora, Marta Mondellini, Giovanni Tauro,
Rocco Vertechy, Matteo Malosio, Elisabeth André
- Abstract summary: A potential approach to improve the collaboration experience involves adapting cobot behavior based on natural cues from the operator.
Inspired by the literature on human-human interactions, we conducted a wizard-of-oz study to examine whether a gaze towards the cobot can serve as a trigger for initiating joint activities.
- Score: 3.775062086401102
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collaborative robots (cobots) are widely used in industrial applications, yet
extensive research is still needed to enhance human-robot collaborations and
operator experience. A potential approach to improve the collaboration
experience involves adapting cobot behavior based on natural cues from the
operator. Inspired by the literature on human-human interactions, we conducted
a wizard-of-oz study to examine whether a gaze towards the cobot can serve as a
trigger for initiating joint activities in collaborative sessions. In this
study, 37 participants engaged in an assembly task while their gaze behavior
was analyzed. We employ a gaze-based attention recognition model to identify
when the participants look at the cobot. Our results indicate that in most
cases (84.88%), the joint activity is preceded by a gaze towards the cobot.
Furthermore, during the entire assembly cycle, the participants tend to look at
the cobot around the time of the joint activity. To the best of our knowledge,
this is the first study to analyze the natural gaze behavior of participants
working on a joint activity with a robot during a collaborative assembly task.
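
To make the reported analysis concrete, here is a minimal sketch (not the authors' code) of how such a check could be implemented: given per-frame attention labels from a gaze-based recognition model and the frame indices of joint-activity onsets, it computes the fraction of onsets preceded by a gaze towards the cobot. The frame rate, look-back window, and all names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): given per-frame attention labels
# and the frame indices of joint-activity onsets, compute the fraction of
# onsets preceded by at least one gaze-at-cobot frame. The frame rate and
# look-back window are illustrative assumptions.
from typing import List

FPS = 30          # assumed camera frame rate
WINDOW_S = 3.0    # assumed look-back window before each activity onset

def fraction_preceded_by_gaze(looking_at_cobot: List[bool],
                              activity_onsets: List[int],
                              fps: int = FPS,
                              window_s: float = WINDOW_S) -> float:
    """Fraction of onsets with a gaze-at-cobot frame in the window."""
    window = int(window_s * fps)
    hits = 0
    for onset in activity_onsets:
        start = max(0, onset - window)
        if any(looking_at_cobot[start:onset]):
            hits += 1
    return hits / len(activity_onsets) if activity_onsets else 0.0

# Toy example: one joint activity at frame 120, gaze at frames 100-110.
labels = [False] * 300
for frame in range(100, 111):
    labels[frame] = True
print(fraction_preceded_by_gaze(labels, [120]))  # -> 1.0
```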
Related papers
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
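As a toy illustration of the setting described here (not the paper's algorithm), two self-interested REINFORCE learners in the prisoner's dilemma typically drift towards mutual defection; the payoffs and learning rate below are assumptions.

```python
# Toy illustration of the setting, not the paper's algorithm: two
# self-interested REINFORCE learners in the prisoner's dilemma drift
# towards mutual defection. Payoffs and learning rate are assumptions.
import math
import random

# my_payoff indexed by (my_action, opponent_action); 0 = cooperate, 1 = defect
PAYOFF = {(0, 0): 3.0, (0, 1): 0.0, (1, 0): 5.0, (1, 1): 1.0}

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

theta = [0.0, 0.0]  # per-agent logit of P(cooperate)
lr = 0.1
for _ in range(5000):
    probs = [sigmoid(t) for t in theta]
    acts = [0 if random.random() < p else 1 for p in probs]
    for i in range(2):
        reward = PAYOFF[(acts[i], acts[1 - i])]
        # d/dtheta log P(action): (1 - p) for cooperate, -p for defect
        grad = (1.0 - probs[i]) if acts[i] == 0 else -probs[i]
        theta[i] += lr * reward * grad

print([round(sigmoid(t), 3) for t in theta])  # P(cooperate) near 0 for both
```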
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- Enabling Multi-Robot Collaboration from Single-Human Guidance [5.016558275355615]
We propose an efficient way of learning collaborative behaviors in multi-agent systems by leveraging expertise from only a single human.
We show that agents can effectively learn to collaborate by allowing a human operator to dynamically switch between controlling agents for a short period.
Our experiments showed that our method improves the success rate of a challenging collaborative hide-and-seek task by up to 58% with only 40 minutes of human guidance.
arXiv Detail & Related papers (2024-09-30T00:02:56Z)
- Understanding Entrainment in Human Groups: Optimising Human-Robot Collaboration from Lessons Learned during Human-Human Collaboration [7.670608800568494]
Successful entrainment during collaboration positively affects trust, willingness to collaborate, and likeability towards collaborators.
This paper contributes to Human-Computer/Robot Interaction (HCI/HRI) research, using a human-centred approach to identify characteristics of entrainment during pair- and group-based collaboration.
arXiv Detail & Related papers (2024-02-23T16:42:17Z)
- Gaze-based Attention Recognition for Human-Robot Collaboration [0.0]
We present an assembly scenario where a human operator and a cobot collaborate equally to piece together a gearbox.
As a first step, we recognize the areas in the workspace that the human operator is paying attention to.
We propose a novel deep-learning approach to develop an attention recognition model.
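A minimal sketch of what such a model could look like (not the authors' architecture): a small CNN that maps an eye-region crop to one of several workspace attention areas. Class names, input size, and layer choices are illustrative assumptions.

```python
# Minimal sketch, not the authors' model: a small CNN mapping an eye-region
# crop to one of several workspace attention areas. Class names, input
# size, and architecture are illustrative assumptions.
import torch
import torch.nn as nn

AREAS = ["cobot", "parts_tray", "assembly_area", "elsewhere"]  # assumed

class AttentionAreaNet(nn.Module):
    def __init__(self, n_classes: int = len(AREAS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, 64, 64) eye-region crops
        return self.head(self.features(x))

model = AttentionAreaNet()
logits = model(torch.randn(1, 3, 64, 64))
print(AREAS[int(logits.argmax(dim=1))])  # predicted attention area
```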
arXiv Detail & Related papers (2023-03-30T11:55:38Z)
- The role of haptic communication in dyadic collaborative object manipulation tasks [6.46682752231823]
We investigate the role of haptics in human collaborative physical tasks.
We present a task to balance a ball at a target position on a board.
We find that humans can better coordinate with one another when haptic feedback is available.
arXiv Detail & Related papers (2022-03-02T18:13:54Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the field that explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- A proxemics game between festival visitors and an industrial robot [1.2599533416395767]
Nonverbal behaviours of collaboration partners in human-robot teams influence the experience of the human interaction partners.
During the Ars Electronica 2020 Festival for Art, Technology and Society (Linz, Austria), we invited visitors to interact with an industrial robot.
We investigated general nonverbal behaviours of the humans interacting with the robot, as well as nonverbal behaviours of people in the audience.
arXiv Detail & Related papers (2021-05-28T13:26:00Z)
- Joint Attention for Multi-Agent Coordination and Social Learning [108.31232213078597]
We show that joint attention can be useful as a mechanism for improving multi-agent coordination and social learning.
Joint attention leads to higher performance than a competitive centralized critic baseline across multiple environments.
Taken together, these findings suggest that joint attention may be a useful inductive bias for multi-agent learning.
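One hedged way to operationalise this idea (not necessarily the paper's exact formulation) is a reward bonus that grows as two agents' attention distributions over entities align, measured below by negative Jensen-Shannon divergence; the bonus weight is an assumption.

```python
# Hedged sketch of the idea, not the paper's exact method: a reward bonus
# that grows as two agents' attention distributions over entities align,
# measured here by negative Jensen-Shannon divergence. The bonus weight
# is an assumption.
import numpy as np

def js_divergence(p: np.ndarray, q: np.ndarray) -> float:
    m = 0.5 * (p + q)
    def kl(a: np.ndarray, b: np.ndarray) -> float:
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def joint_attention_bonus(att_a, att_b, weight: float = 0.1) -> float:
    """Higher (closer to 0) when the two attention distributions align."""
    return -weight * js_divergence(np.asarray(att_a), np.asarray(att_b))

# Two agents attending over four entities: aligned vs. mismatched focus.
print(joint_attention_bonus([0.7, 0.1, 0.1, 0.1], [0.6, 0.2, 0.1, 0.1]))
print(joint_attention_bonus([0.7, 0.1, 0.1, 0.1], [0.1, 0.1, 0.1, 0.7]))
```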
arXiv Detail & Related papers (2021-04-15T20:14:19Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
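The underlying computation can be illustrated with a hedged sketch (not the REMP system itself): sampling joint configurations of a planar two-link arm and collecting end-effector positions approximates the reachable workspace that would be shown to a user. Link lengths and joint limits are assumptions.

```python
# Hedged sketch of the underlying computation, not the REMP system: sample
# joint configurations of a planar two-link arm and collect end-effector
# positions to approximate the reachable workspace shown to a user.
# Link lengths and joint limits are illustrative assumptions.
import math
import random

L1, L2 = 0.4, 0.3                            # assumed link lengths (m)
LIMITS = [(-math.pi, math.pi), (-2.5, 2.5)]  # assumed joint limits (rad)

def forward_kinematics(q1: float, q2: float):
    """End-effector (x, y) of a planar 2-link arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

points = [forward_kinematics(random.uniform(*LIMITS[0]),
                             random.uniform(*LIMITS[1]))
          for _ in range(10_000)]
max_reach = max(math.hypot(x, y) for x, y in points)
print(f"approx. max reach: {max_reach:.2f} m")  # close to L1 + L2 = 0.7
```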
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and users' perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
- Detecting Human-Object Interactions with Action Co-occurrence Priors [108.31956827512376]
A common problem in human-object interaction (HOI) detection task is that numerous HOI classes have only a small number of labeled examples.
We observe that there exist natural correlations and anti-correlations among human-object interactions.
We present techniques to learn these priors and leverage them for more effective training, especially in rare classes.
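A minimal sketch of the general idea (not the paper's method): estimate a co-occurrence prior from multi-label annotations and blend it into per-action scores so rare actions borrow evidence from correlated frequent ones. The mixing coefficient is an assumption.

```python
# Minimal sketch of the general idea, not the paper's method: estimate a
# co-occurrence prior from multi-label annotations and blend it into
# per-action scores so rare actions borrow evidence from correlated
# frequent ones. The mixing coefficient alpha is an assumption.
import numpy as np

def cooccurrence_prior(labels: np.ndarray) -> np.ndarray:
    """labels: (N, A) binary matrix -> estimated P(action j | action i)."""
    counts = labels.T @ labels                 # (A, A) joint counts
    diag = np.clip(np.diagonal(counts), 1, None)
    return counts / diag[:, None]              # row-normalised

def smooth_scores(scores: np.ndarray, prior: np.ndarray,
                  alpha: float = 0.3) -> np.ndarray:
    """Blend raw scores with scores propagated through the prior."""
    propagated = scores @ prior / (prior.sum(axis=0) + 1e-9)
    return (1.0 - alpha) * scores + alpha * propagated

labels = np.array([[1, 1, 0], [1, 1, 0], [1, 0, 1], [0, 0, 1]])
prior = cooccurrence_prior(labels)
print(smooth_scores(np.array([0.9, 0.1, 0.2]), prior))
```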
arXiv Detail & Related papers (2020-07-17T02:47:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.