Safe Multimodal Communication in Human-Robot Collaboration
- URL: http://arxiv.org/abs/2308.03690v1
- Date: Mon, 7 Aug 2023 16:08:21 GMT
- Title: Safe Multimodal Communication in Human-Robot Collaboration
- Authors: Davide Ferrari, Andrea Pupa, Alberto Signoretti, Cristian Secchi
- Abstract summary: We propose a framework that enables multi-channel communication between humans and robots by leveraging multimodal fusion of voice and gesture commands.
The framework is validated through a comparative experiment, demonstrating that, thanks to multimodal communication, the robot can extract valuable information for performing the required task.
- Score: 12.688356318251763
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: New industrial settings are characterized by humans and robots
working in close proximity, cooperating to perform the required job. Such
collaboration, however, requires attention to many aspects. First, it is
crucial to enable communication between these two actors that is natural and
efficient. Second, the robot's behavior must always comply with safety
regulations, ensuring safe collaboration at all times. In this paper, we
propose a framework that enables multi-channel communication between humans
and robots by leveraging multimodal fusion of voice and gesture commands
while always respecting safety regulations. The framework is validated
through a comparative experiment, demonstrating that, thanks to multimodal
communication, the robot can extract valuable information for performing the
required task and, additionally, with the safety layer, the robot can scale
its speed to ensure the operator's safety.
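The paper does not give the scaling law used by its safety layer, but speed scaling with human-robot separation is commonly implemented as a linear ramp between a stop distance and a full-speed distance (in the spirit of speed-and-separation monitoring). A minimal sketch, with all parameter names and threshold values chosen for illustration rather than taken from the paper:

```python
def scaled_speed(distance_m: float, v_max: float = 1.0,
                 d_stop: float = 0.3, d_full: float = 1.5) -> float:
    """Scale the robot's commanded speed with human-robot separation.

    Below d_stop the robot halts; beyond d_full it runs at v_max;
    in between, speed ramps linearly. Values are illustrative only.
    """
    if distance_m <= d_stop:
        return 0.0
    if distance_m >= d_full:
        return v_max
    return v_max * (distance_m - d_stop) / (d_full - d_stop)
```

In practice the thresholds and maximum speed would be derived from the applicable safety regulations and the robot's stopping performance, not hard-coded as here.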
Related papers
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It also allows the human operator to adjust the control ratio to achieve a trade-off between manual and automated control.
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - Safety Control of Service Robots with LLMs and Embodied Knowledge Graphs [12.787160626087744]
We propose a novel integration of Large Language Models with Embodied Robotic Control Prompts (ERCPs) and Embodied Knowledge Graphs (EKGs)
ERCPs are designed as predefined instructions that ensure LLMs generate safe and precise responses.
EKGs provide a comprehensive knowledge base ensuring that the actions of the robot are continuously aligned with safety protocols.
arXiv Detail & Related papers (2024-05-28T05:50:25Z) - Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z) - Proactive Human-Robot Co-Assembly: Leveraging Human Intention Prediction
and Robust Safe Control [10.973115127845224]
This paper presents an integrated framework for proactive human-robot collaboration.
A robust intention prediction module is learned to guide the robot for efficient collaboration.
The developed framework is applied to a co-assembly task using a Kinova Gen3 robot.
arXiv Detail & Related papers (2023-06-20T19:42:30Z) - Improving safety in physical human-robot collaboration via deep metric
learning [36.28667896565093]
Direct physical interaction with robots is becoming increasingly important in flexible production scenarios.
In order to keep the risk potential low, relatively simple measures are prescribed for operation, such as stopping the robot if there is physical contact or if a safety distance is violated.
This work uses the Deep Metric Learning (DML) approach to distinguish between non-contact robot movement, intentional contact aimed at physical human-robot interaction, and collision situations.
arXiv Detail & Related papers (2023-02-23T11:26:51Z) - SERA: Safe and Efficient Reactive Obstacle Avoidance for Collaborative
Robotic Planning in Unstructured Environments [1.5229257192293197]
We propose a novel methodology for reactive whole-body obstacle avoidance.
Our approach allows a robotic arm to proactively avoid obstacles of arbitrary 3D shapes without direct contact.
Our methodology provides a robust and effective solution for safe human-robot collaboration in non-stationary environments.
arXiv Detail & Related papers (2022-03-24T21:11:43Z) - Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and
Robotics Together [68.44697646919515]
This paper presents several human-robot systems that utilize spatial computing to enable novel robot use cases.
The combination of spatial computing and egocentric sensing on mixed reality devices enables them to capture and understand human actions and translate these to actions with spatial meaning.
arXiv Detail & Related papers (2022-02-03T10:04:26Z) - Show Me What You Can Do: Capability Calibration on Reachable Workspace
for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z) - Learning Multi-Arm Manipulation Through Collaborative Teleoperation [63.35924708783826]
Imitation Learning (IL) is a powerful paradigm to teach robots to perform manipulation tasks.
Many real-world tasks require multiple arms, such as lifting a heavy object or assembling a desk.
We present Multi-Arm RoboTurk (MART), a multi-user data collection platform that allows multiple remote users to simultaneously teleoperate a set of robotic arms.
arXiv Detail & Related papers (2020-12-12T05:43:43Z) - With Whom to Communicate: Learning Efficient Communication for
Multi-Robot Collision Avoidance [17.18628401523662]
This paper presents an efficient communication method that solves the problem of "when" and with "whom" to communicate in multi-robot collision avoidance scenarios.
In this approach, every robot learns to reason about other robots' states and considers the risk of future collisions before asking for the trajectory plans of other robots.
arXiv Detail & Related papers (2020-09-25T09:49:22Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot
Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the generated explanations of our approach significantly improves the collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.