Generalizable Human-Robot Collaborative Assembly Using Imitation
Learning and Force Control
- URL: http://arxiv.org/abs/2212.01434v1
- Date: Fri, 2 Dec 2022 20:35:55 GMT
- Title: Generalizable Human-Robot Collaborative Assembly Using Imitation
Learning and Force Control
- Authors: Devesh K. Jha, Siddarth Jain, Diego Romeres, William Yerazunis and
Daniel Nikovski
- Abstract summary: We present a system for human-robot collaborative assembly using learning from demonstration and pose estimation.
The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario.
- Score: 17.270360447188196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robots have been steadily increasing their presence in our daily lives, where
they work alongside humans to provide assistance in various tasks on industry
floors, in offices, and in homes. Automated assembly is one of the key
applications of robots, and the next generation of assembly systems could
become much more efficient through collaborative human-robot setups. However,
although collaborative robots have been around for decades, their use in truly
collaborative systems has been limited. This is because a truly collaborative
human-robot system needs to adjust its operation to the uncertainty and
imprecision of human actions and to ensure safety during interaction. In this
paper, we present a system for human-robot collaborative assembly that uses
learning from demonstration and pose estimation, so that the robot can adapt
to the uncertainty introduced by human operation. Learning from demonstration
is used to generate motion trajectories for the robot based on pose estimates
of the goal locations produced by a deep learning-based vision system. The
proposed system is demonstrated on a physical 6-DoF manipulator in a
collaborative human-robot assembly scenario. Across various experiments, we
show that the system generalizes successfully to changes in the initial and
final goal locations.
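The abstract does not specify which learning-from-demonstration representation the system uses, so the following is only a minimal sketch of one standard way to realize such a pipeline: a Dynamic Movement Primitive (DMP) encodes a single demonstrated trajectory and is then replayed toward whatever goal pose the vision system estimates. The DMP choice itself, the class names, and all gains are illustrative assumptions, not details from the paper.

```python
import numpy as np

class DMP1D:
    """One-dimensional discrete Dynamic Movement Primitive."""

    def __init__(self, n_basis=20, alpha_z=25.0, alpha_s=4.0):
        self.n_basis = n_basis
        self.alpha_z = alpha_z                # spring gain
        self.beta_z = alpha_z / 4.0           # damper gain (critical damping)
        self.alpha_s = alpha_s                # canonical-system decay rate
        # Basis centers spaced evenly in phase; widths from a common heuristic.
        self.centers = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))
        self.widths = n_basis ** 1.5 / self.centers / alpha_s

    def fit(self, demo, dt):
        """Encode a single demonstrated 1-D position profile."""
        x0, g = demo[0], demo[-1]
        tau = (len(demo) - 1) * dt
        dx = np.gradient(demo, dt)
        ddx = np.gradient(dx, dt)
        s = np.exp(-self.alpha_s * np.arange(len(demo)) * dt / tau)
        # Forcing term that makes the spring-damper reproduce the demo.
        f_target = tau ** 2 * ddx - self.alpha_z * (
            self.beta_z * (g - demo) - tau * dx)
        psi = np.exp(-self.widths * (s[:, None] - self.centers) ** 2)
        scale = s * (g - x0) + 1e-8
        # Locally weighted regression: one weight per basis function.
        self.w = np.array([
            (scale * psi[:, i] * f_target).sum()
            / ((scale ** 2 * psi[:, i]).sum() + 1e-10)
            for i in range(self.n_basis)])
        self.tau = tau
        return self

    def rollout(self, x0, g, dt):
        """Integrate toward a (possibly new) goal g; returns positions."""
        x, dx, s, traj = float(x0), 0.0, 1.0, [float(x0)]
        while s > np.exp(-self.alpha_s):      # run for one nominal duration
            psi = np.exp(-self.widths * (s - self.centers) ** 2)
            f = s * (g - x0) * (psi @ self.w) / (psi.sum() + 1e-10)
            ddx = (self.alpha_z * (self.beta_z * (g - x) - self.tau * dx)
                   + f) / self.tau ** 2
            dx += ddx * dt
            x += dx * dt
            s += -self.alpha_s / self.tau * s * dt
            traj.append(x)
        return np.array(traj)

# Example: encode a synthetic demo, then replay toward a shifted goal (one
# such DMP would run per Cartesian dimension of the estimated goal pose).
dt = 0.01
t = np.linspace(0.0, 1.0, 101)
demo = np.sin(0.5 * np.pi * t)               # synthetic 1-D demonstration
dmp = DMP1D().fit(demo, dt)
traj = dmp.rollout(x0=0.0, g=1.5, dt=dt)     # new goal from the vision system
```

Because the goal enters the spring-damper term directly, the same encoded demonstration generalizes to shifted start and goal poses, which is the kind of generalization the abstract reports.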
Related papers
- HARMONIC: Cognitive and Control Collaboration in Human-Robotic Teams [0.0]
We demonstrate a cognitive strategy for robots in human-robot teams that incorporates metacognition, natural language communication, and explainability.
The system is embodied using the HARMONIC architecture that flexibly integrates cognitive and control capabilities.
arXiv Detail & Related papers (2024-09-26T16:48:21Z)
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
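As a rough illustration of what sharing control of an end-effector can look like (the summary above does not give the paper's actual arbitration rule), here is a minimal linear-blending sketch; the confidence signal and the max_assist cap are hypothetical:

```python
import numpy as np

def blend_commands(human_cmd, agent_cmd, agent_confidence, max_assist=0.8):
    """Linearly arbitrate Cartesian end-effector velocity commands.

    human_cmd, agent_cmd: desired twists, shape (6,) = (vx, vy, vz, wx, wy, wz).
    agent_confidence: scalar in [0, 1], e.g. the assistive policy's certainty
    about the operator's goal (an assumed signal, not from the paper).
    """
    alpha = max_assist * float(np.clip(agent_confidence, 0.0, 1.0))
    return (1.0 - alpha) * np.asarray(human_cmd) + alpha * np.asarray(agent_cmd)
```

Capping the assistance weight below 1.0 keeps the human in the loop at all times, which is one common way to reduce the adaptation burden the summary mentions.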
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- Robot Interaction Behavior Generation based on Social Motion Forecasting for Human-Robot Interaction [9.806227900768926]
We propose to model social motion forecasting in a shared human-robot representation space.
ECHO operates in the aforementioned shared space to predict the future motions of the agents encountered in social scenarios.
We evaluate our model on multi-person and human-robot motion forecasting tasks, where it outperforms the state of the art by a large margin.
arXiv Detail & Related papers (2024-02-07T11:37:14Z)
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- Improving safety in physical human-robot collaboration via deep metric learning [36.28667896565093]
Direct physical interaction with robots is becoming increasingly important in flexible production scenarios.
In order to keep the risk potential low, relatively simple measures are prescribed for operation, such as stopping the robot if there is physical contact or if a safety distance is violated.
This work uses the Deep Metric Learning (DML) approach to distinguish between non-contact robot movement, intentional contact aimed at physical human-robot interaction, and collision situations.
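The summary names the general recipe (Deep Metric Learning over the three contact classes) but not the architecture or loss, so the sketch below fills those in with common assumptions: a small embedding network over windows of force/torque readings trained with a triplet margin loss, with classification at test time by nearest class centroid.

```python
import torch
import torch.nn as nn

class ContactEmbedder(nn.Module):
    """Maps a window of 6-axis force/torque samples to a unit-norm embedding."""

    def __init__(self, window=64, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                     # [B, 6, window] -> [B, 6*window]
            nn.Linear(6 * window, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)

# Triplet loss pulls windows of the same contact class together and pushes
# the three classes (no contact / intentional contact / collision) apart.
triplet_loss = nn.TripletMarginLoss(margin=0.5)

def training_step(model, optimizer, anchor, positive, negative):
    """anchor and positive share a contact class; negative is another class."""
    optimizer.zero_grad()
    loss = triplet_loss(model(anchor), model(positive), model(negative))
    loss.backward()
    optimizer.step()
    return loss.item()
```

At run time, an incoming window would be embedded and assigned to the nearest class centroid computed from labeled training embeddings; the margin, window length, and layer sizes are placeholders rather than the paper's reported configuration.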
arXiv Detail & Related papers (2023-02-23T11:26:51Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- CoGrasp: 6-DoF Grasp Generation for Human-Robot Collaboration [0.0]
We propose a novel, deep neural network-based method called CoGrasp that generates human-aware robot grasps.
In real robot experiments, our method achieves about 88% success rate in producing stable grasps.
Our approach enables a safe, natural, and socially aware co-grasping experience between humans and robots.
arXiv Detail & Related papers (2022-10-06T19:23:25Z)
- Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and Robotics Together [68.44697646919515]
This paper presents several human-robot systems that utilize spatial computing to enable novel robot use cases.
The combination of spatial computing and egocentric sensing on mixed reality devices enables them to capture and understand human actions and translate these to actions with spatial meaning.
arXiv Detail & Related papers (2022-02-03T10:04:26Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and the user's perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.