Human-Robot Team Coordination with Dynamic and Latent Human Task
Proficiencies: Scheduling with Learning Curves
- URL: http://arxiv.org/abs/2007.01921v2
- Date: Thu, 9 Jul 2020 02:40:57 GMT
- Title: Human-Robot Team Coordination with Dynamic and Latent Human Task
Proficiencies: Scheduling with Learning Curves
- Authors: Ruisen Liu, Manisha Natarajan, and Matthew Gombolay
- Abstract summary: We introduce a novel resource coordination algorithm that enables robots to explore the relative strengths and learning abilities of their human teammates.
We generate and evaluate a robust schedule while discovering the latent proficiency of individual workers.
Results indicate that scheduling strategies favoring exploration tend to be beneficial for human-robot collaboration.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As robots become ubiquitous in the workforce, it is essential that
human-robot collaboration be both intuitive and adaptive. A robot's quality
improves based on its ability to explicitly reason about the time-varying (i.e.
learning curves) and stochastic capabilities of its human counterparts, and
adjust the joint workload to improve efficiency while factoring human
preferences. We introduce a novel resource coordination algorithm that enables
robots to explore the relative strengths and learning abilities of their human
teammates, by constructing schedules that are robust to stochastic and
time-varying human task performance. We first validate our algorithmic approach
using data we collected from a user study (n = 20), showing we can quickly
generate and evaluate a robust schedule while discovering the latent individual
worker proficiency. Second, we conduct a between-subjects experiment (n = 90)
to validate the efficacy of our coordinating algorithm. Results from the
human-subjects experiment indicate that scheduling strategies favoring
exploration tend to be beneficial for human-robot collaboration, as they improve
team fluency (p = 0.0438) while also maximizing team efficiency (p < 0.001).
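The abstract does not spell out the scheduling algorithm itself; a minimal sketch of the exploration-favoring idea, assuming a hypothetical exponential learning curve for worker proficiency and a UCB-style exploration bonus for under-sampled worker-task pairings:

```python
import math

def expected_time(t0, t_inf, rate, n_done):
    """Exponential learning curve: completion time decays from an initial
    time t0 toward an asymptote t_inf as the worker repeats the task
    (hypothetical parametric form, not the paper's exact model)."""
    return t_inf + (t0 - t_inf) * math.exp(-rate * n_done)

def pick_assignment(workers, task, counts, est_times, total_rounds, c=1.0):
    """Choose a worker for `task` by an exploration-favoring (UCB-style)
    score: lower estimated completion time is better, minus a bonus that
    rewards rarely observed worker-task pairings."""
    best, best_score = None, float("inf")
    for w in workers:
        n = counts.get((w, task), 0)
        bonus = c * math.sqrt(math.log(total_rounds + 1) / (n + 1))
        score = est_times[(w, task)] - bonus  # explore under-sampled pairings
        if score < best_score:
            best, best_score = w, score
    return best
```

With equal time estimates, the bonus steers assignments toward the worker whose proficiency on the task is still latent, which is the exploration behavior the experiments found beneficial.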
Related papers
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Coordination with Humans via Strategy Matching [5.072077366588174]
We present an algorithm for autonomously recognizing available task-completion strategies by observing human-human teams performing a collaborative task.
By transforming team actions into low-dimensional representations using hidden Markov models, we can identify strategies without prior knowledge.
Robot policies are learned on each of the identified strategies to construct a Mixture-of-Experts model that adapts to the task strategies of unseen human partners.
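The Mixture-of-Experts step can be illustrated with a short sketch: given a posterior over identified strategies (e.g., inferred by the HMM from observed team actions) and one expert policy per strategy, the robot blends the experts' action scores. The interface below is hypothetical, not the paper's API:

```python
import numpy as np

def moe_action(strategy_posterior, expert_policies, observation):
    """Mixture-of-Experts action selection: weight each strategy-specific
    expert's action scores by the inferred probability of that strategy,
    then act greedily on the blended scores."""
    scores = sum(p * expert(observation)
                 for p, expert in zip(strategy_posterior, expert_policies))
    return int(np.argmax(scores))
```

As the posterior over an unseen partner's strategy sharpens, the blended policy smoothly shifts toward the matching expert.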
arXiv Detail & Related papers (2022-10-27T01:00:50Z) - Intuitive and Efficient Human-robot Collaboration via Real-time
Approximate Bayesian Inference [4.310882094628194]
Collaborative robots and end-to-end AI promise flexible automation of human tasks in factories and warehouses.
Humans and cobots will collaborate, helping each other.
For these collaborations to be effective and safe, robots need to model, predict, and exploit humans' intent.
arXiv Detail & Related papers (2022-05-17T23:04:44Z) - Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration [51.268988527778276]
We present a method for learning a human-robot collaboration policy from human-human collaboration demonstrations.
Our method co-optimizes a human policy and a robot policy in an interactive learning process.
arXiv Detail & Related papers (2021-08-13T03:14:43Z) - Show Me What You Can Do: Capability Calibration on Reachable Workspace
for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot
Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z) - Simultaneous Learning from Human Pose and Object Cues for Real-Time
Activity Recognition [11.290467061493189]
We propose a novel approach to real-time human activity recognition, through simultaneously learning from observations of both human poses and objects involved in the human activity.
Our method outperforms previous methods and obtains real-time performance for human activity recognition with a processing speed of 104 Hz.
arXiv Detail & Related papers (2020-03-26T22:04:37Z) - Human-robot co-manipulation of extended objects: Data-driven models and
control from analysis of human-human dyads [2.7036498789349244]
We use data from human-human dyad experiments to determine motion intent, which we use for a physical human-robot co-manipulation task.
We develop a deep neural network based on motion data from human-human trials to predict human intent based on past motion.
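The intent-prediction idea can be sketched without the paper's deep network: fit a predictor of the next position from a short window of past motion. The linear autoregressive model below is a simplified stand-in for the dyad-trained neural network, for illustration only:

```python
import numpy as np

def fit_motion_predictor(trajectory, window=3):
    """Fit a linear autoregressive predictor of the next position from the
    previous `window` positions via least squares (a linear stand-in for
    the paper's deep network)."""
    X = np.array([trajectory[i:i + window]
                  for i in range(len(trajectory) - window)])
    y = np.array(trajectory[window:])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(coef, recent):
    """Predict the next position from the most recent `window` positions."""
    return float(np.dot(coef, recent))
```

In a co-manipulation loop, the robot would feed the partner's recent motion into the predictor each control cycle and plan its own contribution around the predicted intent.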
arXiv Detail & Related papers (2020-01-03T21:23:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.