Forming Human-Robot Cooperation for Tasks with General Goal using Evolutionary Value Learning
- URL: http://arxiv.org/abs/2012.10773v3
- Date: Tue, 30 Mar 2021 16:47:28 GMT
- Title: Forming Human-Robot Cooperation for Tasks with General Goal using Evolutionary Value Learning
- Authors: Lingfeng Tao, Michael Bowman, Jiucai Zhang, Xiaoli Zhang
- Abstract summary: In Human-Robot Cooperation (HRC), the robot and the human work together to accomplish a task.
Existing approaches assume the human has a specific goal during the cooperation, and the robot infers and acts toward it.
We present the Evolutionary Value Learning (EVL) approach to model the dynamics of the goal specification process in HRC.
- Score: 9.053709318841232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Human-Robot Cooperation (HRC), the robot and the human work together to
accomplish a task. Existing approaches assume the human has a specific goal
during the cooperation, and the robot infers and acts toward it. However, in
real-world environments, a human usually has only a general goal (e.g., a
general direction or area in motion planning) at the beginning of the
cooperation, which needs to be refined into a specific goal (e.g., an exact
position) as the cooperation proceeds. This specification process is
interactive and dynamic, depending on the environment and the partners'
behavior. A robot that does not account for the goal specification process may
frustrate its human partner, prolong the time needed to reach an agreement,
and compromise or even fail the team's performance. We present the Evolutionary Value Learning (EVL)
approach, which uses a State-based Multivariate Bayesian Inference method to
model the dynamics of the goal specification process in HRC. EVL can actively
enhance the process of goal specification and cooperation formation. This
enables the robot to simultaneously help the human specify the goal and learn a
cooperative policy via Deep Reinforcement Learning (DRL). In a dynamic
ball balancing task with real human subjects, the robot equipped with EVL
outperforms existing methods with faster goal specification processes and
better team performance.
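For intuition, the core of the goal specification mechanism can be sketched as a Bayesian belief update over candidate specific goals inside the human's general goal region. The following is a minimal sketch under assumed names and values, not the paper's State-based Multivariate Bayesian Inference implementation.

```python
import numpy as np

# Minimal sketch (illustrative, not the authors' code): maintain a belief
# over candidate specific goals within the human's general goal region and
# refine it from observed human actions via Bayes' rule.

def update_goal_belief(prior: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """One Bayesian update: posterior is proportional to likelihood times prior."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Five candidate goal positions inside the general goal area, uniform prior.
belief = np.full(5, 0.2)
# Assumed likelihood of the observed human action under each candidate goal.
likelihood = np.array([0.05, 0.15, 0.45, 0.25, 0.10])

belief = update_goal_belief(belief, likelihood)
print(belief.round(3))  # belief concentrates on the goal the human steers toward
```

In EVL, such an evolving belief would in turn shape the value the robot learns with DRL, so goal specification and policy learning can drive each other.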
Related papers
- COHERENT: Collaboration of Heterogeneous Multi-Robot System with Large Language Models [49.24666980374751]
COHERENT is a novel LLM-based task planning framework for collaboration of heterogeneous multi-robot systems.
A Proposal-Execution-Feedback-Adjustment mechanism is designed to decompose and assign actions for individual robots.
The experimental results show that our work surpasses the previous methods by a large margin in terms of success rate and execution efficiency.
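As a rough illustration of the Proposal-Execution-Feedback-Adjustment pattern mentioned above (the loop structure and the toy propose/execute stand-ins are assumptions, not COHERENT's actual interface):

```python
# Hypothetical sketch of a Proposal-Execution-Feedback-Adjustment loop.
# All names and the toy propose/execute functions below are illustrative.

def pefa_loop(propose, execute, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):
        plan = propose(feedback)      # proposal / adjustment step
        ok, feedback = execute(plan)  # execution and feedback step
        if ok:
            return plan
    raise RuntimeError("no feasible plan within the round budget")

# Toy stand-ins: the proposer reassigns the action once feedback arrives.
def propose(feedback):
    return "robot2: pick box" if feedback else "robot1: pick box"

def execute(plan):
    ok = plan.startswith("robot2")
    return ok, None if ok else "robot1 cannot reach the box"

print(pefa_loop(propose, execute))  # -> "robot2: pick box"
```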
arXiv Detail & Related papers (2024-09-23T15:53:41Z)
- LIT: Large Language Model Driven Intention Tracking for Proactive Human-Robot Collaboration -- A Robot Sous-Chef Application [4.519544934630495]
Large Language Models (LLM) and Vision Language Models (VLM) enable robots to ground natural language prompts into control actions.
We propose Language-driven Intention Tracking (LIT) to model the human user's long-term behavior and to predict the next human intention to guide the robot for proactive collaboration.
arXiv Detail & Related papers (2024-06-19T19:18:40Z)
- Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Model (LLM)-based human-agent collaboration for complex task solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
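A toy version of such an intervention policy is sketched below; the features, weights, and threshold are illustrative assumptions, and in ReHAC the policy is learned with reinforcement learning rather than hand-set.

```python
import numpy as np

# Illustrative sketch only: score the current task-solving stage and decide
# whether to hand control to the human. The weights stand in for parameters
# that a method like ReHAC would learn with RL.

def should_intervene(stage_features, weights, threshold=0.5):
    """Logistic policy: intervene when the predicted benefit is high."""
    score = 1.0 / (1.0 + np.exp(-stage_features @ weights))
    return bool(score > threshold)

weights = np.array([1.2, -0.4, 0.8])     # stand-in for learned parameters
stage = np.array([0.9, 0.2, 0.6])        # e.g., uncertainty, cost, stakes
print(should_intervene(stage, weights))  # True -> request human intervention
```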
arXiv Detail & Related papers (2024-02-20T11:03:36Z)
- ThinkBot: Embodied Instruction Following with Thought Chain Reasoning [66.09880459084901]
Embodied Instruction Following (EIF) requires agents to complete human instructions by interacting with objects in complex surrounding environments.
We propose ThinkBot, which reasons over the thought chain in human instructions to recover missing action descriptions.
Our ThinkBot outperforms the state-of-the-art EIF methods by a sizable margin in both success rate and execution efficiency.
arXiv Detail & Related papers (2023-12-12T08:30:09Z)
- Proactive Human-Robot Interaction using Visuo-Lingual Transformers [0.0]
Humans possess the innate ability to extract latent visuo-lingual cues to infer context through human interaction.
We propose a learning-based method that uses visual cues from the scene, lingual commands from a user and knowledge of prior object-object interaction to identify and proactively predict the underlying goal the user intends to achieve.
arXiv Detail & Related papers (2023-10-04T00:50:21Z)
- Coordination with Humans via Strategy Matching [5.072077366588174]
We present an algorithm for autonomously recognizing available task-completion strategies by observing human-human teams performing a collaborative task.
By transforming team actions into low-dimensional representations using hidden Markov models, we can identify strategies without prior knowledge.
Robot policies are learned on each of the identified strategies to construct a Mixture-of-Experts model that adapts to the task strategies of unseen human partners.
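A minimal sketch of the HMM step, assuming the hmmlearn library and synthetic data in place of recorded team actions:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

# Minimal sketch with synthetic data: fit an HMM to a team-action trajectory
# and treat the inferred hidden states as low-dimensional strategy labels.
# The feature dimension and component count are illustrative assumptions.
rng = np.random.default_rng(0)
team_actions = rng.normal(size=(300, 4))  # stand-in for a recorded trajectory

hmm = GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
hmm.fit(team_actions)
strategies = hmm.predict(team_actions)    # per-step latent strategy label
print(np.bincount(strategies))            # usage counts per latent strategy
```

A robot policy trained per identified strategy would then supply the experts of the Mixture-of-Experts model described above.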
arXiv Detail & Related papers (2022-10-27T01:00:50Z)
- Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration [51.268988527778276]
We present a method for learning a human-robot collaboration policy from human-human collaboration demonstrations.
Our method co-optimizes a human policy and a robot policy in an interactive learning process.
arXiv Detail & Related papers (2021-08-13T03:14:43Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and the user's perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
- Deployment and Evaluation of a Flexible Human-Robot Collaboration Model Based on AND/OR Graphs in a Manufacturing Environment [2.3848738964230023]
A major bottleneck in effectively deploying collaborative robots in manufacturing is the development of task planning algorithms.
A pick-and-place palletization task, which requires the collaboration between humans and robots, is investigated.
The results of this study demonstrate how human-robot collaboration models like the one we propose can leverage the flexibility and the comfort of operators in the workplace.
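For reference, an AND/OR graph encodes tasks whose AND nodes require all children and whose OR nodes require any one, which is what lets such a model absorb different human choices. The sketch below is a minimal illustration; the node names and the palletization example are assumptions, not the deployed system.

```python
from dataclasses import dataclass, field

# Minimal sketch of an AND/OR graph for task planning: an AND node needs all
# children satisfied, an OR node needs any one, and a leaf is an action.

@dataclass
class Node:
    name: str
    kind: str = "leaf"              # "and" | "or" | "leaf"
    children: list = field(default_factory=list)

    def satisfied(self, done: set) -> bool:
        if self.kind == "leaf":
            return self.name in done
        results = (child.satisfied(done) for child in self.children)
        return all(results) if self.kind == "and" else any(results)

# Palletize = (human picks OR robot picks) AND place on pallet.
task = Node("palletize", "and", [
    Node("pick", "or", [Node("human_pick"), Node("robot_pick")]),
    Node("place_on_pallet"),
])
print(task.satisfied({"robot_pick", "place_on_pallet"}))  # True
```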
arXiv Detail & Related papers (2020-07-13T22:05:34Z)
- Learn Task First or Learn Human Partner First: A Hierarchical Task Decomposition Method for Human-Robot Cooperation [11.387868752604986]
This work proposes a novel task decomposition method with a hierarchical reward mechanism that enables the robot to learn the hierarchical dynamic control task separately from learning the human partner's behavior.
The results show that the robot should learn the task first to achieve higher team performance, and learn the human partner first to achieve higher learning efficiency.
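One way to picture such a hierarchical reward is a schedule that shifts emphasis between the control sub-task and the human partner; the weighting below is an assumed illustration, not the paper's actual mechanism.

```python
# Illustrative sketch: reward the robot separately for the dynamic control
# sub-task and for coordinating with the human, with a phase-dependent
# emphasis. The weights and phase names are assumptions.

def hierarchical_reward(task_reward, human_reward, phase):
    w = 0.9 if phase == "task_first" else 0.1  # which sub-goal to stress
    return w * task_reward + (1.0 - w) * human_reward

print(hierarchical_reward(1.0, 0.5, "task_first"))   # 0.95
print(hierarchical_reward(1.0, 0.5, "human_first"))  # 0.55
```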
arXiv Detail & Related papers (2020-03-01T04:41:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.