Automated Task-Time Interventions to Improve Teamwork using Imitation Learning
- URL: http://arxiv.org/abs/2303.00413v2
- Date: Thu, 2 Mar 2023 20:26:20 GMT
- Title: Automated Task-Time Interventions to Improve Teamwork using Imitation Learning
- Authors: Sangwon Seo, Bing Han and Vaibhav Unhelkar
- Abstract summary: We present TIC: an automated intervention approach for improving coordination between team members.
We first learn a generative model of team behavior from past task execution data.
Next, TIC uses the learned generative model and the team's task objective (shared reward) to algorithmically generate execution-time interventions.
- Score: 5.423490734916741
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective human-human and human-autonomy teamwork is critical but often
challenging to perfect. The challenge is particularly relevant in time-critical
domains, such as healthcare and disaster response, where the time pressures can
make coordination increasingly difficult to achieve and the consequences of
imperfect coordination can be severe. To improve teamwork in these and other
domains, we present TIC: an automated intervention approach for improving
coordination between team members. Using BTIL, a multi-agent imitation learning
algorithm, our approach first learns a generative model of team behavior from
past task execution data. Next, it utilizes the learned generative model and
the team's task objective (shared reward) to algorithmically generate
execution-time interventions. We evaluate our approach in synthetic multi-agent
teaming scenarios, where team members make decentralized decisions without full
observability of the environment. The experiments demonstrate that the
automated interventions can successfully improve team performance and shed
light on the design of autonomous agents for improving teamwork.
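The two-stage pipeline described in the abstract (learn a generative model of team behavior from past executions, then use that model together with the shared reward to decide when to intervene at execution time) can be sketched as follows. This is a minimal illustration only, not the paper's BTIL implementation: the function names, the frequency-based behavior model, and the reward-gap threshold are all assumptions made for exposition.

```python
from collections import Counter, defaultdict

def learn_team_model(trajectories):
    """Fit a simple generative model of team behavior: the empirical
    distribution over joint actions for each observed state.
    (A stand-in for BTIL, which additionally infers latent mental states.)"""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for state, joint_action in traj:
            counts[state][joint_action] += 1
    model = {}
    for state, action_counts in counts.items():
        total = sum(action_counts.values())
        model[state] = {a: n / total for a, n in action_counts.items()}
    return model

def intervene(model, state, shared_reward, threshold=0.1):
    """Decide whether an execution-time intervention is warranted.

    If the team's expected reward under its predicted behavior falls short
    of the best available joint action by more than `threshold`, recommend
    that better action; otherwise stay silent (return None)."""
    if state not in model:
        return None  # no behavior data for this state
    policy = model[state]
    expected = sum(p * shared_reward(state, a) for a, p in policy.items())
    best = max(policy, key=lambda a: shared_reward(state, a))
    if shared_reward(state, best) - expected > threshold:
        return best
    return None

# Toy usage: two teammates lifting an object; coordination succeeds
# only when both lift at once.
trajectories = [
    [("s0", ("lift", "lift"))],
    [("s0", ("lift", "lift"))],
    [("s0", ("lift", "wait"))],
]
reward = lambda s, a: 1.0 if a == ("lift", "lift") else 0.0
model = learn_team_model(trajectories)
suggestion = intervene(model, "s0", reward)  # recommends ("lift", "lift")
```

The key design point this sketch preserves is that interventions are selective: the monitor only speaks up when the gap between predicted and optimal team behavior is large enough to matter, which keeps interruptions rare in time-critical settings.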
Related papers
- Large Language Models for Orchestrating Bimanual Robots [19.60907949776435]
Large Language Models (LLMs) have been applied to control a variety of robotic tasks.
However, coordination in continuous space is a particular challenge for bimanual tasks.
We present LAnguage-model-based Bimanual ORchestration (LABOR) to analyze task configurations and devise coordination control policies.
arXiv Detail & Related papers (2024-04-02T15:08:35Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Model (LLM)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
arXiv Detail & Related papers (2024-02-20T11:03:36Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- A Reinforcement Learning-assisted Genetic Programming Algorithm for Team Formation Problem Considering Person-Job Matching [70.28786574064694]
A reinforcement learning-assisted genetic programming algorithm (RL-GP) is proposed to enhance the quality of solutions.
The hyper-heuristic rules obtained through efficient learning can be utilized as decision-making aids when forming project teams.
arXiv Detail & Related papers (2023-04-08T14:32:12Z)
- AdverSAR: Adversarial Search and Rescue via Multi-Agent Reinforcement Learning [4.843554492319537]
We propose an algorithm that allows robots to efficiently coordinate their strategies in the presence of adversarial inter-agent communications.
It is assumed that the robots have no prior knowledge of the target locations, and they can interact with only a subset of neighboring robots at any time.
The effectiveness of our approach is demonstrated on a collection of prototype grid-world environments.
arXiv Detail & Related papers (2022-12-20T08:13:29Z)
- Coordination with Humans via Strategy Matching [5.072077366588174]
We present an algorithm for autonomously recognizing available task-completion strategies by observing human-human teams performing a collaborative task.
By transforming team actions into low-dimensional representations using hidden Markov models, we can identify strategies without prior knowledge.
Robot policies are learned on each of the identified strategies to construct a Mixture-of-Experts model that adapts to the task strategies of unseen human partners.
arXiv Detail & Related papers (2022-10-27T01:00:50Z)
- Autonomous Open-Ended Learning of Tasks with Non-Stationary Interdependencies [64.0476282000118]
Intrinsic motivations have been shown to provide a task-agnostic signal for properly allocating training time among goals.
While the majority of works in intrinsically motivated open-ended learning focus on scenarios where goals are independent of each other, only a few have studied the autonomous acquisition of interdependent tasks.
In particular, we first deepen the analysis of a previous system, showing the importance of incorporating information about the relationships between tasks at a higher level of the architecture.
Then we introduce H-GRAIL, a new system that extends the previous one by adding a new learning layer to store the autonomously acquired sequences.
arXiv Detail & Related papers (2022-05-16T10:43:01Z)
- Competing Adaptive Networks [56.56653763124104]
We develop an algorithm for decentralized competition among teams of adaptive agents.
We present an application in the decentralized training of generative adversarial neural networks.
arXiv Detail & Related papers (2021-03-29T14:42:15Z)
- Human-Robot Team Coordination with Dynamic and Latent Human Task Proficiencies: Scheduling with Learning Curves [0.0]
We introduce a novel resource coordination approach that enables robots to explore the relative strengths and learning abilities of their human teammates.
We generate and evaluate a robust schedule while discovering latent individual worker proficiencies.
Results indicate that scheduling strategies favoring exploration tend to be beneficial for human-robot collaboration.
arXiv Detail & Related papers (2020-07-03T19:44:22Z)
- Towards Open Ad Hoc Teamwork Using Graph-based Policy Learning [11.480994804659908]
We build on graph neural networks to learn agent models and joint-action value models under varying team compositions.
We empirically demonstrate that our approach successfully models the effects other agents have on the learner, leading to policies that robustly adapt to dynamic team compositions.
arXiv Detail & Related papers (2020-06-18T10:39:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.