Task Allocation with Load Management in Multi-Agent Teams
- URL: http://arxiv.org/abs/2207.08279v1
- Date: Sun, 17 Jul 2022 20:17:09 GMT
- Title: Task Allocation with Load Management in Multi-Agent Teams
- Authors: Haochen Wu, Amin Ghadami, Alparslan Emrah Bayrak, Jonathon M. Smereka,
and Bogdan I. Epureanu
- Abstract summary: We present a decision-making framework for multi-agent teams to learn task allocation with the consideration of load management.
We illustrate the effect of load management on team performance and explore agent behaviors in example scenarios.
A measure of agent importance in collaboration is developed to infer team resilience when facing potential overload situations.
- Score: 4.844411739015927
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In operations of multi-agent teams ranging from homogeneous robot swarms to
heterogeneous human-autonomy teams, unexpected events might occur. While
efficiency of operation for multi-agent task allocation problems is the primary
objective, it is essential that the decision-making framework is intelligent
enough to manage unexpected task load with limited resources. Otherwise,
operation effectiveness would drastically plummet with overloaded agents facing
unforeseen risks. In this work, we present a decision-making framework for
multi-agent teams to learn task allocation with the consideration of load
management through decentralized reinforcement learning, where idling is
encouraged and unnecessary resource usage is avoided. We illustrate the effect
of load management on team performance and explore agent behaviors in example
scenarios. Furthermore, a measure of agent importance in collaboration is
developed to infer team resilience when facing potential overload
situations.
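The abstract's core idea — encouraging idling and avoiding unnecessary resource usage during learning — can be illustrated with a minimal reward-shaping sketch. This is a hypothetical illustration, not the paper's actual reward function; the function name, the linear load penalty, and the idle bonus are all assumptions.

```python
# Hypothetical sketch: a per-agent shaped reward that trades off task
# completion against workload, so an agent prefers idling when no
# useful work is available (not the paper's actual formulation).
def load_managed_reward(task_reward: float, load: float,
                        acted: bool,
                        load_penalty: float = 0.5,
                        idle_bonus: float = 0.1) -> float:
    """Return a shaped reward for one agent at one time step.

    task_reward  -- reward from completing (part of) a task
    load         -- agent's current normalized workload in [0, 1]
    acted        -- whether the agent took on a task this step (False = idled)
    """
    reward = task_reward - load_penalty * load
    if not acted:
        reward += idle_bonus  # small bonus for conserving resources
    return reward
```

Under this kind of shaping, a fully loaded agent earns less from taking yet another task than a lightly loaded one, which is one simple way a decentralized learner could come to avoid overload.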
Related papers
- Agent-Oriented Planning in Multi-Agent Systems [54.429028104022066]
We propose a novel framework for agent-oriented planning in multi-agent systems, leveraging a fast task decomposition and allocation process.
We integrate a feedback loop into the proposed framework to further enhance the effectiveness and robustness of such a problem-solving process.
arXiv Detail & Related papers (2024-10-03T04:07:51Z) - Joint Intrinsic Motivation for Coordinated Exploration in Multi-Agent
Deep Reinforcement Learning [0.0]
We propose an approach for rewarding strategies where agents collectively exhibit novel behaviors.
JIM rewards joint trajectories based on a centralized measure of novelty designed to function in continuous environments.
Results show that joint exploration is crucial for solving tasks where the optimal strategy requires a high level of coordination.
arXiv Detail & Related papers (2024-02-06T13:02:00Z) - Optimizing delegation between human and AI collaborative agents [1.6114012813668932]
We train a delegating manager agent to make delegation decisions with respect to potential performance deficiencies.
Our framework learns through observations of team performance without restricting agents to matching dynamics.
Our results show our manager learns to perform delegation decisions with teams of agents operating under differing representations of the environment.
arXiv Detail & Related papers (2023-09-26T07:23:26Z) - ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the current state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - AgentVerse: Facilitating Multi-Agent Collaboration and Exploring
Emergent Behaviors [93.38830440346783]
We propose a multi-agent framework that can collaboratively adjust its composition as a greater-than-the-sum-of-its-parts system.
Our experiments demonstrate that the framework can effectively deploy multi-agent groups that outperform a single agent.
In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups.
arXiv Detail & Related papers (2023-08-21T16:47:11Z) - Multi-agent Deep Covering Skill Discovery [50.812414209206054]
We propose Multi-agent Deep Covering Option Discovery, which constructs the multi-agent options through minimizing the expected cover time of the multiple agents' joint state space.
Also, we propose a novel framework to adopt the multi-agent options in the MARL process.
We show that the proposed algorithm can effectively capture the agent interactions with the attention mechanism, successfully identify multi-agent options, and significantly outperform prior works using single-agent options or no options.
arXiv Detail & Related papers (2022-10-07T00:40:59Z) - LDSA: Learning Dynamic Subtask Assignment in Cooperative Multi-Agent
Reinforcement Learning [122.47938710284784]
We propose a novel framework for learning dynamic subtask assignment (LDSA) in cooperative MARL.
To reasonably assign agents to different subtasks, we propose an ability-based subtask selection strategy.
We show that LDSA learns reasonable and effective subtask assignment for better collaboration.
arXiv Detail & Related papers (2022-05-05T10:46:16Z) - Hierarchically Structured Scheduling and Execution of Tasks in a
Multi-Agent Environment [1.0660480034605238]
In a warehouse environment, tasks appear dynamically. Consequently, a task management system that matches them with the workforce too early is necessarily sub-optimal.
We propose to use deep reinforcement learning to solve both the high-level scheduling problem and the low-level multi-agent problem of schedule execution.
arXiv Detail & Related papers (2022-03-06T18:11:34Z) - DSDF: An approach to handle stochastic agents in collaborative
multi-agent reinforcement learning [0.0]
We show how the stochasticity of agents, which could be a result of malfunction or aging of robots, can add to the uncertainty in coordination.
Our solution, DSDF, tunes the discount factor for the agents according to uncertainty and uses the values to update the utility networks of individual agents.
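The DSDF idea of tuning a per-agent discount factor by uncertainty can be sketched minimally. The linear scaling below is an assumption for illustration only; the paper's actual tuning rule and utility-network update are not reproduced here.

```python
# Hypothetical sketch of per-agent discount tuning: an agent whose
# behavior is more uncertain (e.g., due to malfunction or aging) gets a
# smaller discount factor, weighting long-horizon returns less.
def tuned_discount(base_gamma: float, uncertainty: float) -> float:
    """Scale a base discount factor by (1 - uncertainty), with
    uncertainty clipped to [0, 1]. The linear form is an assumption."""
    u = min(max(uncertainty, 0.0), 1.0)
    return base_gamma * (1.0 - u)

def td_target(reward: float, next_value: float, gamma: float) -> float:
    """One-step TD target in which the tuned gamma would be used when
    updating an individual agent's utility network."""
    return reward + gamma * next_value
```

A reliable agent (uncertainty near 0) keeps its base discount, while a highly uncertain one effectively becomes myopic in its value updates.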
arXiv Detail & Related papers (2021-09-14T12:02:28Z) - Polynomial-Time Algorithms for Multi-Agent Minimal-Capacity Planning [19.614913673879474]
We study the problem of minimizing the resource capacity of autonomous agents cooperating to achieve a shared task.
In a consumption Markov decision process, the agent has a resource of limited capacity.
We develop an algorithm that solves this graph problem in time that is polynomial in the number of agents, target locations, and size of the consumption Markov decision process.
arXiv Detail & Related papers (2021-05-04T00:30:02Z) - Randomized Entity-wise Factorization for Multi-Agent Reinforcement
Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.