Natural Emergence of Heterogeneous Strategies in Artificially
Intelligent Competitive Teams
- URL: http://arxiv.org/abs/2007.03102v1
- Date: Mon, 6 Jul 2020 22:35:56 GMT
- Title: Natural Emergence of Heterogeneous Strategies in Artificially
Intelligent Competitive Teams
- Authors: Ankur Deka and Katia Sycara
- Abstract summary: We develop a competitive multi-agent environment called FortAttack in which two teams compete against each other.
We observe a natural emergence of heterogeneous behavior amongst homogeneous agents when such behavior can lead to the team's success.
We propose ensemble training, in which we utilize the evolved opponent strategies to train a single policy for friendly agents.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent strategies in mixed cooperative-competitive environments can be
hard to craft by hand because each agent needs to coordinate with its teammates
while competing with its opponents. Learning-based algorithms are appealing, but
many scenarios require heterogeneous agent behavior for the team's success, and
this increases the complexity of the learning algorithm. In this work, we
develop a competitive multi-agent environment called FortAttack in which two
teams compete against each other. We corroborate that modeling agents with
Graph Neural Networks and training them with Reinforcement Learning leads to
the evolution of increasingly complex strategies for each team. We observe a
natural emergence of heterogeneous behavior amongst homogeneous agents when
such behavior can lead to the team's success. Such heterogeneous behavior from
homogeneous agents is appealing because any agent can replace the role of
another agent at test time. Finally, we propose ensemble training, in which we
utilize the evolved opponent strategies to train a single policy for friendly
agents.
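As a concrete illustration of the modeling choice described above, the following is a minimal sketch of a shared, permutation-invariant GNN policy of the kind the abstract alludes to. The architecture, class name, and dimensions are illustrative assumptions, not the authors' implementation: every agent on a team runs the same weights over its own egocentric view of the other entities, which is what allows homogeneous agents to fill heterogeneous roles interchangeably.

```python
# Hypothetical sketch, not the paper's architecture: a shared policy in
# which each agent applies identical weights over a graph of entity
# observations. Heterogeneous behavior can still emerge because each
# agent conditions on its own local view.
import torch
import torch.nn as nn

class SharedGNNPolicy(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 64, n_actions: int = 5):
        super().__init__()
        self.embed = nn.Linear(obs_dim, hidden)       # per-entity encoder
        self.msg = nn.Linear(hidden, hidden)          # message function
        self.head = nn.Linear(2 * hidden, n_actions)  # policy head

    def forward(self, self_obs, others_obs):
        # self_obs: (batch, obs_dim); others_obs: (batch, n_others, obs_dim)
        h_self = torch.relu(self.embed(self_obs))
        h_others = torch.relu(self.embed(others_obs))
        # One round of message passing on a fully connected graph:
        # mean-pool messages from all other entities (teammates and opponents).
        pooled = torch.relu(self.msg(h_others)).mean(dim=1)
        logits = self.head(torch.cat([h_self, pooled], dim=-1))
        return torch.distributions.Categorical(logits=logits)

# All agents on a team share one policy instance, so any agent can take
# over another agent's role at test time.
policy = SharedGNNPolicy(obs_dim=8)
dist = policy(torch.randn(4, 8), torch.randn(4, 9, 8))  # 4 agents, 9 other entities each
actions = dist.sample()  # one discrete action per agent
```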
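The proposed ensemble training can likewise be sketched as a loop that samples each episode's opponent from a pool of frozen strategies saved as they evolved. `play_episode` and `update` below are hypothetical stand-ins for an environment rollout and an RL update step, not an API from the paper.

```python
# Hedged sketch of the ensemble-training idea from the abstract, under
# assumed interfaces: train a single friendly policy against opponents
# drawn from a pool of previously evolved, frozen strategies.
import random
from typing import Any, Callable, List

def ensemble_train(
    play_episode: Callable[[Any, Any], Any],  # (friendly_policy, opponent) -> trajectory
    update: Callable[[Any, Any], Any],        # (policy, trajectory) -> updated policy
    policy: Any,                              # the single friendly policy being trained
    opponent_pool: List[Any],                 # frozen checkpoints of evolved opponents
    n_episodes: int = 10_000,
) -> Any:
    """Train one friendly policy against opponents sampled from an ensemble."""
    for _ in range(n_episodes):
        opponent = random.choice(opponent_pool)  # pick an evolved strategy at random
        trajectory = play_episode(policy, opponent)
        policy = update(policy, trajectory)      # any on-policy RL update, e.g. PPO
    return policy
```

Sampling from the whole pool rather than only the latest opponent is what pushes the single friendly policy to remain robust against every previously evolved strategy.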
Related papers
- ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors [93.38830440346783]
We propose a multi-agent framework that can collaboratively adjust its composition as a greater-than-the-sum-of-its-parts system.
Our experiments demonstrate that the framework can effectively deploy multi-agent groups that outperform a single agent.
In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups.
arXiv Detail & Related papers (2023-08-21T16:47:11Z)
- Emergent collective intelligence from massive-agent cooperation and competition [19.75488604218965]
We study the emergence of artificial collective intelligence through massive-agent reinforcement learning.
We propose a new massive-agent reinforcement learning environment, Lux, where massive numbers of dynamic agents in two teams scramble for limited resources and fight off the darkness.
arXiv Detail & Related papers (2023-01-04T13:23:12Z)
- Exploring the Benefits of Teams in Multiagent Learning [5.334505575267924]
We propose a new model of multiagent teams for reinforcement learning (RL) agents inspired by organizational psychology (OP).
We find that agents divided into teams develop cooperative pro-social policies despite incentives to not cooperate.
Agents are better able to coordinate and learn emergent roles within their teams and achieve higher rewards compared to when the interests of all agents are aligned.
arXiv Detail & Related papers (2022-05-04T21:14:03Z)
- Coach-assisted Multi-Agent Reinforcement Learning Framework for Unexpected Crashed Agents [120.91291581594773]
We present a formal formulation of a cooperative multi-agent reinforcement learning system with unexpected crashes.
We propose a coach-assisted multi-agent reinforcement learning framework, which introduces a virtual coach agent to adjust the crash rate during training.
To the best of our knowledge, this work is the first to study unexpected crashes in multi-agent systems.
arXiv Detail & Related papers (2022-03-16T08:22:45Z)
- On-the-fly Strategy Adaptation for ad-hoc Agent Coordination [21.029009561094725]
Training agents in cooperative settings offers the promise of AI agents able to interact effectively with humans (and other agents) in the real world.
To date, the vast majority of focus has been on the self-play paradigm, which can yield agents that coordinate poorly with partners they have not trained with.
This paper proposes to solve this problem by adapting agent strategies on the fly, using a posterior belief over the other agents' strategies.
arXiv Detail & Related papers (2022-03-08T02:18:11Z)
- Conditional Imitation Learning for Multi-Agent Games [89.897635970366]
We study the problem of conditional multi-agent imitation learning, where we have access to joint trajectory demonstrations at training time.
We propose a novel approach to address the difficulties of scalability and data scarcity.
Our model learns a low-rank subspace over ego and partner agent strategies, then infers and adapts to a new partner strategy by interpolating in the subspace.
arXiv Detail & Related papers (2022-01-05T04:40:13Z)
- Generating and Adapting to Diverse Ad-Hoc Cooperation Agents in Hanabi [4.777698073163644]
In Hanabi, coordinated groups of players can leverage pre-established conventions to great effect, but playing in an ad-hoc setting requires agents to adapt to their partners' strategies with no prior coordination.
This paper proposes Quality Diversity algorithms as a promising class of algorithms to generate diverse populations for this purpose.
We also postulate that agents can benefit from a diverse population during training and implement a simple "meta-strategy" for adapting to an agent's perceived behavioral niche.
arXiv Detail & Related papers (2020-04-28T05:03:19Z)
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents, in order to evaluate how the agents learn to be competitive and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
- On Emergent Communication in Competitive Multi-Agent Teams [116.95067289206919]
We investigate whether competition for performance from an external, similar agent team could act as a social influence that encourages multi-agent populations to develop better communication protocols.
Our results show that an external competitive influence leads to improved accuracy and generalization, as well as faster emergence of communicative languages.
arXiv Detail & Related papers (2020-03-04T01:14:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.