Emergent collective intelligence from massive-agent cooperation and competition
- URL: http://arxiv.org/abs/2301.01609v2
- Date: Thu, 5 Jan 2023 06:05:28 GMT
- Title: Emergent collective intelligence from massive-agent cooperation and competition
- Authors: Hanmo Chen, Stone Tao, Jiaxin Chen, Weihan Shen, Xihui Li, Chenghui Yu, Sikai Cheng, Xiaolong Zhu, Xiu Li
- Abstract summary: We study the emergence of artificial collective intelligence through massive-agent reinforcement learning.
We propose a new massive-agent reinforcement learning environment, Lux, where dynamic and massive agents in two teams scramble for limited resources and fight off the darkness.
- Score: 19.75488604218965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by organisms evolving through cooperation and competition between
different populations on Earth, we study the emergence of artificial collective
intelligence through massive-agent reinforcement learning. To this end, we
propose a new massive-agent reinforcement learning environment, Lux, where
dynamic and massive agents in two teams scramble for limited resources and
fight off the darkness. In Lux, we build our agents through the standard
reinforcement learning algorithm in curriculum learning phases and leverage
centralized control via a pixel-to-pixel policy network. As agents co-evolve
through self-play, we observe several stages of intelligence, from the
acquisition of atomic skills to the development of group strategies. Since
these learned group strategies arise from individual decisions without an
explicit coordination mechanism, we claim that artificial collective
intelligence emerges from massive-agent cooperation and competition. We further
analyze the emergence of various learned strategies through metrics and
ablation studies, aiming to provide insights for reinforcement learning
implementations in massive-agent environments.
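The centralized, pixel-to-pixel control described in the abstract lends itself to a fully convolutional network that reads the whole map and emits an action distribution for every cell, so group behavior arises from per-cell decisions without an explicit coordination channel. Below is a minimal sketch of such a network in PyTorch; the channel counts, map size, and action space are illustrative assumptions, not the paper's actual architecture.

```python
# A minimal sketch of a pixel-to-pixel policy network in PyTorch.
# Channel counts, map size, and action space are illustrative
# assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class PixelToPixelPolicy(nn.Module):
    """Maps a per-cell observation map to per-cell action logits,
    so one centralized network controls every unit on the board."""
    def __init__(self, obs_channels: int = 17, n_actions: int = 7, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(obs_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.action_head = nn.Conv2d(hidden, n_actions, kernel_size=1)
        self.value_head = nn.Conv2d(hidden, 1, kernel_size=1)

    def forward(self, obs: torch.Tensor):
        # obs: (batch, obs_channels, H, W)
        h = self.body(obs)
        logits = self.action_head(h)              # (batch, n_actions, H, W)
        value = self.value_head(h).mean((2, 3))   # crude global value estimate
        return logits, value

policy = PixelToPixelPolicy()
obs = torch.randn(1, 17, 32, 32)                  # one 32x32 map
logits, value = policy(obs)
actions = logits.argmax(dim=1)                    # greedy per-cell actions
```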
Related papers
- Multi-agent cooperation through learning-aware policy gradients [53.63948041506278]
Self-interested individuals often fail to cooperate, posing a fundamental challenge for multi-agent learning.
We present the first unbiased, higher-derivative-free policy gradient algorithm for learning-aware reinforcement learning.
We derive from the iterated prisoner's dilemma a novel explanation for how and when cooperation arises among self-interested learning-aware agents.
arXiv Detail & Related papers (2024-10-24T10:48:42Z)
- Collaborative AI Teaming in Unknown Environments via Active Goal Deduction [22.842601384114058]
Existing approaches for training collaborative agents often require defined and known reward signals.
We propose a teaming-with-unknown-agents framework that leverages a kernel density Bayesian inverse learning method for active goal deduction.
We prove that unbiased reward estimates in our framework are sufficient for optimal teaming with unknown agents.
arXiv Detail & Related papers (2024-03-22T16:50:56Z)
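To make the kernel density idea concrete, here is a hedged sketch of goal deduction: fit a Gaussian KDE to states sampled under each candidate goal, score the teammate's observed states under each density, and form a posterior over goals. The goals, bandwidth, and simulated data are assumptions for illustration, not the paper's method.

```python
# A hedged sketch of kernel-density-based goal deduction, NOT the
# paper's algorithm: score candidate goals by how well a Gaussian KDE
# fitted to goal-conditioned samples explains observed teammate states.
import numpy as np

def kde_log_likelihood(samples: np.ndarray, query: np.ndarray, bandwidth: float = 0.5) -> float:
    """Log-likelihood of `query` points under a Gaussian KDE fit to `samples`."""
    diffs = query[:, None, :] - samples[None, :, :]      # (m, n, d)
    sq = (diffs ** 2).sum(-1) / (2 * bandwidth ** 2)
    d = samples.shape[1]
    log_norm = -0.5 * d * np.log(2 * np.pi * bandwidth ** 2)
    log_k = log_norm - sq                                # per-kernel log densities
    return float(np.sum(np.logaddexp.reduce(log_k, axis=1) - np.log(samples.shape[0])))

rng = np.random.default_rng(0)
candidate_goals = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]  # hypothetical goals
# States an agent pursuing each goal might visit (here: noise around the goal).
sims = [g + rng.normal(scale=1.0, size=(200, 2)) for g in candidate_goals]
observed = np.array([[4.2, 4.8], [4.9, 5.3], [5.1, 4.6]])       # teammate observations

log_liks = np.array([kde_log_likelihood(s, observed) for s in sims])
posterior = np.exp(log_liks - log_liks.max())
posterior /= posterior.sum()                                     # uniform prior
print("posterior over goals:", posterior)  # should favor the goal near (5, 5)
```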
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Mathematics of multi-agent learning systems at the interface of game theory and artificial intelligence [0.8049333067399385]
Evolutionary Game Theory and Artificial Intelligence are two fields that, at first glance, might seem distinct, but they have notable connections and intersections.
The former focuses on the evolution of behaviors (or strategies) in a population, where individuals interact with others and update their strategies based on imitation (or social learning).
The latter, meanwhile, is centered on machine learning algorithms and (deep) neural networks.
arXiv Detail & Related papers (2024-03-09T17:36:54Z)
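As a concrete instance of imitation-based strategy updating, the sketch below runs Fermi-rule imitation dynamics on the prisoner's dilemma; the payoff matrix and selection strength are standard textbook choices, not taken from this paper.

```python
# A small illustrative imitation-dynamics simulation (Fermi update rule)
# on the prisoner's dilemma; parameters are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
N, T, beta = 200, 500, 5.0          # population size, steps, selection strength
# Payoff matrix (row strategy vs column strategy): 0 = defect, 1 = cooperate
payoff = np.array([[1.0, 5.0],      # D vs D, D vs C
                   [0.0, 3.0]])     # C vs D, C vs C
strategies = rng.integers(0, 2, size=N)

def mean_payoff(s: int, population: np.ndarray) -> float:
    """Average payoff of strategy s against the current population."""
    return payoff[s, population].mean()

for _ in range(T):
    i, j = rng.choice(N, size=2, replace=False)
    pi_i = mean_payoff(strategies[i], strategies)
    pi_j = mean_payoff(strategies[j], strategies)
    # Fermi rule: i imitates j with probability increasing in the payoff gap
    if rng.random() < 1.0 / (1.0 + np.exp(-beta * (pi_j - pi_i))):
        strategies[i] = strategies[j]

print("final cooperation rate:", strategies.mean())  # defection typically dominates
```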
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Decentralized Adversarial Training over Graphs [55.28669771020857]
The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
This work studies adversarial training over graphs, where individual agents are subjected to adversarial perturbations of varying strength.
arXiv Detail & Related papers (2023-03-23T15:05:16Z)
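A minimal sketch of the varied-budget idea, assuming a simple FGSM inner step: each agent trains its own model under its own perturbation strength. The models, data, and epsilon values are illustrative; the graph-coupled parameter averaging of the paper is only noted in a comment.

```python
# A hedged sketch of adversarial training with per-agent perturbation
# budgets (FGSM-style); models, data, and epsilons are illustrative.
import torch
import torch.nn as nn

def adversarial_step(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                     eps: float, opt: torch.optim.Optimizer) -> None:
    """One training step on FGSM-perturbed inputs within budget eps."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x + eps * grad.sign()).detach()   # worst-case point in the budget
    opt.zero_grad()
    loss_fn(model(x_adv), y).backward()
    opt.step()

# Each agent trains its own copy under a different perturbation strength.
agent_eps = [0.01, 0.05, 0.1]                  # varied budgets, as in the summary
models = [nn.Linear(10, 3) for _ in agent_eps]
opts = [torch.optim.SGD(m.parameters(), lr=0.1) for m in models]
x, y = torch.randn(32, 10), torch.randint(0, 3, (32,))
for m, o, eps in zip(models, opts, agent_eps):
    adversarial_step(m, x, y, eps, o)
# (A graph-based method would additionally average parameters with neighbors.)
```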
- Deep Reinforcement Learning for Multi-Agent Interaction [14.532965827043254]
The Autonomous Agents Research Group develops novel machine learning algorithms for autonomous systems control.
This article provides a broad overview of the ongoing research portfolio of the group and discusses open problems for future directions.
arXiv Detail & Related papers (2022-08-02T21:55:56Z)
- Conditional Imitation Learning for Multi-Agent Games [89.897635970366]
We study the problem of conditional multi-agent imitation learning, where we have access to joint trajectory demonstrations at training time.
We propose a novel approach to address the difficulties of scalability and data scarcity.
Our model learns a low-rank subspace over ego and partner agent strategies, then infers and adapts to a new partner strategy by interpolating in the subspace.
arXiv Detail & Related papers (2022-01-05T04:40:13Z)
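The low-rank interpolation can be illustrated with plain linear algebra: learn a rank-k basis over known partner strategies, then recover a new partner's coordinates by projection onto that basis. The dimensions and data below are made up for the sketch; they are not from the paper.

```python
# An illustrative sketch of the low-rank idea: embed known partner
# strategies in a rank-k subspace, then fit a new partner by
# interpolating in that subspace. Dimensions and data are made up.
import numpy as np

rng = np.random.default_rng(2)
n_partners, obs_dim, k = 20, 50, 3
# Rows: behavior statistics of known partner strategies (e.g. action frequencies).
S = rng.normal(size=(n_partners, k)) @ rng.normal(size=(k, obs_dim))

# Learn the subspace from training partners via SVD.
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
basis = Vt[:k]                                   # (k, obs_dim) strategy subspace

# A new partner observed through a few noisy behavior samples.
true_coords = rng.normal(size=k)
observed = true_coords @ basis + 0.01 * rng.normal(size=obs_dim)

# Infer its coordinates by projecting onto the subspace (least squares).
coords, *_ = np.linalg.lstsq(basis.T, observed, rcond=None)
reconstruction = coords @ basis
print("recovery error:", np.linalg.norm(reconstruction - true_coords @ basis))
```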
- Multiagent Deep Reinforcement Learning: Challenges and Directions Towards Human-Like Approaches [0.0]
We present the most common multiagent problem representations and their main challenges.
We identify five research areas that address one or more of these challenges.
We suggest that, for multiagent reinforcement learning to be successful, future research should address these challenges with an interdisciplinary approach.
arXiv Detail & Related papers (2021-06-29T19:53:15Z)
- Natural Emergence of Heterogeneous Strategies in Artificially Intelligent Competitive Teams [0.0]
We develop a competitive multi-agent environment called FortAttack in which two teams compete against each other.
We observe a natural emergence of heterogeneous behavior amongst homogeneous agents when such behavior can lead to the team's success.
We propose ensemble training, in which we utilize the evolved opponent strategies to train a single policy for friendly agents.
arXiv Detail & Related papers (2020-07-06T22:35:56Z)
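A minimal sketch of ensemble training as summarized above: sample an evolved opponent from a pool each episode so the friendly policy cannot overfit to a single adversary. The environment and policy interfaces here are hypothetical placeholders, not an actual FortAttack API.

```python
# A minimal sketch of ensemble training: one friendly policy trained
# against opponents drawn from a pool of previously evolved strategies.
# The env and policy objects are hypothetical placeholders.
import random

def ensemble_train(friendly_policy, opponent_pool, env, episodes: int = 1000):
    """Each episode, draw a random evolved opponent so the friendly
    policy must be robust to the whole ensemble, not one adversary."""
    for _ in range(episodes):
        opponent = random.choice(opponent_pool)   # evolved opponent strategy
        obs = env.reset(opponent=opponent)
        done = False
        while not done:
            action = friendly_policy.act(obs)
            obs, reward, done = env.step(action)
            friendly_policy.update(obs, action, reward)
    return friendly_policy
```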
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
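To illustrate the incentive mechanism, the toy sketch below gives each agent a small learnable network that pays reward to the others; an agent's effective reward is its environment reward plus incentives received minus incentives paid. The cost term, tensor shapes, and architecture are assumptions for illustration, not the paper's formulation.

```python
# A toy sketch of learned incentives: each agent's effective reward is
# its environment reward plus incentives granted by others, where the
# incentive function is itself learnable. Shapes and the game are made up.
import torch
import torch.nn as nn

n_agents, obs_dim = 3, 8
# incentive_nets[i] maps agent i's observation to rewards it gives the others.
incentive_nets = nn.ModuleList(
    nn.Sequential(nn.Linear(obs_dim, 16), nn.ReLU(), nn.Linear(16, n_agents))
    for _ in range(n_agents)
)

def shaped_rewards(obs: torch.Tensor, env_rewards: torch.Tensor) -> torch.Tensor:
    """obs: (n_agents, obs_dim); env_rewards: (n_agents,)."""
    given = torch.stack([net(obs[i]) for i, net in enumerate(incentive_nets)])
    given = given * (1 - torch.eye(n_agents))   # agents cannot pay themselves
    received = given.sum(dim=0)                 # column i: total paid to agent i
    cost = given.sum(dim=1)                     # row i: total agent i pays out
    return env_rewards + received - cost        # incentives are costly to give

obs = torch.randn(n_agents, obs_dim)
print(shaped_rewards(obs, torch.zeros(n_agents)))
```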
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.