Efficient Cooperation Strategy Generation in Multi-Agent Video Games via
Hypergraph Neural Network
- URL: http://arxiv.org/abs/2203.03265v1
- Date: Mon, 7 Mar 2022 10:34:40 GMT
- Title: Efficient Cooperation Strategy Generation in Multi-Agent Video Games via
Hypergraph Neural Network
- Authors: Bin Zhang, Yunpeng Bai, Zhiwei Xu, Dapeng Li, Guoliang Fan
- Abstract summary: The performance of deep reinforcement learning in single-agent video games is astounding.
However, researchers face additional difficulties when working with video games in multi-agent environments.
We propose a novel algorithm based on the actor-critic method, which builds a hypergraph structure over the agents and employs hypergraph convolution to perform information feature extraction and representation between agents.
- Score: 16.226702761758595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of deep reinforcement learning (DRL) in single-agent video
games is astounding due to its benefits in dealing with sequential
decision-making challenges. However, researchers face additional difficulties
when working with video games in multi-agent environments. One of the most
pressing open issues is how to induce sufficient collaboration among the agents
in scenarios with many agents. To address this issue, we propose a novel
algorithm based on the actor-critic method, which builds a hypergraph structure
over the agents and employs hypergraph convolution to perform information
feature extraction and representation between agents, resulting in efficient
collaboration. Based on distinct methods of generating the hypergraph
structure, we derive the HGAC and ATT-HGAC algorithms. We demonstrate
the advantages of our approach over other existing methods. Ablation and
visualization studies also confirm the relevance of each component of the
algorithm.
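The core operation the abstract describes, hypergraph convolution over agent features, can be sketched as follows. This is a minimal plain-Python illustration of the standard HGNN layer (Feng et al., 2019) with unit hyperedge weights, not the paper's exact HGAC/ATT-HGAC formulation; the incidence matrix, feature sizes, and identity projection below are illustrative assumptions.

```python
import math

def matmul(A, B):
    """Plain-Python matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution layer:
    X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta, with unit edge weights W = I.
    H is the agents-by-hyperedges incidence matrix; X holds per-agent features.
    """
    n, m = len(H), len(H[0])
    dv = [sum(H[i]) for i in range(n)]                       # node degrees
    de = [sum(H[i][e] for i in range(n)) for e in range(m)]  # hyperedge degrees
    # Normalized smoothing operator S = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2}
    S = [[sum(H[i][e] * H[j][e] / de[e] for e in range(m))
          / math.sqrt(dv[i] * dv[j])
          for j in range(n)] for i in range(n)]
    return matmul(matmul(S, X), Theta)

# Example: 4 agents, 2 hyperedges (agents {0,1,2} cooperate; agents {2,3} cooperate).
H = [[1, 0], [1, 0], [1, 1], [0, 1]]                  # incidence matrix
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]  # per-agent features
Theta = [[1.0, 0.0], [0.0, 1.0]]                      # learnable projection (identity here)
out = hypergraph_conv(X, H, Theta)
print(len(out), len(out[0]))  # 4 2
```

Each agent's output feature thus aggregates the features of every agent sharing a hyperedge with it, which is how grouped cooperation information propagates.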
Related papers
- Textualized Agent-Style Reasoning for Complex Tasks by Multiple Round LLM Generation [49.27250832754313]
We present AgentCOT, an LLM-based autonomous agent framework.
At each step, AgentCOT selects an action and executes it to yield an intermediate result with supporting evidence.
We introduce two new strategies to enhance the performance of AgentCOT.
arXiv Detail & Related papers (2024-09-19T02:20:06Z) - Scaling Large-Language-Model-based Multi-Agent Collaboration [75.5241464256688]
Pioneering advancements in large language model-powered agents have underscored the design pattern of multi-agent collaboration.
Inspired by the neural scaling law, this study investigates whether a similar principle applies to increasing agents in multi-agent collaboration.
arXiv Detail & Related papers (2024-06-11T11:02:04Z) - CCA: Collaborative Competitive Agents for Image Editing [59.54347952062684]
This paper presents a novel generative model, Collaborative Competitive Agents (CCA).
It leverages the capabilities of multiple Large Language Models (LLMs) based agents to execute complex tasks.
The paper's main contributions include the introduction of a multi-agent-based generative model with controllable intermediate steps and iterative optimization.
arXiv Detail & Related papers (2024-01-23T11:46:28Z) - Recursive Reasoning Graph for Multi-Agent Reinforcement Learning [44.890087638530524]
Multi-agent reinforcement learning (MARL) provides an efficient way for simultaneously learning policies for multiple agents interacting with each other.
Existing algorithms can suffer from an inability to accurately anticipate the influence of self-actions on other agents.
The proposed algorithm, referred to as the Recursive Reasoning Graph (R2G), shows state-of-the-art performance on multiple multi-agent particle and robotics games.
arXiv Detail & Related papers (2022-03-06T00:57:50Z) - Value Function Factorisation with Hypergraph Convolution for Cooperative Multi-agent Reinforcement Learning [32.768661516953344]
We propose a method that combines hypergraph convolution with value decomposition.
By treating action values as signals, HGCN-Mix aims to explore the relationship between these signals via a self-learning hypergraph.
Experimental results present that HGCN-Mix matches or surpasses state-of-the-art techniques in the StarCraft II multi-agent challenge (SMAC) benchmark.
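The value-decomposition side of this entry can be illustrated with the simplest factorisation: additive (VDN-style) mixing of per-agent action values into a joint value. HGCN-Mix replaces such fixed mixing with smoothing learned via a self-learning hypergraph; the optional weights below are only an illustrative stand-in for that, not the paper's mixing network.

```python
# Hedged sketch of value decomposition: per-agent action values are treated
# as signals and combined into a joint value Q_tot.
def mix_values(per_agent_q, weights=None):
    """VDN-style mixing: Q_tot = sum_i w_i * Q_i (unit weights by default)."""
    if weights is None:
        weights = [1.0] * len(per_agent_q)  # plain VDN: Q_tot = sum_i Q_i
    return sum(w * q for w, q in zip(weights, per_agent_q))

q_tot = mix_values([1.2, -0.3, 0.8])
print(round(q_tot, 6))  # 1.7
```

Because the joint value is monotone in each per-agent value, each agent can greedily maximise its own Q-value at execution time while training remains centralised.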
arXiv Detail & Related papers (2021-12-09T08:40:38Z) - MACRPO: Multi-Agent Cooperative Recurrent Policy Optimization [17.825845543579195]
We propose a new multi-agent actor-critic method called Multi-Agent Cooperative Recurrent Proximal Policy Optimization (MACRPO).
We use a recurrent layer in the critic's network architecture and propose a new framework that uses a meta-trajectory to train this recurrent layer.
We evaluate our algorithm on three challenging multi-agent environments with continuous and discrete action spaces.
arXiv Detail & Related papers (2021-09-02T12:43:35Z) - Cooperative Exploration for Multi-Agent Deep Reinforcement Learning [127.4746863307944]
We propose cooperative multi-agent exploration (CMAE) for deep reinforcement learning.
The goal is selected from multiple projected state spaces via a normalized entropy-based technique.
We demonstrate that CMAE consistently outperforms baselines on various tasks.
arXiv Detail & Related papers (2021-07-23T20:06:32Z) - Learning Multi-Granular Hypergraphs for Video-Based Person Re-Identification [110.52328716130022]
Video-based person re-identification (re-ID) is an important research topic in computer vision.
We propose a novel graph-based framework, namely Multi-Granular Hypergraph (MGH), to achieve better representational capabilities.
A 90.0% top-1 accuracy on MARS is achieved using MGH, outperforming state-of-the-art schemes.
arXiv Detail & Related papers (2021-04-30T11:20:02Z) - Portfolio Search and Optimization for General Strategy Game-Playing [58.896302717975445]
We propose a new algorithm for optimization and action-selection based on the Rolling Horizon Evolutionary Algorithm.
For the optimization of the agents' parameters and portfolio sets we study the use of the N-tuple Bandit Evolutionary Algorithm.
An analysis of the agents' performance shows that the proposed algorithm generalizes well to all game-modes and is able to outperform other portfolio methods.
arXiv Detail & Related papers (2021-04-21T09:28:28Z) - Learning to Coordinate via Multiple Graph Neural Networks [16.226702761758595]
MGAN is a new algorithm that combines graph convolutional networks and value-decomposition methods.
We demonstrate the graph network's strong representation-learning ability by visualizing its output.
arXiv Detail & Related papers (2021-04-08T04:33:00Z) - A Visual Communication Map for Multi-Agent Deep Reinforcement Learning [7.003240657279981]
Multi-agent learning poses significant challenges in the effort to allocate a concealed communication medium.
Recent studies typically combine a specialized neural network with reinforcement learning to enable communication between agents.
This paper proposes a more scalable approach that not only deals with a great number of agents but also enables collaboration between dissimilar functional agents.
arXiv Detail & Related papers (2020-02-27T02:38:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.