AgentMixer: Multi-Agent Correlated Policy Factorization
- URL: http://arxiv.org/abs/2401.08728v1
- Date: Tue, 16 Jan 2024 15:32:41 GMT
- Title: AgentMixer: Multi-Agent Correlated Policy Factorization
- Authors: Zhiyuan Li, Wenshuai Zhao, Lijun Wu, Joni Pajarinen
- Abstract summary: We introduce a \textit{strategy modification} to provide a mechanism for agents to correlate their policies.
We present a novel framework, AgentMixer, which constructs the joint fully observable policy as a non-linear combination of individual partially observable policies.
We show that AgentMixer converges to an $\epsilon$-approximate Correlated Equilibrium.
- Score: 39.041191852287525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Centralized training with decentralized execution (CTDE) is widely employed
to stabilize partially observable multi-agent reinforcement learning (MARL) by
utilizing a centralized value function during training. However, existing
methods typically assume that agents make decisions based on their local
observations independently, which may not lead to a correlated joint policy
with sufficient coordination. Inspired by the concept of correlated
equilibrium, we propose to introduce a \textit{strategy modification} to
provide a mechanism for agents to correlate their policies. Specifically, we
present a novel framework, AgentMixer, which constructs the joint fully
observable policy as a non-linear combination of individual partially
observable policies. To enable decentralized execution, one can derive
individual policies by imitating the joint policy. Unfortunately, such
imitation learning can lead to \textit{asymmetric learning failure} caused by
the mismatch between joint policy and individual policy information. To
mitigate this issue, we jointly train the joint policy and individual policies
and introduce \textit{Individual-Global-Consistency} to guarantee mode
consistency between the centralized and decentralized policies. We then
theoretically prove that AgentMixer converges to an $\epsilon$-approximate
Correlated Equilibrium. The strong experimental performance on three MARL
benchmarks demonstrates the effectiveness of our method.
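The mixing construction is concrete enough to sketch. The PyTorch snippet below is a minimal illustration, not the authors' implementation: a state-conditioned non-linear mixer composes per-agent partially observable policy logits into joint fully observable policy logits, and a toy Individual-Global-Consistency check compares the action modes of the joint and individual policies. All names and dimensions (PolicyMixer, igc_mode_gap, hidden sizes) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PolicyMixer(nn.Module):
    """Hypothetical sketch of AgentMixer-style correlated policy factorization:
    a non-linear, state-conditioned mixer turns individual (partially observable)
    policy logits into joint (fully observable) policy logits."""

    def __init__(self, n_agents: int, n_actions: int, state_dim: int, hidden: int = 64):
        super().__init__()
        self.n_agents, self.n_actions = n_agents, n_actions
        # Non-linear combination conditioned on the global state and all agents' logits.
        self.mixer = nn.Sequential(
            nn.Linear(state_dim + n_agents * n_actions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_agents * n_actions),
        )

    def forward(self, state: torch.Tensor, individual_logits: torch.Tensor) -> torch.Tensor:
        # individual_logits: (batch, n_agents, n_actions) from the decentralized policies.
        flat = individual_logits.flatten(start_dim=1)
        joint = self.mixer(torch.cat([state, flat], dim=-1))
        return joint.view(-1, self.n_agents, self.n_actions)

def igc_mode_gap(joint_logits: torch.Tensor, individual_logits: torch.Tensor) -> torch.Tensor:
    """Toy reading of Individual-Global-Consistency: the per-agent mode (argmax)
    of the joint policy should match the mode of each decentralized policy.
    A real training loss would need a differentiable surrogate for argmax."""
    return (joint_logits.argmax(-1) != individual_logits.argmax(-1)).float().mean()
```

Under this reading, the decentralized policies remain executable from local observations alone, while the mixer is used only during centralized training to shape a correlated joint policy.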
Related papers
- CoMIX: A Multi-agent Reinforcement Learning Training Architecture for Efficient Decentralized Coordination and Independent Decision-Making [2.4555276449137042]
Robust coordination skills enable agents to operate cohesively in shared environments, working together towards a common goal and, ideally, without hindering each other's progress.
This paper presents Coordinated QMIX (CoMIX), a novel training framework for decentralized agents that enables emergent coordination through flexible policies while preserving independent decision-making at the individual level.
arXiv Detail & Related papers (2023-08-21T13:45:44Z)
- Inducing Stackelberg Equilibrium through Spatio-Temporal Sequential Decision-Making in Multi-Agent Reinforcement Learning [17.101534531286298]
We construct a Nash-level policy model based on a conditional hypernetwork shared by all agents.
This approach allows for asymmetric training with symmetric execution, with each agent responding optimally conditioned on the decisions made by superior agents.
Experiments demonstrate that our method effectively converges to the SE policies in repeated matrix game scenarios.
arXiv Detail & Related papers (2023-04-20T14:47:54Z)
- More Centralized Training, Still Decentralized Execution: Multi-Agent Conditional Policy Factorization [21.10461189367695]
In cooperative multi-agent reinforcement learning (MARL), combining value decomposition with actor-critic enables agents to learn policies.
Agents are commonly assumed to be independent of each other, even during centralized training.
We propose multi-agent conditional policy factorization (MACPF), which adopts more centralized training while still enabling decentralized execution; a hedged sketch of the conditioning idea follows this entry.
arXiv Detail & Related papers (2022-09-26T13:29:22Z)
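As a rough reading of the conditional-factorization idea above (an assumption-laden sketch, not the MACPF code), each agent could keep an independent, observation-only policy for execution plus a correction term that conditions on preceding agents' actions during centralized training. All names (ConditionalAgentPolicy, n_prev_agents) are hypothetical.

```python
import torch
import torch.nn as nn

class ConditionalAgentPolicy(nn.Module):
    """Hypothetical sketch of conditional policy factorization: in centralized
    training the policy may condition on actions already chosen by preceding
    agents; at execution the observation-only head is used alone."""

    def __init__(self, obs_dim: int, n_actions: int, n_prev_agents: int, hidden: int = 64):
        super().__init__()
        self.independent = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )
        # Correction term driven by preceding agents' one-hot actions.
        self.dependent = nn.Sequential(
            nn.Linear(obs_dim + n_prev_agents * n_actions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor, prev_actions: torch.Tensor | None = None) -> torch.Tensor:
        logits = self.independent(obs)
        if prev_actions is not None:  # centralized training: add the dependency term
            logits = logits + self.dependent(torch.cat([obs, prev_actions], dim=-1))
        return logits  # decentralized execution passes prev_actions=None
```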
- Multi-Agent MDP Homomorphic Networks [100.74260120972863]
In cooperative multi-agent systems, complex symmetries arise between different configurations of the agents and their local observations.
Existing work on symmetries in single agent reinforcement learning can only be generalized to the fully centralized setting.
This paper introduces Multi-Agent MDP Homomorphic Networks, a class of networks that allows distributed execution using only local information.
arXiv Detail & Related papers (2021-10-09T07:46:25Z)
- Dealing with Non-Stationarity in Multi-Agent Reinforcement Learning via Trust Region Decomposition [52.06086375833474]
Non-stationarity is a thorny issue in multi-agent reinforcement learning.
We introduce a $\delta$-stationarity measurement to explicitly model the stationarity of a policy sequence.
We propose a trust region decomposition network based on message passing to estimate the joint policy divergence.
arXiv Detail & Related papers (2021-02-21T14:46:50Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale, general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
- Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning [55.20040781688844]
QMIX is a novel value-based method that can train decentralised policies in a centralised, end-to-end fashion; a minimal sketch of its monotonic mixing network follows this entry.
We propose the StarCraft Multi-Agent Challenge (SMAC) as a new benchmark for deep multi-agent reinforcement learning.
arXiv Detail & Related papers (2020-03-19T16:51:51Z)
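QMIX's monotonic factorization is well known enough to sketch, though the snippet below simplifies the paper's hypernetwork structure (for instance, the real output-bias hypernetwork has a hidden layer): state-conditioned hypernetworks emit mixing weights whose absolute value keeps Q_tot monotonically increasing in every per-agent Q-value, so the decentralized per-agent argmax agrees with the centralized one. Layer sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMixer(nn.Module):
    """Minimal QMIX-style mixing network: hypernetworks conditioned on the
    global state produce the mixing weights; taking their absolute value
    enforces monotonicity of Q_tot in each per-agent Q-value."""

    def __init__(self, n_agents: int, state_dim: int, embed: int = 32):
        super().__init__()
        self.n_agents, self.embed = n_agents, embed
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed)
        self.hyper_b1 = nn.Linear(state_dim, embed)
        self.hyper_w2 = nn.Linear(state_dim, embed)
        self.hyper_b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents); state: (batch, state_dim).
        w1 = self.hyper_w1(state).abs().view(-1, self.n_agents, self.embed)
        b1 = self.hyper_b1(state).view(-1, 1, self.embed)
        hidden = F.elu(agent_qs.unsqueeze(1) @ w1 + b1)   # (batch, 1, embed)
        w2 = self.hyper_w2(state).abs().view(-1, self.embed, 1)
        b2 = self.hyper_b2(state).view(-1, 1, 1)
        return (hidden @ w2 + b2).view(-1)                # Q_tot per batch element
```

The absolute value on the hypernetwork outputs is exactly what enforces the constraint $\partial Q_{tot} / \partial Q_i \geq 0$, keeping a decentralized argmax consistent with the centralized one.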
- Multi-Agent Interactions Modeling with Correlated Policies [53.38338964628494]
In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework.
We develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL).
Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators.
arXiv Detail & Related papers (2020-01-04T17:31:53Z)