DM$^2$: Distributed Multi-Agent Reinforcement Learning for Distribution
Matching
- URL: http://arxiv.org/abs/2206.00233v1
- Date: Wed, 1 Jun 2022 04:57:50 GMT
- Title: DM$^2$: Distributed Multi-Agent Reinforcement Learning for Distribution
Matching
- Authors: Caroline Wang, Ishan Durugkar, Elad Liebman, Peter Stone
- Abstract summary: This paper studies the problem of distributed multi-agent learning without resorting to explicit coordination schemes.
Each individual agent matches a target distribution of concurrently sampled trajectories from a joint expert policy.
Experimental validation on the StarCraft domain shows that combining the reward for distribution matching with the environment reward allows agents to outperform a fully distributed baseline.
- Score: 43.58408474941208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current approaches to multi-agent cooperation rely heavily on centralized
mechanisms or explicit communication protocols to ensure convergence. This
paper studies the problem of distributed multi-agent learning without resorting
to explicit coordination schemes. The proposed algorithm (DM$^2$) leverages
distribution matching to facilitate independent agents' coordination. Each
individual agent matches a target distribution of concurrently sampled
trajectories from a joint expert policy. The theoretical analysis shows that
under some conditions, if each agent optimizes their individual distribution
matching objective, the agents increase a lower bound on the objective of
matching the joint expert policy, allowing convergence to the joint expert
policy. Further, if the distribution matching objective is aligned with a joint
task, a combination of environment reward and distribution matching reward
leads to the same equilibrium. Experimental validation on the StarCraft domain
shows that combining the reward for distribution matching with the environment
reward allows agents to outperform a fully distributed baseline. Additional
experiments probe the conditions under which expert demonstrations need to be
sampled in order to outperform the fully distributed baseline.
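The two technical claims above lend themselves to brief illustrations. First, for the lower-bound statement: under the strong, purely illustrative assumption that both the learners' and the experts' joint state-action occupancy measures factor into per-agent marginals, the KL divergence decomposes additively,
$$D_{\mathrm{KL}}\Big(\textstyle\prod_i \rho_{\pi_i} \,\Big\|\, \prod_i \rho_{\pi_i^E}\Big) \;=\; \sum_i D_{\mathrm{KL}}\big(\rho_{\pi_i} \,\big\|\, \rho_{\pi_i^E}\big),$$
so reducing each agent's individual divergence also reduces the joint one. The paper's actual statement is a lower bound under the conditions given there, not this exact identity.
Second, the combination of environment reward and distribution-matching reward can be sketched with a GAIL-style per-agent discriminator. The sketch below is not the authors' implementation; the class and parameter names (PerAgentDiscriminator, dm_weight) are hypothetical, and the bonus uses the standard -log(1 - D(s, a)) imitation reward.
```python
# Minimal sketch: per-agent distribution-matching bonus added to the
# environment reward, in the spirit of the DM^2 abstract. Illustrative only.
import torch
import torch.nn as nn


class PerAgentDiscriminator(nn.Module):
    """Scores (observation, action) pairs: expert-like vs. policy-generated."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1))


def discriminator_loss(disc, expert_obs, expert_act, policy_obs, policy_act):
    """Logistic loss: expert pairs labeled 1, the agent's own pairs labeled 0."""
    bce = nn.BCEWithLogitsLoss()
    expert_logits = disc(expert_obs, expert_act)
    policy_logits = disc(policy_obs, policy_act)
    return (bce(expert_logits, torch.ones_like(expert_logits))
            + bce(policy_logits, torch.zeros_like(policy_logits)))


def combined_reward(env_reward, disc, obs, act, dm_weight=0.5):
    """Environment reward plus a distribution-matching bonus.

    The bonus -log(1 - sigmoid(D(s, a))) grows when the discriminator judges
    the agent's (state, action) pair to look like the expert's.
    """
    with torch.no_grad():
        d = torch.sigmoid(disc(obs, act))
        dm_bonus = -torch.log(1.0 - d + 1e-8).squeeze(-1)
    return env_reward + dm_weight * dm_bonus
```
In this reading, each agent trains its own discriminator on its slice of the concurrently sampled expert trajectories and then runs an otherwise unmodified independent RL learner on the combined reward, so no inter-agent communication is introduced.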
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Reaching Consensus in Cooperative Multi-Agent Reinforcement Learning
with Goal Imagination [16.74629849552254]
We propose a model-based consensus mechanism to explicitly coordinate multiple agents.
The proposed Multi-agent Goal Imagination (MAGI) framework guides agents to reach consensus with an imagined common goal.
We show that such an efficient consensus mechanism can guide all agents to cooperatively reach valuable future states.
arXiv Detail & Related papers (2024-03-05T18:07:34Z) - Cooperation Dynamics in Multi-Agent Systems: Exploring Game-Theoretic Scenarios with Mean-Field Equilibria [0.0]
This paper investigates strategies to invoke cooperation in game-theoretic scenarios, namely the Iterated Prisoner's Dilemma.
Existing cooperative strategies are analyzed for their effectiveness in promoting group-oriented behavior in repeated games.
The study extends to scenarios with exponentially growing agent populations.
arXiv Detail & Related papers (2023-09-28T08:57:01Z) - Toward Risk-based Optimistic Exploration for Cooperative Multi-Agent
Reinforcement Learning [9.290757451344673]
We present a risk-based exploration that leads to collaboratively optimistic behavior by shifting the sampling region of the estimated return distribution.
Our method, based on quantile regression, shows remarkable performance in multi-agent settings that require cooperative exploration.
arXiv Detail & Related papers (2023-03-03T08:17:57Z) - DQMIX: A Distributional Perspective on Multi-Agent Reinforcement
Learning [122.47938710284784]
In cooperative multi-agent tasks, a team of agents jointly interact with an environment by taking actions, receiving a reward and observing the next state.
Most existing value-based multi-agent reinforcement learning methods only model the expectations of individual Q-values and the global Q-value.
arXiv Detail & Related papers (2022-02-21T11:28:00Z) - Distributional Reinforcement Learning for Multi-Dimensional Reward
Functions [91.88969237680669]
We introduce Multi-Dimensional Distributional DQN (MD3QN) to model the joint return distribution from multiple reward sources.
As a by-product of joint distribution modeling, MD3QN can capture the randomness in returns for each source of reward.
In experiments, our method accurately models the joint return distribution in environments with richly correlated reward functions.
arXiv Detail & Related papers (2021-10-26T11:24:23Z) - Convergence Rates of Average-Reward Multi-agent Reinforcement Learning
via Randomized Linear Programming [41.30044824711509]
We focus on the case where the global reward is a sum of local rewards, the joint policy factorizes into the agents' marginals, and the state is fully observable.
We develop multi-agent extensions, whereby agents solve their local saddle point problems and then perform local weighted averaging.
We establish that the sample complexity to obtain near-globally optimal solutions matches tight dependencies on the cardinality of the state and action spaces.
arXiv Detail & Related papers (2021-10-22T03:48:41Z) - Robust Learning of Optimal Auctions [84.13356290199603]
We study the problem of learning revenue-optimal multi-bidder auctions from samples when the samples of bidders' valuations can be adversarially corrupted or drawn from distributions that are adversarially perturbed.
We propose new algorithms that can learn a mechanism whose revenue is nearly optimal simultaneously for all "true distributions" that are $\alpha$-close to the original distribution in Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2021-07-13T17:37:21Z) - Global Distance-distributions Separation for Unsupervised Person
Re-identification [93.39253443415392]
Existing unsupervised ReID approaches often fail to correctly identify positive and negative samples through distance-based matching/ranking.
We introduce a global distance-distributions separation constraint over the two distributions to encourage the clear separation of positive and negative samples from a global view.
We show that our method leads to significant improvement over the baselines and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-06-01T07:05:39Z)