Settling Decentralized Multi-Agent Coordinated Exploration by Novelty Sharing
- URL: http://arxiv.org/abs/2402.02097v2
- Date: Sat, 10 Aug 2024 06:45:19 GMT
- Title: Settling Decentralized Multi-Agent Coordinated Exploration by Novelty Sharing
- Authors: Haobin Jiang, Ziluo Ding, Zongqing Lu
- Abstract summary: We propose MACE, a simple yet effective multi-agent coordinated exploration method.
By communicating only local novelty, agents can take into account other agents' local novelty to approximate the global novelty.
We show that MACE achieves superior performance in three multi-agent environments with sparse rewards.
- Score: 34.299478481229265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exploration in decentralized cooperative multi-agent reinforcement learning faces two challenges. One is that the novelty of global states is unavailable, while the novelty of local observations is biased. The other is how agents can explore in a coordinated way. To address these challenges, we propose MACE, a simple yet effective multi-agent coordinated exploration method. By communicating only local novelty, agents can take into account other agents' local novelty to approximate the global novelty. Further, we introduce weighted mutual information to measure the influence of one agent's action on other agents' accumulated novelty, and convert it into an intrinsic reward in hindsight to encourage agents to exert more influence on other agents' exploration and thus boost coordinated exploration. Empirically, we show that MACE achieves superior performance in three multi-agent environments with sparse rewards.
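The abstract describes the novelty-sharing mechanism only at a high level. As a rough illustration, the sketch below uses a count-based local novelty and a plain sum over communicated novelties as the global-novelty approximation; both choices, and all names, are assumptions for exposition rather than the paper's actual formulation (the weighted-mutual-information reward is not shown).

```python
import numpy as np
from collections import defaultdict

class NoveltySharingAgent:
    """Hypothetical decentralized agent that broadcasts only a scalar local novelty."""

    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.visit_counts = defaultdict(int)  # counts of local observations

    def local_novelty(self, obs):
        # Count-based novelty of the local observation (an assumption;
        # the paper may use a different novelty estimator).
        key = tuple(np.asarray(obs, dtype=float).ravel())
        self.visit_counts[key] += 1
        return 1.0 / np.sqrt(self.visit_counts[key])

    @staticmethod
    def approx_global_novelty(own_novelty, received_novelties):
        # Combine the agent's own novelty with the novelties received from
        # the other agents to approximate the novelty of the global state.
        # A plain sum is used purely for illustration.
        return own_novelty + sum(received_novelties)


# Usage: two agents exchange scalars; the aggregate serves as an intrinsic reward.
a0, a1 = NoveltySharingAgent(0), NoveltySharingAgent(1)
n0 = a0.local_novelty([0, 1])
n1 = a1.local_novelty([2, 3])
intrinsic_r0 = NoveltySharingAgent.approx_global_novelty(n0, [n1])
```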
Related papers
- Self-Motivated Multi-Agent Exploration [38.55811936029999]
In cooperative multi-agent reinforcement learning (CMARL), it is critical for agents to achieve a balance between self-exploration and team collaboration.
Recent works mainly concentrate on agents' coordinated exploration, which leads to exponential growth of the state space that must be explored.
We propose Self-Motivated Multi-Agent Exploration (SMMAE), which aims to achieve success in team tasks by adaptively finding a trade-off between self-exploration and team cooperation.
arXiv Detail & Related papers (2023-01-05T14:42:39Z)
- Curiosity-Driven Multi-Agent Exploration with Mixed Objectives [7.247148291603988]
Intrinsic rewards have been increasingly used to mitigate the sparse reward problem in single-agent reinforcement learning.
Curiosity-driven exploration is a simple yet efficient approach that quantifies state novelty as the prediction error of the agent's curiosity module.
We show here, however, that naively using this curiosity-driven approach to guide exploration in sparse reward cooperative multi-agent environments does not consistently lead to improved results.
arXiv Detail & Related papers (2022-10-29T02:45:38Z)
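For reference, "prediction error of the curiosity module" is usually realized as a learned forward model whose error on the next observation serves as the intrinsic reward; the toy PyTorch sketch below illustrates that general idea only and makes no claim about the cited paper's architecture.

```python
import torch
import torch.nn as nn

class ForwardCuriosity(nn.Module):
    """Toy forward model: the intrinsic reward is the next-observation prediction error."""

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def intrinsic_reward(self, obs, action, next_obs):
        pred = self.net(torch.cat([obs, action], dim=-1))
        # Mean squared prediction error per transition.
        return ((pred - next_obs) ** 2).mean(dim=-1)


# Usage with dummy tensors (a batch of 4 transitions).
curiosity = ForwardCuriosity(obs_dim=8, act_dim=2)
r_int = curiosity.intrinsic_reward(torch.randn(4, 8), torch.randn(4, 2), torch.randn(4, 8))
```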
- Multi-agent Deep Covering Skill Discovery [50.812414209206054]
We propose Multi-agent Deep Covering Option Discovery, which constructs multi-agent options by minimizing the expected cover time of the agents' joint state space.
Also, we propose a novel framework to adopt the multi-agent options in the MARL process.
We show that the proposed algorithm can effectively capture the agent interactions with the attention mechanism, successfully identify multi-agent options, and significantly outperforms prior works using single-agent options or no options.
arXiv Detail & Related papers (2022-10-07T00:40:59Z)
- MUI-TARE: Multi-Agent Cooperative Exploration with Unknown Initial Position [12.921108151387696]
We develop a new approach for lidar-based multi-agent exploration based on the quality indicator of the sub-map merging process.
Our approach is up to 50% more efficient than the baselines on average while merging sub-maps robustly.
arXiv Detail & Related papers (2022-09-22T04:33:02Z)
- Cooperative Exploration for Multi-Agent Deep Reinforcement Learning [127.4746863307944]
We propose cooperative multi-agent exploration (CMAE) for deep reinforcement learning.
Agents share a common goal while exploring; the goal is selected from multiple projected state spaces via a normalized entropy-based technique.
We demonstrate that CMAE consistently outperforms baselines on various tasks.
arXiv Detail & Related papers (2021-07-23T20:06:32Z)
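The goal-selection step is only named in the summary above. As a loose sketch, one can rank projected (restricted) state spaces by the normalized entropy of their visitation counts and pick a goal from the least uniformly explored one; the rule and the count tables below are illustrative assumptions, not CMAE's exact procedure.

```python
import numpy as np

def normalized_entropy(counts):
    """Entropy of a visitation-count histogram, normalized to [0, 1]."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(counts)))

def select_goal_space(count_tables):
    """Return the index of the projected space with the lowest normalized
    entropy, i.e. the one whose visitation looks least uniform (illustrative rule)."""
    return int(np.argmin([normalized_entropy(c) for c in count_tables]))

# Usage: three projected spaces, each with a histogram of visit counts.
tables = [
    np.array([9.0, 1.0, 0.0, 0.0]),  # heavily skewed -> under-explored regions
    np.array([3.0, 3.0, 2.0, 2.0]),  # close to uniform
    np.array([5.0, 4.0, 1.0, 0.0]),
]
print(select_goal_space(tables))  # -> 0
```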
- Cooperative Heterogeneous Deep Reinforcement Learning [47.97582814287474]
We present a Cooperative Heterogeneous Deep Reinforcement Learning framework that can learn a policy by integrating the advantages of heterogeneous agents.
Global agents are off-policy agents that can utilize experiences from the other agents.
Local agents are either on-policy agents or population-based evolutionary algorithm (EA) agents that can explore the local area effectively.
arXiv Detail & Related papers (2020-11-02T07:39:09Z)
- UneVEn: Universal Value Exploration for Multi-Agent Reinforcement Learning [53.73686229912562]
We propose a novel MARL approach called Universal Value Exploration (UneVEn).
UneVEn learns a set of related tasks simultaneously with a linear decomposition of universal successor features.
Empirical results on a set of exploration games, challenging cooperative predator-prey tasks requiring significant coordination among agents, and StarCraft II micromanagement benchmarks show that UneVEn can solve tasks where other state-of-the-art MARL methods fail.
arXiv Detail & Related papers (2020-10-06T19:08:47Z)
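As a rough illustration of "a linear decomposition of universal successor features", the sketch below computes task-conditioned action values as the dot product of a learned feature vector psi(s, a) with a task weight vector w, so one network can be evaluated under many related tasks; the shapes and the network are illustrative, not UneVEn's architecture.

```python
import torch
import torch.nn as nn

class SuccessorFeatureQ(nn.Module):
    """Toy universal successor features: Q(s, a; w) = psi(s, a) . w."""

    def __init__(self, obs_dim, n_actions, feat_dim=16, hidden=64):
        super().__init__()
        self.n_actions, self.feat_dim = n_actions, feat_dim
        self.psi = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions * feat_dim),
        )

    def q_values(self, obs, w):
        # psi(s, .) yields one feature vector per action; Q is linear in the task weights w.
        feats = self.psi(obs).view(-1, self.n_actions, self.feat_dim)
        return feats @ w  # shape: (batch, n_actions)


# Usage: the same features evaluated under two related task weight vectors.
sf = SuccessorFeatureQ(obs_dim=10, n_actions=4)
obs = torch.randn(2, 10)
q_task_a = sf.q_values(obs, torch.randn(16))
q_task_b = sf.q_values(obs, torch.randn(16))
```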
- Learning to Incentivize Other Learning Agents [73.03133692589532]
We show how to equip RL agents with the ability to give rewards directly to other agents, using a learned incentive function.
Such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games.
Our work points toward more opportunities and challenges along the path to ensure the common good in a multi-agent future.
arXiv Detail & Related papers (2020-06-10T20:12:38Z)
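The incentive-giving mechanism can be pictured as each agent owning a small network that maps its own observation and a peer's behaviour to a non-negative reward handed to that peer, which is then added to the peer's environment reward. The sketch below is a bare-bones, untrained illustration of that interface; names, shapes, and the Softplus output are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class IncentiveFunction(nn.Module):
    """Maps the giver's observation and a peer's action to a non-negative reward
    for that peer (learned end-to-end in the original work; untrained here)."""

    def __init__(self, obs_dim, act_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),
        )

    def forward(self, own_obs, peer_action):
        return self.net(torch.cat([own_obs, peer_action], dim=-1)).squeeze(-1)


# Usage: agent 0 grants an incentive to agent 1, added to agent 1's environment reward.
give_01 = IncentiveFunction(obs_dim=6, act_dim=3)
incentive = give_01(torch.randn(6), torch.randn(3))
r1_effective = torch.tensor(0.5) + incentive
```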