MOMAland: A Set of Benchmarks for Multi-Objective Multi-Agent Reinforcement Learning
- URL: http://arxiv.org/abs/2407.16312v2
- Date: Sun, 27 Oct 2024 17:55:41 GMT
- Title: MOMAland: A Set of Benchmarks for Multi-Objective Multi-Agent Reinforcement Learning
- Authors: Florian Felten, Umut Ucak, Hicham Azmani, Gao Peng, Willem Röpke, Hendrik Baier, Patrick Mannion, Diederik M. Roijers, Jordan K. Terry, El-Ghazali Talbi, Grégoire Danoy, Ann Nowé, Roxana Rădulescu
- Abstract summary: Multi-objective multi-agent reinforcement learning (MOMARL) addresses problems with multiple agents each needing to consider multiple objectives in their learning process.
MOMAland is the first collection of standardised environments for multi-objective multi-agent reinforcement learning.
- Score: 7.822825134714791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many challenging tasks such as managing traffic systems, electricity grids, or supply chains involve complex decision-making processes that must balance multiple conflicting objectives and coordinate the actions of various independent decision-makers (DMs). One perspective for formalising and addressing such tasks is multi-objective multi-agent reinforcement learning (MOMARL). MOMARL broadens reinforcement learning (RL) to problems with multiple agents each needing to consider multiple objectives in their learning process. In reinforcement learning research, benchmarks are crucial in facilitating progress, evaluation, and reproducibility. The significance of benchmarks is underscored by the existence of numerous benchmark frameworks developed for various RL paradigms, including single-agent RL (e.g., Gymnasium), multi-agent RL (e.g., PettingZoo), and single-agent multi-objective RL (e.g., MO-Gymnasium). To support the advancement of the MOMARL field, we introduce MOMAland, the first collection of standardised environments for multi-objective multi-agent reinforcement learning. MOMAland addresses the need for comprehensive benchmarking in this emerging field, offering over 10 diverse environments that vary in the number of agents, state representations, reward structures, and utility considerations. To provide strong baselines for future research, MOMAland also includes algorithms capable of learning policies in such settings.
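Since MOMAland builds on the API conventions of Gymnasium and PettingZoo, an interaction loop can be sketched as follows. This is a minimal sketch, assuming MOMAland mirrors PettingZoo's parallel API with one reward component per objective; the import path and environment name are illustrative and should be checked against the released package.

```python
# A minimal interaction sketch, assuming MOMAland mirrors PettingZoo's
# parallel API with vector-valued rewards (one entry per objective).
# The module path and environment name below are illustrative, not
# verified against the released package.
import numpy as np
from momaland.envs.beach import mobeach_v0  # hypothetical import path

env = mobeach_v0.parallel_env()
observations, infos = env.reset(seed=42)

while env.agents:
    # Sample a random action for every live agent.
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
    # Assumed: each agent's reward is a vector with one entry per objective.
    for agent, reward in rewards.items():
        assert isinstance(reward, np.ndarray)

env.close()
```

Algorithms then consume these vector rewards either by scalarising them with a utility function or by learning sets of Pareto-optimal policies.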
Related papers
- Enhancing Heterogeneous Multi-Agent Cooperation in Decentralized MARL via GNN-driven Intrinsic Rewards [1.179778723980276]
Multi-agent Reinforcement Learning (MARL) is emerging as a key framework for sequential decision-making and control tasks.
The deployment of these systems in real-world scenarios often requires decentralized training, a diverse set of agents, and learning from infrequent environmental reward signals.
We propose the CoHet algorithm, which utilizes a novel Graph Neural Network (GNN)-based intrinsic motivation to facilitate the learning of heterogeneous agent policies.
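As a rough illustration of the idea (not CoHet's actual architecture), an intrinsic reward can be derived from the prediction error of a simple message-passing model over the agent graph; all shapes and weights below are stand-ins:

```python
# Illustrative sketch (not CoHet's actual model): an intrinsic reward
# from the prediction error of a one-round message-passing predictor
# that estimates each agent's next observation from its neighbours.
import numpy as np

rng = np.random.default_rng(0)
n_agents, obs_dim, hid = 4, 8, 16

# Random weights stand in for a trained GNN predictor.
W_msg = rng.normal(scale=0.1, size=(obs_dim, hid))
W_out = rng.normal(scale=0.1, size=(obs_dim + hid, obs_dim))

def intrinsic_rewards(obs, next_obs, adjacency):
    """Prediction-error intrinsic reward for each agent.

    obs, next_obs: (n_agents, obs_dim); adjacency: (n_agents, n_agents)."""
    messages = np.tanh(obs @ W_msg)                  # per-agent messages
    # Mean-aggregate neighbour messages (one message-passing round).
    deg = np.maximum(adjacency.sum(axis=1, keepdims=True), 1)
    aggregated = (adjacency @ messages) / deg
    pred_next = np.concatenate([obs, aggregated], axis=1) @ W_out
    # Intrinsic reward: per-agent squared prediction error.
    return ((pred_next - next_obs) ** 2).mean(axis=1)

adj = np.ones((n_agents, n_agents)) - np.eye(n_agents)
obs = rng.normal(size=(n_agents, obs_dim))
next_obs = rng.normal(size=(n_agents, obs_dim))
print(intrinsic_rewards(obs, next_obs, adj))
```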
arXiv Detail & Related papers (2024-08-12T21:38:40Z)
- Needle In A Multimodal Haystack [79.81804334634408]
We present the first benchmark specifically designed to evaluate the capability of existing multimodal large language models (MLLMs) to comprehend long multimodal documents.
Our benchmark includes three types of evaluation tasks: multimodal retrieval, counting, and reasoning.
We observe that existing models still have significant room for improvement on these tasks, especially on vision-centric evaluation.
arXiv Detail & Related papers (2024-06-11T13:09:16Z)
- Large Multimodal Agents: A Survey [78.81459893884737]
Large language models (LLMs) have achieved superior performance in powering text-based AI agents.
There is an emerging research trend focused on extending these LLM-powered AI agents into the multimodal domain.
This review aims to provide valuable insights and guidelines for future research in this rapidly evolving field.
arXiv Detail & Related papers (2024-02-23T06:04:23Z)
- Federated Multi-Objective Learning [22.875284692358683]
We propose a new federated multi-objective learning (FMOL) framework with multiple clients.
Our FMOL framework allows a different set of objective functions across different clients to support a wide range of applications.
For this FMOL framework, we propose two new federated multi-objective optimization (FMOO) algorithms: federated multi-gradient descent averaging (FMGDA) and federated stochastic multi-gradient descent averaging (FSMGDA).
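A hedged sketch of one federated round in the spirit of these algorithms, simplified to two objectives, full gradients, and a single local step (not the paper's exact procedure):

```python
# Hedged sketch in the spirit of FMGDA: each client finds a common
# descent direction for its two objectives via MGDA's closed form, and
# the server averages the clients' updates. Simplifications: single
# local step, two objectives, exact gradients.
import numpy as np

def mgda_direction(g1, g2):
    """Min-norm point in the convex hull of two gradients (MGDA).

    The result is a descent direction for both objectives."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        return g1
    # Closed form of argmin_{a in [0,1]} ||a*g1 + (1-a)*g2||^2.
    alpha = np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

def server_round(theta, client_grads, lr=0.1):
    """client_grads: list of (g1, g2) per-objective gradients, one per client."""
    updates = [-lr * mgda_direction(g1, g2) for g1, g2 in client_grads]
    return theta + np.mean(updates, axis=0)

theta = np.zeros(3)
grads = [(np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0]))]
print(server_round(theta, grads))
```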
arXiv Detail & Related papers (2023-10-15T15:45:51Z)
- MM-BigBench: Evaluating Multimodal Models on Multimodal Content Comprehension Tasks [56.60050181186531]
We introduce MM-BigBench, which incorporates a diverse range of metrics to offer an extensive evaluation of the performance of various models and instructions.
Our paper evaluates a total of 20 language models (14 MLLMs) on 14 multimodal datasets spanning 6 tasks, with 10 instructions for each task, and derives novel insights.
arXiv Detail & Related papers (2023-10-13T11:57:04Z)
- A Versatile Multi-Agent Reinforcement Learning Benchmark for Inventory Management [16.808873433821464]
Multi-agent reinforcement learning (MARL) models multiple agents that interact and learn within a shared environment.
Applying MARL to real-world scenarios is impeded by many challenges such as scaling up, complex agent interactions, and non-stationary dynamics.
arXiv Detail & Related papers (2023-06-13T05:22:30Z)
- CH-MARL: A Multimodal Benchmark for Cooperative, Heterogeneous Multi-Agent Reinforcement Learning [15.686200550604815]
We introduce a benchmark dataset with tasks involving collaboration between multiple simulated heterogeneous robots in a rich multi-room home environment.
We provide an integrated learning framework, multimodal implementations of state-of-the-art multi-agent reinforcement learning techniques, and a consistent evaluation protocol.
arXiv Detail & Related papers (2022-08-26T02:21:31Z)
- Policy Diagnosis via Measuring Role Diversity in Cooperative Multi-agent RL [107.58821842920393]
We quantify the behavioral differences between agents and relate them to policy performance via Role Diversity.
We find that the error bound in MARL can be decomposed into three parts that have a strong relation to the role diversity.
The decomposed factors can significantly impact policy optimization in three popular directions.
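One simple way to instantiate such a measure (illustrative only; the paper quantifies role diversity along several aspects) is the average pairwise distance between agents' action distributions on a shared batch of states:

```python
# Illustrative sketch of a policy-based role-diversity measure: the
# average pairwise total-variation distance between agents' action
# distributions on a shared batch of states. This is one simple
# instantiation, not the paper's full definition.
import numpy as np

def role_diversity(action_probs):
    """action_probs: (n_agents, n_states, n_actions) policy outputs.

    Returns the mean pairwise total-variation distance between agents."""
    n = action_probs.shape[0]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            # TV distance per state, then averaged over states.
            tv = 0.5 * np.abs(action_probs[i] - action_probs[j]).sum(axis=-1)
            total += tv.mean()
            pairs += 1
    return total / max(pairs, 1)

probs = np.random.default_rng(1).dirichlet(np.ones(5), size=(3, 10))
print(role_diversity(probs))
```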
arXiv Detail & Related papers (2022-06-01T04:58:52Z)
- A Unified Multi-task Learning Framework for Multi-goal Conversational Recommender Systems [91.70511776167488]
Four tasks are often involved in multi-goal conversational recommender systems (MG-CRS): Goal Planning, Topic Prediction, Item Recommendation, and Response Generation.
We propose a novel Unified MultI-goal conversational recommeNDer system, namely UniMIND.
Prompt-based learning strategies are investigated to endow the unified model with the capability of multi-task learning.
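As a rough illustration of the prompt-based multi-task idea (not UniMIND's actual prompt design), a single seq2seq model can be routed across all four tasks by prefixing a task-specific prompt to the dialogue context:

```python
# Rough illustration of prompt-based multi-task learning (not UniMIND's
# actual prompts): one seq2seq model handles all four MG-CRS tasks by
# prepending a task-specific prompt to the input sequence.
TASK_PROMPTS = {
    "goal_planning": "Plan the next conversational goal:",
    "topic_prediction": "Predict the next topic:",
    "item_recommendation": "Recommend an item:",
    "response_generation": "Generate the system response:",
}

def build_input(task: str, dialogue_context: str) -> str:
    """Prepend the task prompt so one model can learn all tasks."""
    return f"{TASK_PROMPTS[task]} {dialogue_context}"

print(build_input("topic_prediction", "User: I enjoyed that sci-fi film."))
```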
arXiv Detail & Related papers (2022-04-14T12:31:27Z)
- Regularize! Don't Mix: Multi-Agent Reinforcement Learning without Explicit Centralized Structures [8.883885464358737]
We propose Multi-Agent Regularized Q-learning (MARQ), which uses regularization for Multi-Agent Reinforcement Learning rather than learning explicit cooperative structures.
Our algorithm is evaluated on several benchmark multi-agent environments and we show that MARQ consistently outperforms several baselines and state-of-the-art algorithms.
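The abstract does not specify MARQ's regulariser, so the following is only a generic sketch of regularised independent Q-learning, with an L2 penalty on Q-values standing in for the paper's actual term:

```python
# Generic sketch of regularised Q-learning (the abstract does not
# specify MARQ's regulariser; an L2 penalty on Q-values stands in).
# Each agent minimises TD error plus the regularisation term.
import numpy as np

def td_loss_with_regularizer(q, q_target, r, gamma=0.99, lam=1e-3):
    """q: Q(s, a) of the taken actions; q_target: max_a' Q_target(s', a').

    Returns the regularised TD loss over a batch of transitions."""
    td_error = r + gamma * q_target - q
    td_loss = np.mean(td_error ** 2)
    reg = lam * np.mean(q ** 2)  # stand-in regulariser, not the paper's
    return td_loss + reg

q = np.array([1.0, 0.5])
q_target = np.array([1.2, 0.4])
r = np.array([0.0, 1.0])
print(td_loss_with_regularizer(q, q_target, r))
```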
arXiv Detail & Related papers (2021-09-19T00:58:38Z)
- MALib: A Parallel Framework for Population-based Multi-agent Reinforcement Learning [61.28547338576706]
Population-based multi-agent reinforcement learning (PB-MARL) refers to a family of methods in which reinforcement learning (RL) algorithms are nested within population-based training dynamics.
We present MALib, a scalable and efficient computing framework for PB-MARL.
arXiv Detail & Related papers (2021-06-05T03:27:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.