Model-based Multi-agent Reinforcement Learning: Recent Progress and
Prospects
- URL: http://arxiv.org/abs/2203.10603v1
- Date: Sun, 20 Mar 2022 17:24:47 GMT
- Title: Model-based Multi-agent Reinforcement Learning: Recent Progress and
Prospects
- Authors: Xihuai Wang, Zhicheng Zhang, Weinan Zhang
- Abstract summary: Multi-Agent Reinforcement Learning (MARL) tackles sequential decision-making problems involving multiple participants.
MARL requires a tremendous number of samples for effective training.
Model-based methods have been shown to achieve provable advantages in sample efficiency.
- Score: 23.347535672670688
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Significant advances have recently been achieved in Multi-Agent Reinforcement
Learning (MARL) which tackles sequential decision-making problems involving
multiple participants. However, MARL requires a tremendous number of samples
for effective training. On the other hand, model-based methods have been shown
to achieve provable advantages in sample efficiency. However, attempts to apply
model-based methods to MARL have begun only very recently. This paper
presents a review of the existing research on model-based MARL, including
theoretical analyses, algorithms, and applications, and analyzes the advantages
and potential of model-based MARL. Specifically, we provide a detailed taxonomy
of the algorithms and point out the pros and cons for each algorithm according
to the challenges inherent to multi-agent scenarios. We also outline promising
directions for future development of this field.
Related papers
- MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs [97.94579295913606]
Multimodal Large Language Models (MLLMs) have garnered increased attention from both industry and academia.
In the development process, evaluation is critical since it provides intuitive feedback and guidance on improving models.
This work aims to offer researchers an easy grasp of how to effectively evaluate MLLMs according to different needs and to inspire better evaluation methods.
arXiv Detail & Related papers (2024-11-22T18:59:54Z)
- EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a stateless reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
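The "optimal exploration algorithms" this entry alludes to include classics such as UCB1. As a self-contained toy sketch (the arm means, horizon, and seed below are invented for illustration, not taken from the paper), the loop pulls the arm with the highest upper confidence bound:

```python
# Minimal UCB1 on a Bernoulli bandit: pick the arm maximising
# empirical mean + exploration bonus sqrt(2 ln t / n_pulls).
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Return (total reward, per-arm pull counts) after `horizon` rounds."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k
    sums = [0.0] * k
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:                      # pull each arm once to initialise
            arm = t - 1
        else:                           # arm with highest upper confidence bound
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts

total, counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
print(counts)  # the 0.8 arm should receive the large majority of pulls
```

The exploration bonus shrinks as an arm is pulled more, so play concentrates on the empirically best arm while still occasionally revisiting the others.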
arXiv Detail & Related papers (2024-10-08T17:54:03Z)
- A Survey on Multimodal Benchmarks: In the Era of Large AI Models [13.299775710527962]
Multimodal Large Language Models (MLLMs) have brought substantial advancements in artificial intelligence.
This survey systematically reviews 211 benchmarks that assess MLLMs across four core domains: understanding, reasoning, generation, and application.
arXiv Detail & Related papers (2024-09-21T15:22:26Z)
- Model-Free Active Exploration in Reinforcement Learning [53.786439742572995]
We study the problem of exploration in Reinforcement Learning and present a novel model-free solution.
Our strategy is able to identify efficient policies faster than state-of-the-art exploration approaches.
arXiv Detail & Related papers (2024-06-30T19:00:49Z)
- Representation Learning For Efficient Deep Multi-Agent Reinforcement Learning [10.186029242664931]
We present MAPO-LSO, which applies a form of comprehensive representation learning devised to supplement MARL training.
Specifically, MAPO-LSO proposes a multi-agent extension of transition dynamics reconstruction and self-predictive learning.
Empirical results show that MAPO-LSO achieves notable improvements in sample efficiency and learning performance over its vanilla MARL counterpart.
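MAPO-LSO's exact architecture is not reproduced here; the numpy toy below only sketches the self-predictive idea the entry names: an encoder plus a latent dynamics model, with an auxiliary loss that pushes the predicted next latent toward the encoding of the observed next state. All shapes, the linear models, and the random data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented shapes: per-agent observations, a shared latent space.
obs_dim, latent_dim, n_agents = 8, 4, 3
W_enc = rng.normal(size=(obs_dim, latent_dim)) * 0.1            # shared encoder
W_dyn = rng.normal(size=(latent_dim + n_agents, latent_dim)) * 0.1  # latent dynamics

def encode(obs):
    """Linear stand-in for the observation encoder."""
    return obs @ W_enc

def predict_next(z, actions):
    """Latent transition model: predict next latent from latent + actions."""
    return np.concatenate([z, actions], axis=-1) @ W_dyn

# One toy batch of (obs, joint actions, next obs), one row per agent.
obs = rng.normal(size=(n_agents, obs_dim))
acts = np.eye(n_agents)              # one-hot action placeholders
next_obs = rng.normal(size=(n_agents, obs_dim))

z, z_next_target = encode(obs), encode(next_obs)
z_next_pred = predict_next(z, acts)

# Self-predictive auxiliary loss, to be added to the usual MARL objective.
aux_loss = np.mean((z_next_pred - z_next_target) ** 2)
print(aux_loss)
```

In an actual method this loss would be minimised jointly with the policy loss, shaping the representation without extra environment samples.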
arXiv Detail & Related papers (2024-06-05T03:11:44Z)
- Efficient Multi-agent Reinforcement Learning by Planning [33.51282615335009]
Multi-agent reinforcement learning (MARL) algorithms have accomplished remarkable breakthroughs in solving large-scale decision-making tasks.
Most existing MARL algorithms are model-free, limiting sample efficiency and hindering their applicability in more challenging scenarios.
We propose the MAZero algorithm, which combines a centralized model with Monte Carlo Tree Search (MCTS) for policy search.
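MAZero pairs a learned centralized model with MCTS; as a far simpler stand-in, the sketch below does exhaustive multi-step lookahead (the brute-force limit of tree search) with a known toy coordination model. The game, horizon, and rewards are invented for illustration.

```python
import itertools

N_AGENTS, ACTIONS, HORIZON = 2, (0, 1), 3

def step(state, joint_action):
    """Toy centralized model: reward +1 when the agents coordinate."""
    coordinated = len(set(joint_action)) == 1
    return state + (1 if coordinated else 0), (1.0 if coordinated else -1.0)

def plan(state):
    """Return the best first joint action by searching all action sequences."""
    joint_actions = list(itertools.product(ACTIONS, repeat=N_AGENTS))
    best_ret, best_first = -float("inf"), None
    for seq in itertools.product(joint_actions, repeat=HORIZON):
        s, ret = state, 0.0
        for ja in seq:                # roll the model forward
            s, r = step(s, ja)
            ret += r
        if ret > best_ret:
            best_ret, best_first = ret, seq[0]
    return best_first

print(plan(0))  # a coordinated joint action: (0, 0)
```

MCTS replaces this exhaustive enumeration with sampled tree expansion guided by value estimates, which is what makes planning tractable beyond toy action spaces.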
arXiv Detail & Related papers (2024-05-20T04:36:02Z)
- Model Composition for Multimodal Large Language Models [71.5729418523411]
We propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model.
Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters.
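NaiveMC is described as reusing modality encoders and merging LLM parameters; the paper's exact merge rule is not reproduced here. A commonly assumed baseline is an element-wise interpolation of matching weights, sketched below with invented parameter dicts:

```python
import numpy as np

def merge_params(params_a, params_b, alpha=0.5):
    """Interpolate two parameter dicts with identical keys and shapes."""
    assert params_a.keys() == params_b.keys()
    return {k: alpha * params_a[k] + (1 - alpha) * params_b[k]
            for k in params_a}

rng = np.random.default_rng(0)
a = {"w": rng.normal(size=(4, 4)), "b": np.zeros(4)}
b = {"w": rng.normal(size=(4, 4)), "b": np.ones(4)}

merged = merge_params(a, b)
print(merged["b"])  # midpoint of zeros and ones: all 0.5
```

Weight interpolation only makes sense when the two models share an architecture and initialisation lineage, which is the setting model-composition work targets.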
arXiv Detail & Related papers (2024-02-20T06:38:10Z)
- Let's reward step by step: Step-Level reward model as the Navigators for Reasoning [64.27898739929734]
Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks and observe similar performance improvements in code generation.
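The greedy step-level search this entry describes can be sketched with stubs: at each step, candidate continuations are proposed and the one a process reward model (PRM) scores highest is kept. The generator, the PRM, and the reference chain below are all toy stand-ins, not the paper's components.

```python
TARGET = [3, 1, 4, 1, 5]   # stand-in for a correct reasoning chain

def propose_steps(_prefix):
    """Stub for an LLM proposing candidate next reasoning steps."""
    return [0, 1, 2, 3, 4, 5]

def prm_score(prefix):
    """Stub PRM: fraction of steps so far that match the reference chain."""
    return sum(p == t for p, t in zip(prefix, TARGET)) / max(len(prefix), 1)

def greedy_search(n_steps=5):
    chain = []
    for _ in range(n_steps):
        # Greedy: extend with the candidate the PRM scores highest.
        best = max(propose_steps(chain), key=lambda s: prm_score(chain + [s]))
        chain.append(best)
    return chain

print(greedy_search())  # recovers TARGET under this toy PRM
```

The point of step-level (rather than outcome-level) feedback is exactly this: the search can be steered at every intermediate step instead of only after a full solution is produced.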
arXiv Detail & Related papers (2023-10-16T05:21:50Z)
- ESP: Exploiting Symmetry Prior for Multi-Agent Reinforcement Learning [22.733348449818838]
Multi-agent reinforcement learning (MARL) has achieved promising results in recent years.
This paper proposes a framework for exploiting prior knowledge by integrating data augmentation and a well-designed consistency loss.
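One concrete form of symmetry-prior data augmentation, sketched below as a hypothetical toy (ESP's actual symmetries and consistency loss are not reproduced): when agents are interchangeable, permuting agent indices in a stored transition yields an equally valid transition, multiplying the effective data for free.

```python
import itertools
import numpy as np

def permute_transition(obs, actions, rewards, perm):
    """Apply one agent-index permutation to a (obs, actions, rewards) tuple."""
    idx = list(perm)
    return obs[idx], actions[idx], rewards[idx]

rng = np.random.default_rng(0)
obs = rng.normal(size=(3, 4))          # 3 agents, observation dim 4
acts = rng.integers(0, 2, size=3)      # discrete actions
rews = rng.normal(size=3)

augmented = [permute_transition(obs, acts, rews, p)
             for p in itertools.permutations(range(3))]
print(len(augmented))  # 3! = 6 symmetric copies of one transition
```

A consistency loss would then additionally penalise the learned networks for assigning different values to symmetric transitions, rather than relying on augmentation alone.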
arXiv Detail & Related papers (2023-07-30T09:49:05Z)
- MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.