An Introduction to Multi-Agent Reinforcement Learning and Review of its Application to Autonomous Mobility
- URL: http://arxiv.org/abs/2203.07676v1
- Date: Tue, 15 Mar 2022 06:40:28 GMT
- Title: An Introduction to Multi-Agent Reinforcement Learning and Review of its Application to Autonomous Mobility
- Authors: Lukas M. Schmidt, Johanna Brosig, Axel Plinge, Bjoern M. Eskofier, Christopher Mutschler
- Abstract summary: Multi-Agent Reinforcement Learning (MARL) is a research field that aims to find optimal solutions for multiple agents that interact with each other.
This work aims to give an overview of the field to researchers in autonomous mobility.
- Score: 1.496194593196997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many scenarios in mobility and traffic involve multiple different agents that
need to cooperate to find a joint solution. Recent advances in behavioral
planning use Reinforcement Learning to find effective and performant behavior
strategies. However, as autonomous vehicles and vehicle-to-X communications
become more mature, solutions that only utilize single, independent agents
leave potential performance gains on the road. Multi-Agent Reinforcement
Learning (MARL) is a research field that aims to find optimal solutions for
multiple agents that interact with each other. This work aims to give an
overview of the field to researchers in autonomous mobility. We first explain
MARL and introduce important concepts. Then, we discuss the central paradigms
that underlie MARL algorithms, and give an overview of state-of-the-art methods
and ideas in each paradigm. With this background, we survey applications of
MARL in autonomous mobility scenarios and give an overview of existing
scenarios and implementations.
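To make the abstract's notion of "multiple agents that interact with each other" concrete, the following is a minimal, self-contained sketch of independent learners, one of the basic setups MARL surveys typically cover. It is not taken from the paper; the 2x2 coordination game, the hyperparameters, and the class names are assumptions made purely for illustration. Each agent runs its own Q-learning update and treats the other agent as part of its environment.

```python
# Illustrative sketch (not from the paper): two independent Q-learning agents
# repeatedly play a 2x2 coordination game. Each agent treats the other agent
# as part of the environment -- the simplest MARL setup ("independent learners").
import random

ACTIONS = [0, 1]  # e.g., two candidate maneuvers

def joint_reward(a0, a1):
    # Both agents are rewarded only when they choose the same action.
    return 1.0 if a0 == a1 else 0.0

class IndependentQLearner:
    def __init__(self, eps=0.1, alpha=0.1):
        self.q = {a: 0.0 for a in ACTIONS}  # stateless Q-value per action
        self.eps, self.alpha = eps, alpha

    def act(self):
        if random.random() < self.eps:      # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Bandit-style update; no bootstrapping since the game is stateless.
        self.q[action] += self.alpha * (reward - self.q[action])

agents = [IndependentQLearner(), IndependentQLearner()]
for step in range(2000):
    a0, a1 = agents[0].act(), agents[1].act()
    r = joint_reward(a0, a1)                # both agents receive the same reward
    agents[0].update(a0, r)
    agents[1].update(a1, r)

# After training, both agents typically prefer the same action.
print([agent.q for agent in agents])
```

Even this toy setup exposes the central difficulty that motivates most MARL paradigms: from each agent's perspective the environment is non-stationary, because the other agent's policy changes as it learns.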
Related papers
- Scaling Autonomous Agents via Automatic Reward Modeling And Planning [52.39395405893965]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of tasks.
However, they still struggle with problems requiring multi-step decision-making and environmental feedback.
We propose a framework that can automatically learn a reward model from the environment without human annotations.
arXiv Detail & Related papers (2025-02-17T18:49:25Z) - MALT: Improving Reasoning with Multi-Agent LLM Training [64.13803241218886]
We present a first step toward "Multi-agent LLM training" (MALT) on reasoning problems.
Our approach employs a sequential multi-agent setup with heterogeneous LLMs assigned specialized roles.
We evaluate our approach across MATH, GSM8k, and CQA, where MALT on Llama 3.1 8B models achieves relative improvements of 14.14%, 7.12%, and 9.40% respectively.
arXiv Detail & Related papers (2024-12-02T19:30:36Z) - KoMA: Knowledge-driven Multi-agent Framework for Autonomous Driving with Large Language Models [15.951550445568605]
Large language models (LLMs) as autonomous agents offer a novel avenue for tackling real-world challenges through a knowledge-driven manner.
We propose the KoMA framework consisting of multi-agent interaction, multi-step planning, shared-memory, and ranking-based reflection modules.
arXiv Detail & Related papers (2024-07-19T12:13:08Z) - A Distributed Approach to Autonomous Intersection Management via Multi-Agent Reinforcement Learning [4.659033572014701]
We show that by leveraging the 3D surround view technology for advanced assistance systems, autonomous vehicles can accurately navigate intersection scenarios without needing any centralised controller.
We validate our approach as an innovative alternative to conventional centralised AIM techniques and demonstrate its efficacy.
arXiv Detail & Related papers (2024-05-14T14:34:24Z) - Beyond One Model Fits All: Ensemble Deep Learning for Autonomous Vehicles [16.398646583844286]
This study introduces three distinct neural network models corresponding to Mediated Perception, Behavior Reflex, and Direct Perception approaches.
Our architecture fuses information from the base, future latent vector prediction, and auxiliary task networks, using global routing commands to select appropriate action sub-networks.
arXiv Detail & Related papers (2023-12-10T04:40:02Z) - Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z) - LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving [87.1164964709168]
This work employs Large Language Models (LLMs) as a decision-making component for complex autonomous driving scenarios.
Extensive experiments demonstrate that our proposed method not only consistently surpasses baseline approaches in single-vehicle tasks, but also helps handle complex driving behaviors, including multi-vehicle coordination.
arXiv Detail & Related papers (2023-10-04T17:59:49Z) - Collaborative Visual Navigation [69.20264563368762]
We propose a large-scale 3D dataset, CollaVN, for multi-agent visual navigation (MAVN).
Diverse MAVN variants are explored to make our problem more general.
A memory-augmented communication framework is proposed. Each agent is equipped with a private, external memory to persistently store communication information.
arXiv Detail & Related papers (2021-07-02T15:48:16Z) - SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving [96.50297622371457]
Multi-agent interaction is a fundamental aspect of autonomous driving in the real world.
Despite more than a decade of research and development, the problem of how to interact with diverse road users in diverse scenarios remains largely unsolved.
We develop a dedicated simulation platform called SMARTS that generates diverse and competent driving interactions.
arXiv Detail & Related papers (2020-10-19T18:26:10Z) - MIDAS: Multi-agent Interaction-aware Decision-making with Adaptive Strategies for Urban Autonomous Navigation [22.594295184455]
This paper builds a reinforcement learning-based method named MIDAS where an ego-agent learns to affect the control actions of other cars.
MIDAS is validated using extensive experiments and we show that it (i) can work across different road geometries, (ii) is robust to changes in the driving policies of external agents, and (iii) is more efficient and safer than existing approaches to interaction-aware decision-making.
arXiv Detail & Related papers (2020-08-17T04:34:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.