SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for
Autonomous Driving
- URL: http://arxiv.org/abs/2010.09776v2
- Date: Sun, 1 Nov 2020 01:32:36 GMT
- Title: SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for
Autonomous Driving
- Authors: Ming Zhou, Jun Luo, Julian Villella, Yaodong Yang, David Rusu, Jiayu
Miao, Weinan Zhang, Montgomery Alban, Iman Fadakar, Zheng Chen, Aurora
Chongxi Huang, Ying Wen, Kimia Hassanzadeh, Daniel Graves, Dong Chen,
Zhengbang Zhu, Nhat Nguyen, Mohamed Elsayed, Kun Shao, Sanjeevan Ahilan,
Baokuan Zhang, Jiannan Wu, Zhengang Fu, Kasra Rezaee, Peyman Yadmellat,
Mohsen Rohani, Nicolas Perez Nieves, Yihan Ni, Seyedershad Banijamali,
Alexander Cowen Rivers, Zheng Tian, Daniel Palenicek, Haitham bou Ammar,
Hongbo Zhang, Wulong Liu, Jianye Hao, Jun Wang
- Abstract summary: Multi-agent interaction is a fundamental aspect of autonomous driving in the real world.
Despite more than a decade of research and development, the problem of how to interact with diverse road users in diverse scenarios remains largely unsolved.
We develop a dedicated simulation platform called SMARTS that generates diverse and competent driving interactions.
- Score: 96.50297622371457
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-agent interaction is a fundamental aspect of autonomous driving in the
real world. Despite more than a decade of research and development, the problem
of how to competently interact with diverse road users in diverse scenarios
remains largely unsolved. Learning methods have much to offer towards solving
this problem. But they require a realistic multi-agent simulator that generates
diverse and competent driving interactions. To meet this need, we develop a
dedicated simulation platform called SMARTS (Scalable Multi-Agent RL Training
School). SMARTS supports the training, accumulation, and use of diverse
behavior models of road users. These are in turn used to create increasingly
realistic and diverse interactions that enable deeper and broader research
on multi-agent interaction. In this paper, we describe the design goals of
SMARTS, explain its basic architecture and its key features, and illustrate its
use through concrete multi-agent experiments on interactive scenarios. We
open-source the SMARTS platform and the associated benchmark tasks and
evaluation metrics to encourage and empower research on multi-agent learning
for autonomous driving. Our code is available at
https://github.com/huawei-noah/SMARTS.
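As a rough illustration of the gym-style multi-agent workflow the abstract describes, the sketch below builds two simple agents and steps them through a SMARTS scenario. The import paths, environment id, scenario path, and "keep_lane" action string follow early releases of the open-sourced repository and may have changed since; treat every name here as an assumption to verify against the current code rather than a definitive API reference.

    # Hedged sketch of a SMARTS multi-agent rollout; names and paths are
    # assumed from early releases of https://github.com/huawei-noah/SMARTS.
    import gym
    from smarts.core.agent import Agent, AgentSpec  # newer versions may expose AgentSpec elsewhere
    from smarts.core.agent_interface import AgentInterface, AgentType

    class KeepLaneAgent(Agent):
        """Trivial policy: always keep the current lane."""
        def act(self, obs):
            return "keep_lane"  # a valid action for the Laner action space

    agent_specs = {
        agent_id: AgentSpec(
            interface=AgentInterface.from_type(AgentType.Laner, max_episode_steps=500),
            agent_builder=KeepLaneAgent,
        )
        for agent_id in ("agent-0", "agent-1")
    }

    env = gym.make(
        "smarts.env:hiway-v0",         # environment id used in the paper-era codebase
        scenarios=["scenarios/loop"],  # assumed example scenario path
        agent_specs=agent_specs,
    )

    agents = {aid: spec.build_agent() for aid, spec in agent_specs.items()}
    observations = env.reset()
    done = False
    while not done:
        actions = {aid: agents[aid].act(obs) for aid, obs in observations.items()}
        observations, rewards, dones, infos = env.step(actions)
        done = dones["__all__"]
    env.close()

The dictionary-keyed observations, actions, and dones (with the "__all__" aggregate key) are what make the interface naturally multi-agent: adding more road users means adding entries, not changing the loop.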
Related papers
- OpenWebVoyager: Building Multimodal Web Agents via Iterative Real-World Exploration, Feedback and Optimization [66.22117723598872]
We introduce an open-source framework designed to facilitate the development of multimodal web agents.
We first train the base model with imitation learning to gain the basic abilities.
We then let the agent explore the open web and collect feedback on its trajectories.
arXiv Detail & Related papers (2024-10-25T15:01:27Z) - Multi-Agent Reinforcement Learning for Autonomous Driving: A Survey [14.73689900685646]
Reinforcement Learning (RL) is a potent tool for sequential decision-making and has achieved performance surpassing human capabilities.
As the extension of RL to the multi-agent domain, multi-agent RL (MARL) must not only learn a control policy but also account for interactions with all other agents in the environment.
Simulators are crucial for obtaining realistic data, which is fundamental to RL.
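To make the added difficulty of MARL concrete, here is a minimal, self-contained sketch of independent tabular Q-learning for two agents: each agent updates its own Q-table, and from its point of view the other agent is simply part of a non-stationary environment. The dynamics and reward below are invented purely for illustration and are not taken from the survey.

    # Independent Q-learning: two agents, one Q-table each, toy joint dynamics.
    import random
    from collections import defaultdict

    N_STATES, N_ACTIONS = 5, 2
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

    def toy_step(state, a0, a1):
        """Invented joint dynamics: the reward depends on both agents' actions."""
        reward = 1.0 if a0 == a1 else -0.1            # coordination bonus
        next_state = (state + a0 + a1) % N_STATES
        return next_state, reward

    def eps_greedy(q, state):
        if random.random() < EPSILON:
            return random.randrange(N_ACTIONS)
        return max(range(N_ACTIONS), key=lambda a: q[(state, a)])

    q_tables = [defaultdict(float), defaultdict(float)]  # one table per agent
    state = 0
    for _ in range(10_000):
        actions = [eps_greedy(q, state) for q in q_tables]
        next_state, reward = toy_step(state, *actions)
        for q, a in zip(q_tables, actions):
            best_next = max(q[(next_state, b)] for b in range(N_ACTIONS))
            q[(state, a)] += ALPHA * (reward + GAMMA * best_next - q[(state, a)])
        state = next_state

Because each agent's learning changes the environment the other agent experiences, neither Q-table faces a stationary problem; that non-stationarity is the core complication MARL methods must address.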
arXiv Detail & Related papers (2024-08-19T03:31:20Z) - An Interactive Agent Foundation Model [49.77861810045509]
We propose an Interactive Agent Foundation Model that uses a novel multi-task agent training paradigm for training AI agents.
Our training paradigm unifies diverse pre-training strategies, including visual masked auto-encoders, language modeling, and next-action prediction.
We demonstrate the performance of our framework across three separate domains -- Robotics, Gaming AI, and Healthcare.
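As a hedged sketch of what such a unified objective could look like, the toy module below combines a masked-patch reconstruction loss, a next-token language-modeling loss, and a next-action classification loss into a single weighted sum. The architecture, tensor shapes, and loss weights are illustrative assumptions, not the paper's actual model.

    # Toy multi-task pre-training objective: masked auto-encoding + language
    # modeling + next-action prediction. Shapes and weights are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyInteractiveAgent(nn.Module):
        def __init__(self, patch_dim=16, vocab=100, n_actions=8, hidden=32):
            super().__init__()
            self.patch_encoder = nn.Linear(patch_dim, hidden)
            self.patch_decoder = nn.Linear(hidden, patch_dim)
            self.token_embed = nn.Embedding(vocab, hidden)
            self.lm_head = nn.Linear(hidden, vocab)
            self.action_head = nn.Linear(hidden, n_actions)

    def unified_loss(m, batch, w_mae=1.0, w_lm=1.0, w_act=1.0):
        # 1) Visual masked auto-encoding: reconstruct the masked patches.
        recon = m.patch_decoder(m.patch_encoder(batch["masked_patches"]))
        loss_mae = F.mse_loss(recon, batch["target_patches"])
        # 2) Language modeling: predict each next token.
        logits = m.lm_head(m.token_embed(batch["tokens"]))
        loss_lm = F.cross_entropy(
            logits[:, :-1].reshape(-1, logits.size(-1)),
            batch["tokens"][:, 1:].reshape(-1),
        )
        # 3) Next-action prediction from the pooled token context.
        action_logits = m.action_head(m.token_embed(batch["tokens"]).mean(dim=1))
        loss_act = F.cross_entropy(action_logits, batch["next_action"])
        return w_mae * loss_mae + w_lm * loss_lm + w_act * loss_act

    batch = {
        "masked_patches": torch.randn(4, 16),
        "target_patches": torch.randn(4, 16),
        "tokens": torch.randint(0, 100, (4, 10)),
        "next_action": torch.randint(0, 8, (4,)),
    }
    print(unified_loss(ToyInteractiveAgent(), batch))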
arXiv Detail & Related papers (2024-02-08T18:58:02Z) - Drive Anywhere: Generalizable End-to-end Autonomous Driving with
Multi-modal Foundation Models [114.69732301904419]
We present an approach to end-to-end, open-set (any environment/scene) autonomous driving that provides driving decisions from representations queryable by image and text.
Our approach demonstrates unparalleled results in diverse tests while achieving significantly greater robustness in out-of-distribution situations.
arXiv Detail & Related papers (2023-10-26T17:56:35Z) - Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous
Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
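The "in-graph simulation" idea -- keeping the environment step inside the compiled computation graph so entire rollouts run on the accelerator -- can be sketched with a jit-compiled step function scanned over time. The toy single-vehicle kinematics below are a stand-in for illustration and do not reflect Waymax's actual API.

    # In-graph rollout sketch: a pure step function scanned over time with
    # jax.lax.scan, so the whole rollout compiles to one accelerator program.
    import jax
    import jax.numpy as jnp

    def step(state, action):
        """state = (x, y, heading, speed); action = (steer, accel). Toy dynamics."""
        x, y, heading, speed = state
        steer, accel = action
        dt = 0.1
        heading = heading + steer * dt
        speed = speed + accel * dt
        x = x + speed * jnp.cos(heading) * dt
        y = y + speed * jnp.sin(heading) * dt
        new_state = jnp.array([x, y, heading, speed])
        return new_state, new_state           # (carry, per-step output)

    @jax.jit
    def rollout(initial_state, actions):
        # actions has shape [T, 2]; the loop is traced into a single XLA program.
        _, trajectory = jax.lax.scan(step, initial_state, actions)
        return trajectory

    init = jnp.array([0.0, 0.0, 0.0, 5.0])
    acts = jnp.zeros((100, 2))                # 100 steps of zero steering/acceleration
    print(rollout(init, acts).shape)          # (100, 4), computed on TPU/GPU if available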
arXiv Detail & Related papers (2023-10-12T20:49:15Z) - From Multi-agent to Multi-robot: A Scalable Training and Evaluation
Platform for Multi-robot Reinforcement Learning [12.74238738538799]
Multi-agent reinforcement learning (MARL) has been gaining extensive attention from academia and industries in the past few decades.
It remains unknown how these methods perform in real-world scenarios, especially in multi-robot systems.
This paper introduces a scalable emulation platform for multi-robot reinforcement learning (MRRL) called SMART to meet this need.
arXiv Detail & Related papers (2022-06-20T06:36:45Z) - An Introduction to Multi-Agent Reinforcement Learning and Review of its
Application to Autonomous Mobility [1.496194593196997]
Multi-Agent Reinforcement Learning (MARL) is a research field that aims to find optimal solutions for multiple agents that interact with each other.
This work aims to give an overview of the field to researchers in autonomous mobility.
arXiv Detail & Related papers (2022-03-15T06:40:28Z) - Evaluating the Robustness of Deep Reinforcement Learning for Autonomous
Policies in a Multi-agent Urban Driving Environment [3.8073142980733]
We propose a benchmarking framework for comparing deep reinforcement learning algorithms in vision-based autonomous driving.
We run the experiments in vision-only, high-fidelity simulated urban driving environments.
The results indicate that only some of the deep reinforcement learning algorithms perform consistently better across single and multi-agent scenarios.
arXiv Detail & Related papers (2021-12-22T15:14:50Z) - MetaDrive: Composing Diverse Driving Scenarios for Generalizable
Reinforcement Learning [25.191567110519866]
We develop a new driving simulation platform called MetaDrive for the study of reinforcement learning algorithms.
Based on MetaDrive, we construct a variety of RL tasks and baselines in both single-agent and multi-agent settings.
arXiv Detail & Related papers (2021-09-26T18:34:55Z) - PsiPhi-Learning: Reinforcement Learning with Demonstrations using
Successor Features and Inverse Temporal Difference Learning [102.36450942613091]
We propose an inverse reinforcement learning algorithm, called inverse temporal difference learning (ITD).
We show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called $\Psi\Phi$-learning.
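The successor-feature decomposition behind this family of methods, Q(s, a) = psi(s, a) . w, can be illustrated with a small linear example in which psi is updated by a TD-style rule on a shared feature map phi, and w plays the role of an agent-specific preference vector. The sizes, the fabricated demonstration, and the fixed w below are simplifications for illustration, not the $\Psi\Phi$-learning algorithm itself.

    # Toy successor-feature sketch: Q(s, a) = psi(s, a) . w, with psi learned
    # by a TD-style update on shared features phi along demonstration data.
    import numpy as np

    rng = np.random.default_rng(0)
    N_STATES, N_ACTIONS, D = 6, 3, 4
    GAMMA, ALPHA = 0.9, 0.1

    phi = rng.normal(size=(N_STATES, N_ACTIONS, D))   # shared cumulant features
    psi = np.zeros((N_STATES, N_ACTIONS, D))          # successor features
    w = rng.normal(size=D)                            # demonstrator's preference vector

    def td_update(s, a, s_next, a_next):
        """One TD-style update of psi along an observed transition."""
        target = phi[s, a] + GAMMA * psi[s_next, a_next]
        psi[s, a] += ALPHA * (target - psi[s, a])

    # Replay a fabricated demonstration (s, a, s', a') and update psi.
    demo = [(0, 1, 1, 2), (1, 2, 2, 0), (2, 0, 3, 1)]
    for s, a, s_next, a_next in demo:
        td_update(s, a, s_next, a_next)

    print(psi[0] @ w)   # Q(0, .) implied by the learned psi and the assumed w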
arXiv Detail & Related papers (2021-02-24T21:12:09Z)