MADRaS : Multi Agent Driving Simulator
- URL: http://arxiv.org/abs/2010.00993v1
- Date: Fri, 2 Oct 2020 13:38:49 GMT
- Title: MADRaS : Multi Agent Driving Simulator
- Authors: Anirban Santara, Sohan Rudra, Sree Aditya Buridi, Meha Kaushik,
Abhishek Naik, Bharat Kaul, Balaraman Ravindran
- Abstract summary: We present MADRaS, an open-source multi-agent driving simulator for use in the design and evaluation of motion planning algorithms for autonomous driving.
MADRaS is built on TORCS, an open-source car-racing simulator.
- Score: 15.451658979433667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present MADRaS, an open-source multi-agent driving simulator
for use in the design and evaluation of motion planning algorithms for
autonomous driving. MADRaS provides a platform for constructing a wide variety
of highway and track driving scenarios where multiple driving agents can train
for motion planning tasks using reinforcement learning and other machine
learning algorithms. MADRaS is built on TORCS, an open-source car-racing
simulator. TORCS offers a variety of cars with different dynamic properties and
driving tracks with different geometries and surface properties. MADRaS
inherits these functionalities from TORCS and introduces support for
multi-agent training, inter-vehicular communication, noisy observations,
stochastic actions, and custom traffic cars whose behaviours can be programmed
to simulate challenging traffic conditions encountered in the real world.
MADRaS can be used to create driving tasks whose complexities can be tuned
along eight axes in well-defined steps. This makes it particularly suited for
curriculum and continual learning. MADRaS is lightweight and it provides a
convenient OpenAI Gym interface for independent control of each car. Apart from
the primitive steering-acceleration-brake control mode of TORCS, MADRaS offers
a hierarchical track-position -- speed control that can potentially be used to
achieve better generalization. MADRaS uses multiprocessing to run each agent as
a parallel process for efficiency and integrates well with popular
reinforcement learning libraries like RLLib.
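The Gym interface described in the abstract means each MADRaS car can, in principle, be driven with the standard reset/step loop. The sketch below is illustrative only: the environment id "Madras-v0" and the use of the classic Gym API are assumptions, and the actual registered id, configuration options, and observation/action spaces are defined by the MADRaS package itself.
```python
import gym  # MADRaS exposes each car through the OpenAI Gym API

# NOTE: "Madras-v0" is a hypothetical environment id used for illustration;
# check the MADRaS package for the actual registered id and its config file.
env = gym.make("Madras-v0")

obs = env.reset()
done = False
episode_return = 0.0
while not done:
    # Random actions in the primitive steering-acceleration-brake mode;
    # MADRaS also offers a hierarchical track-position/speed control mode.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)  # classic (pre-0.26) Gym step signature
    episode_return += reward

env.close()
print("episode return:", episode_return)
```
Because each agent runs as its own process behind a Gym environment, the same loop can be wrapped for library-based training, for example by registering the environment with RLlib via ray.tune.registry.register_env; the exact registration details depend on the MADRaS release in use.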
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
- CarDreamer: Open-Source Learning Platform for World Model based Autonomous Driving [25.49856190295859]
World model (WM) based reinforcement learning (RL) has emerged as a promising approach by learning and predicting the complex dynamics of various environments.
However, there does not exist an accessible platform for training and testing such algorithms in sophisticated driving environments.
We introduce CarDreamer, the first open-source learning platform designed specifically for developing WM based autonomous driving algorithms.
arXiv Detail & Related papers (2024-05-15T05:57:20Z)
- Scenario-Based Curriculum Generation for Multi-Agent Autonomous Driving [7.277126044624995]
We introduce MATS-Gym, a Multi-Agent Traffic Scenario framework to train agents in CARLA, a high-fidelity driving simulator.
This paper unifies various existing approaches to traffic scenario description into a single training framework and demonstrates how it can be integrated with techniques from unsupervised environment design to automate the generation of adaptive auto-curricula.
arXiv Detail & Related papers (2024-03-26T15:42:04Z)
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z)
- Comprehensive Training and Evaluation on Deep Reinforcement Learning for Automated Driving in Various Simulated Driving Maneuvers [0.4241054493737716]
This study implements, evaluates, and compares two DRL algorithms: Deep Q-Networks (DQN) and Trust Region Policy Optimization (TRPO).
Models trained on the designed ComplexRoads environment can adapt well to other driving maneuvers with promising overall performance.
arXiv Detail & Related papers (2023-06-20T11:41:01Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- Fully End-to-end Autonomous Driving with Semantic Depth Cloud Mapping and Multi-Agent [2.512827436728378]
We propose a novel deep learning model trained in an end-to-end, multi-task learning manner to perform both perception and control tasks simultaneously.
The model is evaluated on the CARLA simulator in various scenarios comprising normal and adversarial situations and different weather conditions to mimic the real world.
arXiv Detail & Related papers (2022-04-12T03:57:01Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving [96.50297622371457]
Multi-agent interaction is a fundamental aspect of autonomous driving in the real world.
Despite more than a decade of research and development, the problem of how to interact with diverse road users in diverse scenarios remains largely unsolved.
We develop a dedicated simulation platform called SMARTS that generates diverse and competent driving interactions.
arXiv Detail & Related papers (2020-10-19T18:26:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.