Multi-Agent Deep Reinforcement Learning for Cooperative and Competitive
Autonomous Vehicles using AutoDRIVE Ecosystem
- URL: http://arxiv.org/abs/2309.10007v2
- Date: Sat, 30 Sep 2023 08:33:37 GMT
- Title: Multi-Agent Deep Reinforcement Learning for Cooperative and Competitive
Autonomous Vehicles using AutoDRIVE Ecosystem
- Authors: Tanmay Vilas Samak, Chinmay Vilas Samak and Venkat Krovi
- Abstract summary: We introduce AutoDRIVE Ecosystem as an enabler to develop physically accurate and graphically realistic digital twins of Nigel and F1TENTH.
We first investigate an intersection problem using a set of cooperative vehicles (Nigel) that share limited state information with each other in single as well as multi-agent learning settings.
We then investigate an adversarial head-to-head autonomous racing problem using a different set of vehicles (F1TENTH) in a multi-agent learning setting using an individual policy approach.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work presents a modular and parallelizable multi-agent deep
reinforcement learning framework for instilling cooperative as well as
competitive behaviors within autonomous vehicles. We introduce AutoDRIVE
Ecosystem as an enabler to develop physically accurate and graphically
realistic digital twins of Nigel and F1TENTH, two scaled autonomous vehicle
platforms with unique qualities and capabilities, and leverage this ecosystem
to train and deploy multi-agent reinforcement learning policies. We first
investigate an intersection traversal problem using a set of cooperative
vehicles (Nigel) that share limited state information with each other in single
as well as multi-agent learning settings using a common policy approach. We
then investigate an adversarial head-to-head autonomous racing problem using a
different set of vehicles (F1TENTH) in a multi-agent learning setting using an
individual policy approach. In either set of experiments, a decentralized
learning architecture was adopted, which allowed robust training and testing of
the approaches in stochastic environments, since the agents were mutually
independent and exhibited asynchronous motion behavior. The problems were
further aggravated by providing the agents with sparse observation spaces and
requiring them to sample control commands that implicitly satisfied the imposed
kinodynamic as well as safety constraints. The experimental results for both
problem statements are reported in terms of quantitative metrics and
qualitative remarks for training as well as deployment phases.
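The abstract distinguishes two policy arrangements: a common (shared) policy for the cooperative intersection-traversal agents and individual policies for the competitive racing agents. The difference can be sketched as follows; the class and names below are hypothetical illustrations, not the authors' code.

```python
import random

class TabularPolicy:
    """A minimal policy over discrete states and actions."""
    def __init__(self, n_actions, seed=0):
        self.n_actions = n_actions
        self.prefs = {}                  # state -> per-action preferences
        self.rng = random.Random(seed)

    def act(self, state):
        # Act uniformly at random until preferences have been learned.
        prefs = self.prefs.get(state)
        if prefs is None:
            return self.rng.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: prefs[a])

# Common-policy setting (e.g. cooperative intersection traversal):
# every agent references the same policy object, so all experience
# updates one shared set of parameters.
shared = TabularPolicy(n_actions=3)
cooperative_agents = [shared for _ in range(4)]
assert all(p is shared for p in cooperative_agents)

# Individual-policy setting (e.g. head-to-head racing): each agent
# keeps its own independent parameters.
competitive_agents = [TabularPolicy(n_actions=3, seed=i) for i in range(2)]
assert competitive_agents[0] is not competitive_agents[1]
```

In either arrangement the agents can still act asynchronously and independently at execution time, which is what the decentralized architecture described above refers to.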
Related papers
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency performance in complex scenarios is still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z)
- Path Following and Stabilisation of a Bicycle Model using a Reinforcement Learning Approach
This work introduces an RL approach to do path following with a virtual bicycle model while simultaneously stabilising it laterally.
The agent succeeds in both path following and stabilisation of the bicycle model exclusively by outputting steering angles.
The performance of the deployed agents is evaluated using different types of paths and measurements.
arXiv Detail & Related papers (2024-07-24T10:54:23Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Model-Based Reinforcement Learning with Isolated Imaginations
We propose Iso-Dream++, a model-based reinforcement learning approach.
We perform policy optimization based on the decoupled latent imaginations.
This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild.
arXiv Detail & Related papers (2023-03-27T02:55:56Z)
- Safe Multi-agent Learning via Trapping Regions
We apply the concept of trapping regions, known from qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verification that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where learning dynamics are not known.
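The sampling-based verification idea can be sketched as follows: sample points on the boundary of a candidate set and check that one step of the learning dynamics maps each point back into the set. The contracting dynamics, box-shaped candidate set, and function names below are illustrative assumptions, not the paper's algorithm.

```python
import random

def step(x, eta=0.1):
    # Hypothetical learning dynamics: a gradient step toward the origin,
    # standing in for unknown decentralized learning updates.
    return [xi - eta * 2.0 * xi for xi in x]

def inside(x, lo=-1.0, hi=1.0):
    return all(lo <= xi <= hi for xi in x)

def looks_like_trapping_region(dim=2, n_samples=1000, seed=0):
    """Sample points on the boundary of the box [-1, 1]^dim and check
    that one dynamics step maps each back inside. Passing this check is
    evidence, not a proof, that the box is a trapping region."""
    rng = random.Random(seed)
    for _ in range(n_samples):
        x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        i = rng.randrange(dim)
        x[i] = rng.choice([-1.0, 1.0])   # project onto one face of the box
        if not inside(step(x)):
            return False
    return True

print(looks_like_trapping_region())  # → True for these contracting dynamics
```

A sampling check like this is what makes the approach usable when the learning dynamics are only available as a black box.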
arXiv Detail & Related papers (2023-02-27T14:47:52Z)
- NeurIPS 2022 Competition: Driving SMARTS
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts.
The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods.
arXiv Detail & Related papers (2022-11-14T17:10:53Z)
- A New Approach to Training Multiple Cooperative Agents for Autonomous Driving
This paper proposes Lepus, a new approach to training multiple agents.
Lepus pre-trains the policy networks via an adversarial process.
For alleviating the problem of sparse rewards, Lepus learns an approximate reward function from expert trajectories.
arXiv Detail & Related papers (2022-09-05T22:35:33Z)
- Multi-Agent Car Parking using Reinforcement Learning
This study applies reinforcement learning to the problem of multi-agent car parking.
We design and implement a flexible car parking environment in the form of a Markov decision process with independent learners.
We obtain models parking up to 7 cars with a success rate of over 98.1%, significantly beating existing single-agent models.
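"Independent learners" here means each agent runs its own single-agent update rule, treating the other agents as part of the environment. A minimal tabular sketch of that setup (toy one-state parking game, hypothetical names) might look like:

```python
import random

class IndependentQLearner:
    """A standard tabular Q-learner that knows nothing of other agents."""
    def __init__(self, n_actions, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
        self.q = {}                       # (state, action) -> value
        self.n_actions = n_actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = random.Random(seed)

    def act(self, s):
        if self.rng.random() < self.eps:  # epsilon-greedy exploration
            return self.rng.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q.get((s, a), 0.0))

    def update(self, s, a, r, s_next):
        best_next = max(self.q.get((s_next, b), 0.0) for b in range(self.n_actions))
        td = r + self.gamma * best_next - self.q.get((s, a), 0.0)
        self.q[(s, a)] = self.q.get((s, a), 0.0) + self.alpha * td

# Two cars learn independently: action 0 targets spot A, action 1 spot B;
# picking the same spot is a conflict (-1), otherwise both park (+1).
cars = [IndependentQLearner(n_actions=2, seed=i) for i in range(2)]
for _ in range(500):
    actions = [car.act("lot") for car in cars]
    reward = -1.0 if actions[0] == actions[1] else 1.0
    for car, a in zip(cars, actions):
        car.update("lot", a, reward, "done")
```

Because each learner's environment includes the other, still-changing agents, the problem is non-stationary from any one agent's perspective, which is the central difficulty such studies measure against.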
arXiv Detail & Related papers (2022-06-22T16:50:04Z)
- Evaluating the Robustness of Deep Reinforcement Learning for Autonomous Policies in a Multi-agent Urban Driving Environment
We propose a benchmarking framework for comparing deep reinforcement learning algorithms in vision-based autonomous driving.
We run the experiments in vision-only, high-fidelity simulated urban driving environments.
The results indicate that only some of the deep reinforcement learning algorithms perform consistently better across single and multi-agent scenarios.
arXiv Detail & Related papers (2021-12-22T15:14:50Z)
- Decentralized Motion Planning for Multi-Robot Navigation using Deep Reinforcement Learning
This work presents a decentralized motion planning framework for addressing the task of multi-robot navigation using deep reinforcement learning.
The notion of decentralized motion planning with common and shared policy learning was adopted, which allowed robust training and testing of this approach.
arXiv Detail & Related papers (2020-11-11T07:35:21Z)
- SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving
Multi-agent interaction is a fundamental aspect of autonomous driving in the real world.
Despite more than a decade of research and development, the problem of how to interact with diverse road users in diverse scenarios remains largely unsolved.
We develop a dedicated simulation platform called SMARTS that generates diverse and competent driving interactions.
arXiv Detail & Related papers (2020-10-19T18:26:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.