Multi-Agent Car Parking using Reinforcement Learning
- URL: http://arxiv.org/abs/2206.13338v1
- Date: Wed, 22 Jun 2022 16:50:04 GMT
- Title: Multi-Agent Car Parking using Reinforcement Learning
- Authors: Omar Tanner
- Abstract summary: This study applies reinforcement learning to the problem of multi-agent car parking.
We design and implement a flexible car parking environment in the form of a Markov decision process with independent learners.
We obtain models parking up to 7 cars with over a 98.1% success rate, significantly beating existing single-agent models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the industry of autonomous driving grows, so does the potential
interaction of groups of autonomous cars. Combined with the advancement of
Artificial Intelligence and simulation, such groups can be simulated, and
safety-critical models can be learned controlling the cars within. This study
applies reinforcement learning to the problem of multi-agent car parking, where
groups of cars aim to efficiently park themselves, while remaining safe and
rational. Utilising robust tools and machine learning frameworks, we design and
implement a flexible car parking environment in the form of a Markov decision
process with independent learners, exploiting multi-agent communication. We
implement a suite of tools to perform experiments at scale, obtaining models
parking up to 7 cars with over a 98.1% success rate, significantly beating
existing single-agent models. We also obtain several results relating to
competitive and collaborative behaviours exhibited by the cars in our
environment, with varying densities and levels of communication. Notably, we
discover a form of collaboration that cannot arise without competition, and a
'leaky' form of collaboration whereby agents collaborate without sufficient
state. Such work has numerous potential applications in the autonomous driving
and fleet management industries, and provides several useful techniques and
benchmarks for the application of reinforcement learning to multi-agent car
parking.
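
The abstract frames the parking task as a Markov decision process with independent learners that exchange limited information. The sketch below is a minimal, illustrative Python environment in that spirit; it is not the paper's implementation, and every class, parameter, and reward value here (MultiCarParkingEnv, park_radius, the collision penalty, and so on) is an assumption made purely for illustration.

```python
# Minimal sketch of a multi-agent car parking environment framed as an MDP
# with independent learners, loosely following the setup in the abstract.
# All names and reward values are illustrative assumptions, not the paper's.
import numpy as np

class MultiCarParkingEnv:
    """Toy 2D world: each car must reach its own parking spot without colliding."""

    def __init__(self, n_cars=3, world_size=10.0, park_radius=0.5, max_steps=200, seed=0):
        self.n_cars = n_cars
        self.world_size = world_size
        self.park_radius = park_radius
        self.max_steps = max_steps
        self.rng = np.random.default_rng(seed)
        # 5 discrete actions per car: idle, forward, backward, turn left, turn right.
        self.n_actions = 5

    def reset(self):
        self.t = 0
        self.pos = self.rng.uniform(0, self.world_size, size=(self.n_cars, 2))
        self.heading = self.rng.uniform(0, 2 * np.pi, size=self.n_cars)
        self.goals = self.rng.uniform(0, self.world_size, size=(self.n_cars, 2))
        self.parked = np.zeros(self.n_cars, dtype=bool)
        return self._observations()

    def _observations(self):
        # Each agent observes its own pose, its goal, and the other cars' positions
        # (a crude stand-in for an inter-agent communication channel).
        obs = []
        for i in range(self.n_cars):
            others = np.delete(self.pos, i, axis=0).ravel()
            obs.append(np.concatenate([self.pos[i], [self.heading[i]], self.goals[i], others]))
        return obs

    def step(self, actions):
        self.t += 1
        speed, turn = 0.2, 0.3
        for i, a in enumerate(actions):
            if self.parked[i]:
                continue  # parked cars stay put
            if a == 1:    # forward
                self.pos[i] += speed * np.array([np.cos(self.heading[i]), np.sin(self.heading[i])])
            elif a == 2:  # backward
                self.pos[i] -= speed * np.array([np.cos(self.heading[i]), np.sin(self.heading[i])])
            elif a == 3:  # turn left
                self.heading[i] += turn
            elif a == 4:  # turn right
                self.heading[i] -= turn
            self.pos[i] = np.clip(self.pos[i], 0, self.world_size)

        rewards = np.full(self.n_cars, -0.01)  # small time penalty encourages efficient parking
        # Parking bonus when a car first enters its goal radius.
        dist = np.linalg.norm(self.pos - self.goals, axis=1)
        newly_parked = (~self.parked) & (dist < self.park_radius)
        rewards[newly_parked] += 1.0
        self.parked |= newly_parked
        # Collision penalty for any pair of cars that get too close.
        for i in range(self.n_cars):
            for j in range(i + 1, self.n_cars):
                if np.linalg.norm(self.pos[i] - self.pos[j]) < 0.4:
                    rewards[i] -= 0.5
                    rewards[j] -= 0.5
        done = bool(self.parked.all() or self.t >= self.max_steps)
        return self._observations(), rewards, done

# Independent learners would each hold their own policy and learn only from their
# own observations and rewards; a random-policy rollout stands in for that here.
env = MultiCarParkingEnv(n_cars=3)
obs = env.reset()
done = False
while not done:
    actions = [np.random.randint(env.n_actions) for _ in range(env.n_cars)]
    obs, rewards, done = env.step(actions)
print("parked cars:", env.parked.sum(), "of", env.n_cars)
```

Replacing the random action choice with one learner per car (for example independent Q-learning or PPO) recovers the independent-learner setup described in the abstract, since each agent receives its own observation and reward stream.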
Related papers
- WHALES: A Multi-agent Scheduling Dataset for Enhanced Cooperation in Autonomous Driving [54.365702251769456]
We present a dataset with an unprecedented average of 8.4 agents per driving sequence.
In addition to providing the largest number of agents and viewpoints among autonomous driving datasets, WHALES records agent behaviors.
We conduct experiments on agent scheduling task, where the ego agent selects one of multiple candidate agents to cooperate with.
arXiv Detail & Related papers (2024-11-20T14:12:34Z) - Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios are still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z) - AgentsCoDriver: Large Language Model Empowered Collaborative Driving with Lifelong Learning [9.456294912296219]
Current autonomous driving systems exhibit deficiencies in interpretability, generalization, and continual learning capabilities.
We leverage large language models (LLMs) to develop a novel framework, AgentsCoDriver, to enable multiple vehicles to conduct collaborative driving.
arXiv Detail & Related papers (2024-04-09T14:33:16Z) - AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning [54.47116888545878]
AutoAct is an automatic agent learning framework for QA.
It does not rely on large-scale annotated data and synthetic planning trajectories from closed-source models.
arXiv Detail & Related papers (2024-01-10T16:57:24Z) - Multi-Agent Deep Reinforcement Learning for Cooperative and Competitive Autonomous Vehicles using AutoDRIVE Ecosystem [1.1893676124374688]
We introduce AutoDRIVE Ecosystem as an enabler to develop physically accurate and graphically realistic digital twins of Nigel and F1TENTH.
We first investigate an intersection problem using a set of cooperative vehicles (Nigel) that share limited state information with each other in single as well as multi-agent learning settings.
We then investigate an adversarial head-to-head autonomous racing problem using a different set of vehicles (F1TENTH) in a multi-agent learning setting using an individual policy approach.
arXiv Detail & Related papers (2023-09-18T02:43:59Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy efficiency of networks with varying traffic conditions by 15%, using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Evaluating the Robustness of Deep Reinforcement Learning for Autonomous Policies in a Multi-agent Urban Driving Environment [3.8073142980733]
We propose a benchmarking framework for comparing deep reinforcement learning algorithms in vision-based autonomous driving.
We run the experiments in a vision-only, high-fidelity simulated urban driving environment.
The results indicate that only some of the deep reinforcement learning algorithms perform consistently better across single and multi-agent scenarios.
arXiv Detail & Related papers (2021-12-22T15:14:50Z) - SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving [96.50297622371457]
Multi-agent interaction is a fundamental aspect of autonomous driving in the real world.
Despite more than a decade of research and development, the problem of how to interact with diverse road users in diverse scenarios remains largely unsolved.
We develop a dedicated simulation platform called SMARTS that generates diverse and competent driving interactions.
arXiv Detail & Related papers (2020-10-19T18:26:10Z) - Efficient Connected and Automated Driving System with Multi-agent Graph Reinforcement Learning [22.369111982782634]
Connected and automated vehicles (CAVs) have attracted increasing attention in recent years.
We focus on improving the outcomes of the overall transportation system by allowing each automated vehicle to learn to cooperate with the others.
arXiv Detail & Related papers (2020-07-06T14:55:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.