Autonomous Vehicle Patrolling Through Deep Reinforcement Learning:
Learning to Communicate and Cooperate
- URL: http://arxiv.org/abs/2402.10222v1
- Date: Sun, 28 Jan 2024 14:29:30 GMT
- Authors: Chenhao Tong, Maria A. Rodriguez, Richard O. Sinnott
- Abstract summary: Finding an optimal patrolling strategy can be challenging due to unknown environmental factors, such as wind or landscape.
Agents are trained to develop their own communication protocol to cooperate during patrolling where faults can and do occur.
The solution is validated through simulation experiments and is compared with several state-of-the-art patrolling solutions from different perspectives.
- Score: 3.79830302036482
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Autonomous vehicles are suited for continuous area patrolling problems.
Finding an optimal patrolling strategy can be challenging due to unknown
environmental factors, such as wind or landscape; or autonomous vehicles'
constraints, such as limited battery life or hardware failures. Importantly,
patrolling large areas often requires multiple agents to collectively
coordinate their actions. However, an optimal coordination strategy is often
non-trivial to define manually due to the complex nature of patrolling
environments. In this paper, we consider a patrolling problem with
environmental factors, agent limitations, and three typical cooperation
problems -- collision avoidance, congestion avoidance, and patrolling target
negotiation. We propose a multi-agent reinforcement learning solution based on
a reinforced inter-agent learning (RIAL) method. With this approach, agents are
trained to develop their own communication protocol to cooperate during
patrolling where faults can and do occur. The solution is validated through
simulation experiments and is compared with several state-of-the-art patrolling
solutions from different perspectives, including the overall patrol
performance, the collision avoidance performance, the efficiency of battery
recharging strategies, and the overall fault tolerance.
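The RIAL approach described above learns communication jointly with behavior: each agent's Q-function selects both an environment action and a discrete message, and the message it receives from a teammate becomes part of its observation on the next step. The toy patrol line, state encoding, reward, and hyperparameters below are illustrative assumptions for a minimal tabular sketch, not the authors' implementation (which uses deep networks):

```python
import random
from collections import defaultdict

# RIAL-style sketch (assumed toy setup): each agent's Q-table maps
# (own position, message received last step) to a joint choice of
# (move action, message to send). Because messages are selected and
# reinforced like actions, a communication protocol can emerge.

N_CELLS = 5            # hypothetical patrol line of 5 waypoints
ACTIONS = [-1, +1]     # move left / right along the line
MESSAGES = [0, 1]      # 1-bit learned communication channel
PAIRS = [(a, m) for a in ACTIONS for m in MESSAGES]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

Q = [defaultdict(float), defaultdict(float)]   # one table per agent

def choose(agent, state):
    """Epsilon-greedy over joint (action, message) pairs."""
    if random.random() < EPS:
        return random.choice(PAIRS)
    return max(PAIRS, key=lambda am: Q[agent][(state, am)])

def collect(positions, idleness):
    """Reward each agent with the idleness it clears at its cell."""
    rewards = [idleness[p] for p in positions]
    for p in positions:
        idleness[p] = 0
    idleness[:] = [v + 1 for v in idleness]    # unvisited cells grow staler
    return rewards

random.seed(0)
positions, msgs = [0, N_CELLS - 1], [0, 0]
idleness = [0] * N_CELLS
for _ in range(2000):
    states = [(positions[i], msgs[1 - i]) for i in range(2)]
    picks = [choose(i, states[i]) for i in range(2)]
    positions = [min(N_CELLS - 1, max(0, positions[i] + picks[i][0]))
                 for i in range(2)]
    rewards = collect(positions, idleness)
    msgs = [picks[i][1] for i in range(2)]     # messages delivered next step
    for i in range(2):
        nxt = (positions[i], msgs[1 - i])
        best = max(Q[i][(nxt, am)] for am in PAIRS)
        Q[i][(states[i], picks[i])] += ALPHA * (
            rewards[i] + GAMMA * best - Q[i][(states[i], picks[i])])
```

The key design point mirrored here is that the message bit carries no predefined meaning; its value is shaped purely by the reward signal, which is what allows agents to develop their own protocol.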
Related papers
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency performance in complex scenarios is still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z)
- Multi-Agent Reinforcement Learning for Joint Police Patrol and Dispatch
We propose a novel method for jointly optimizing multi-agent patrol and dispatch to learn policies yielding rapid response times.
Our method treats each patroller as an independent Q-learner (agent) with a shared deep Q-network that represents the state-action values.
We demonstrate that this heterogeneous multi-agent reinforcement learning approach is capable of learning policies that optimize for patrol or dispatch alone.
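The setup described above, independent Q-learners backed by one shared deep Q-network, is a form of parameter sharing. A minimal sketch, with a shared table standing in for the shared network and an assumed toy patrol environment (sites, rewards, and hyperparameters are illustrative, not from the paper):

```python
import random
from collections import defaultdict

# Parameter-sharing sketch: every agent reads and writes the SAME
# Q-function (here a table, standing in for a shared deep Q-network),
# but each one acts independently on its own local observation.

N_SITES = 4
ALPHA, GAMMA, EPS = 0.2, 0.95, 0.1
shared_Q = defaultdict(float)            # shared across all agents

def act(obs):
    """Epsilon-greedy choice of the next site to patrol."""
    if random.random() < EPS:
        return random.randrange(N_SITES)
    return max(range(N_SITES), key=lambda a: shared_Q[(obs, a)])

def update(obs, action, reward, next_obs):
    """Standard Q-learning update applied to the shared parameters."""
    best_next = max(shared_Q[(next_obs, a)] for a in range(N_SITES))
    shared_Q[(obs, action)] += ALPHA * (
        reward + GAMMA * best_next - shared_Q[(obs, action)])

random.seed(1)
agents = [0, 2]                          # two patrollers at different sites
idleness = [0] * N_SITES
for _ in range(1000):
    for i, site in enumerate(agents):
        target = act(site)               # each agent decides independently
        reward = idleness[target]        # reward: staleness cleared at target
        idleness[target] = 0
        idleness = [v + 1 for v in idleness]
        update(site, target, reward, target)
        agents[i] = target
```

Sharing one Q-function lets every agent's experience improve a single model, which is what makes the approach scale to heterogeneous patrol-or-dispatch roles without training a separate network per agent.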
arXiv Detail & Related papers (2024-09-03T19:19:57Z)
- RACER: Epistemic Risk-Sensitive RL Enables Fast Driving with Fewer Crashes [57.319845580050924]
We propose a reinforcement learning framework that combines risk-sensitive control with an adaptive action space curriculum.
We show that our algorithm is capable of learning high-speed policies for a real-world off-road driving task.
arXiv Detail & Related papers (2024-05-07T23:32:36Z)
- Multi-Agent Deep Reinforcement Learning for Cooperative and Competitive Autonomous Vehicles using AutoDRIVE Ecosystem [1.1893676124374688]
We introduce AutoDRIVE Ecosystem as an enabler to develop physically accurate and graphically realistic digital twins of Nigel and F1TENTH.
We first investigate an intersection problem using a set of cooperative vehicles (Nigel) that share limited state information with each other in single as well as multi-agent learning settings.
We then investigate an adversarial head-to-head autonomous racing problem using a different set of vehicles (F1TENTH) in a multi-agent learning setting using an individual policy approach.
arXiv Detail & Related papers (2023-09-18T02:43:59Z)
- Robust Driving Policy Learning with Guided Meta Reinforcement Learning [49.860391298275616]
We introduce an efficient method to train diverse driving policies for social vehicles as a single meta-policy.
By randomizing the interaction-based reward functions of social vehicles, we can generate diverse objectives and efficiently train the meta-policy.
We propose a training strategy to enhance the robustness of the ego vehicle's driving policy using the environment where social vehicles are controlled by the learned meta-policy.
arXiv Detail & Related papers (2023-07-19T17:42:36Z)
- Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z)
- Safe Model-Based Multi-Agent Mean-Field Reinforcement Learning [48.667697255912614]
Mean-field reinforcement learning studies the policy of a representative agent interacting with an infinite population of identical agents.
We propose Safe-M$3$-UCRL, the first model-based mean-field reinforcement learning algorithm that attains safe policies even in the case of unknown transitions.
Our algorithm effectively meets the demand in critical areas while ensuring service accessibility in regions with low demand.
arXiv Detail & Related papers (2023-06-29T15:57:07Z)
- An Energy-aware and Fault-tolerant Deep Reinforcement Learning based approach for Multi-agent Patrolling Problems [0.5008597638379226]
We propose an approach based on model-free, deep multi-agent reinforcement learning.
Agents are trained to patrol an environment with various unknown dynamics and factors.
They can automatically recharge themselves to support continuous collective patrolling.
This architecture provides a patrolling system that can tolerate agent failures and allow supplementary agents to be added to replace failed agents or to increase the overall patrol performance.
arXiv Detail & Related papers (2022-12-16T01:38:35Z)
- Real-time Cooperative Vehicle Coordination at Unsignalized Road Intersections [7.860567520771493]
Cooperative coordination at unsignalized road intersections aims to improve driving safety and traffic throughput for connected and automated vehicles.
We formulate the problem as a Markov Decision Process (MDP) and tackle it with a model-free, Twin Delayed Deep Deterministic Policy Gradient (TD3)-based strategy in the deep reinforcement learning framework.
We show that the proposed strategy achieves near-optimal performance in sub-static coordination scenarios and significantly improves control in realistic continuous traffic flow.
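The TD3 strategy mentioned above rests on three mechanisms: twin critics whose minimum forms the Bellman target, noise-smoothed target actions, and delayed actor updates. A minimal sketch of the critic-target computation, using tiny linear critics as a stand-in (all weights and values are illustrative assumptions, not the paper's coordination controller):

```python
import random

# TD3 critic-target sketch (assumed standalone illustration). The target
# uses the MINIMUM of two target critics evaluated at a noise-smoothed
# target action, which curbs the overestimation bias of a single critic.

GAMMA, NOISE_STD, NOISE_CLIP = 0.99, 0.2, 0.5

def clip(x, lo, hi):
    return max(lo, min(hi, x))

def critic(w, state, action):
    """Tiny linear critic: Q(s, a) = w0*s + w1*a + w2."""
    return w[0] * state + w[1] * action + w[2]

def td3_target(r, next_state, target_actor_w, w1_t, w2_t, done):
    # Target policy smoothing: add clipped Gaussian noise to the action.
    noise = clip(random.gauss(0.0, NOISE_STD), -NOISE_CLIP, NOISE_CLIP)
    next_action = clip(target_actor_w * next_state + noise, -1.0, 1.0)
    # Clipped double-Q: take the minimum of the twin target critics.
    q_min = min(critic(w1_t, next_state, next_action),
                critic(w2_t, next_state, next_action))
    return r + (0.0 if done else GAMMA * q_min)

random.seed(0)
y = td3_target(r=1.0, next_state=0.5, target_actor_w=0.8,
               w1_t=[0.3, 0.1, 0.0], w2_t=[0.2, 0.4, 0.1], done=False)
```

Both critics then regress toward the same target `y`, while the actor is updated less frequently ("delayed") and only through one of the critics.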
arXiv Detail & Related papers (2022-05-03T02:56:02Z)
- Coach-assisted Multi-Agent Reinforcement Learning Framework for Unexpected Crashed Agents [120.91291581594773]
We present a formal formulation of a cooperative multi-agent reinforcement learning system with unexpected crashes.
We propose a coach-assisted multi-agent reinforcement learning framework, which introduces a virtual coach agent to adjust the crash rate during training.
To the best of our knowledge, this work is the first to study unexpected crashes in multi-agent systems.
arXiv Detail & Related papers (2022-03-16T08:22:45Z)
- Supervised Permutation Invariant Networks for Solving the CVRP with Bounded Fleet Size [3.5235974685889397]
Learning to solve optimization problems, such as the vehicle routing problem, offers great computational advantages.
We propose a powerful supervised deep learning framework that constructs a complete tour plan from scratch while respecting an apriori fixed number of vehicles.
In combination with an efficient post-processing scheme, our supervised approach is not only much faster and easier to train but also achieves competitive results.
arXiv Detail & Related papers (2022-01-05T10:32:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.