Resolve Highway Conflict in Multi-Autonomous Vehicle Controls with Local State Attention
- URL: http://arxiv.org/abs/2506.11445v1
- Date: Fri, 13 Jun 2025 03:48:54 GMT
- Title: Resolve Highway Conflict in Multi-Autonomous Vehicle Controls with Local State Attention
- Authors: Xuan Duy Ta, Bang Giang Le, Thanh Ha Le, Viet Cuong Ta
- Abstract summary: In mixed-traffic environments, autonomous vehicles must adapt to human-controlled vehicles and other unusual driving situations. We propose a Local State Attention module to assist the input state representation. Our approach is able to prioritize other vehicles' information to manage the merging process.
- Score: 1.1124588036301812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In mixed-traffic environments, autonomous vehicles must adapt to human-controlled vehicles and other unusual driving situations. This setting can be framed as a multi-agent reinforcement learning (MARL) environment with a fully cooperative reward among the autonomous vehicles. While methods such as Multi-agent Proximal Policy Optimization can be effective in training MARL tasks, they often fail to resolve local conflicts between agents and are unable to generalize to stochastic events. In this paper, we propose a Local State Attention module to assist the input state representation. By relying on the self-attention operator, the module is expected to compress the essential information of nearby agents to resolve conflicts in traffic situations. Using a simulated highway merging scenario with a priority vehicle as the unexpected event, our approach is able to prioritize other vehicles' information to manage the merging process. The results demonstrate significant improvements in merging efficiency compared to popular baselines, especially in high-density traffic settings.
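The abstract describes compressing nearby agents' states with a self-attention operator before feeding them to the policy. The following PyTorch sketch is only an illustration of that general idea under assumed dimensions and layer choices; the `LocalStateAttention` class name, sizes, and structure are assumptions and do not reproduce the authors' implementation.

```python
# Minimal sketch (not the paper's code): self-attention over the ego vehicle's
# state and the states of nearby vehicles, returning a fixed-size feature that
# a MAPPO-style actor/critic could consume. All dimensions are illustrative.
import torch
import torch.nn as nn


class LocalStateAttention(nn.Module):
    def __init__(self, obs_dim: int, embed_dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(obs_dim, embed_dim)  # per-vehicle state embedding
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.out = nn.Linear(embed_dim, embed_dim)

    def forward(self, ego_obs: torch.Tensor, neighbor_obs: torch.Tensor) -> torch.Tensor:
        # ego_obs: (batch, obs_dim); neighbor_obs: (batch, n_neighbors, obs_dim)
        tokens = torch.cat([ego_obs.unsqueeze(1), neighbor_obs], dim=1)
        x = self.embed(tokens)
        x, _ = self.attn(x, x, x)   # self-attention across ego + nearby vehicles
        return self.out(x[:, 0])    # ego token summarizes neighbor information


# Example usage with dummy observations (8-dim states, 5 neighbors).
feat = LocalStateAttention(obs_dim=8)(torch.zeros(2, 8), torch.zeros(2, 5, 8))
```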
Related papers
- Multi-residual Mixture of Experts Learning for Cooperative Control in Multi-vehicle Systems [5.5597941107270215]
We introduce Multi-Residual Mixture of Experts Learning (MRMEL) for Lagrangian traffic control. MRMEL augments a suboptimal nominal AV control policy by learning a residual correction. We validate MRMEL using a case study in cooperative eco-driving at signalized intersections in Atlanta, Dallas Fort Worth, and Salt Lake City.
arXiv Detail & Related papers (2025-07-14T00:17:12Z) - Confidence-Regulated Generative Diffusion Models for Reliable AI Agent Migration in Vehicular Metaverses [55.70043755630583]
Vehicular AI agents are endowed with environment perception, decision-making, and action execution capabilities. We propose a reliable vehicular AI agent migration framework, achieving reliable dynamic migration and efficient resource scheduling. We develop a Confidence-regulated Generative Diffusion Model (CGDM) to efficiently generate AI agent migration decisions.
arXiv Detail & Related papers (2025-05-19T05:04:48Z) - SPformer: A Transformer Based DRL Decision Making Method for Connected Automated Vehicles [9.840325772591024]
We propose a CAV decision-making architecture based on transformer and reinforcement learning algorithms.
A learnable policy token is used as the learning medium of the multi-vehicle joint policy.
Our model can make good use of all the state information of vehicles in the traffic scenario.
arXiv Detail & Related papers (2024-09-23T15:16:35Z) - Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios remain unsatisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z) - Agent-Agnostic Centralized Training for Decentralized Multi-Agent Cooperative Driving [17.659812774579756]
We propose an asymmetric actor-critic model that learns decentralized cooperative driving policies for autonomous vehicles.
By employing attention neural networks with masking, our approach efficiently manages real-world traffic dynamics and partial observability.
arXiv Detail & Related papers (2024-03-18T16:13:02Z) - Safe Model-Based Multi-Agent Mean-Field Reinforcement Learning [48.667697255912614]
Mean-field reinforcement learning addresses the policy of a representative agent interacting with an infinite population of identical agents.
We propose Safe-M$^3$-UCRL, the first model-based mean-field reinforcement learning algorithm that attains safe policies even in the case of unknown transitions.
Our algorithm effectively meets the demand in critical areas while ensuring service accessibility in regions with low demand.
arXiv Detail & Related papers (2023-06-29T15:57:07Z) - Towards Robust On-Ramp Merging via Augmented Multimodal Reinforcement Learning [9.48157144651867]
We present a novel approach for robust on-ramp merging of CAVs via augmented and multi-modal reinforcement learning.
Specifically, we formulate the on-ramp merging problem as a Markov decision process (MDP), taking driving safety, driving comfort, and traffic efficiency into account.
To provide reliable merging maneuvers, we simultaneously leverage Basic Safety Messages (BSMs) and surveillance images for multi-modal observation.
arXiv Detail & Related papers (2022-07-21T16:34:57Z) - Learning to Help Emergency Vehicles Arrive Faster: A Cooperative Vehicle-Road Scheduling Approach [24.505687255063986]
Vehicle-centric scheduling approaches recommend optimal paths for emergency vehicles (EVs).
Road-centric scheduling approaches aim to improve the traffic condition and assign a higher priority for EVs to pass an intersection.
We propose LEVID, a cooperative VehIcle-roaD scheduling approach including a real-time route planning module and a collaborative traffic signal control module.
arXiv Detail & Related papers (2022-02-20T10:25:15Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - Flatland Competition 2020: MAPF and MARL for Efficient Train Coordination on a Grid World [49.80905654161763]
The Flatland competition aimed at finding novel approaches to solve the vehicle re-scheduling problem (VRSP).
The VRSP is concerned with scheduling trips in traffic networks and the re-scheduling of vehicles when disruptions occur.
The ever-growing complexity of modern railway networks makes dynamic real-time scheduling of traffic virtually impossible.
arXiv Detail & Related papers (2021-03-30T17:13:29Z) - MetaVIM: Meta Variationally Intrinsic Motivated Reinforcement Learning for Decentralized Traffic Signal Control [54.162449208797334]
Traffic signal control aims to coordinate traffic signals across intersections to improve the traffic efficiency of a district or a city.
Deep reinforcement learning (RL) has been applied to traffic signal control recently and demonstrated promising performance where each traffic signal is regarded as an agent.
We propose a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method to learn the decentralized policy for each intersection that considers neighbor information in a latent way.
arXiv Detail & Related papers (2021-01-04T03:06:08Z)