V2X-Lead: LiDAR-based End-to-End Autonomous Driving with
Vehicle-to-Everything Communication Integration
- URL: http://arxiv.org/abs/2309.15252v1
- Date: Tue, 26 Sep 2023 20:26:03 GMT
- Title: V2X-Lead: LiDAR-based End-to-End Autonomous Driving with
Vehicle-to-Everything Communication Integration
- Authors: Zhiyun Deng, Yanjun Shi, Weiming Shen
- Abstract summary: This paper presents a LiDAR-based end-to-end autonomous driving method with Vehicle-to-Everything (V2X) communication integration.
The proposed method aims to handle imperfect partial observations by fusing the onboard LiDAR sensor and V2X communication data.
- Score: 4.166623313248682
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents a LiDAR-based end-to-end autonomous driving method with
Vehicle-to-Everything (V2X) communication integration, termed V2X-Lead, to
address the challenges of navigating unregulated urban scenarios under
mixed-autonomy traffic conditions. The proposed method aims to handle imperfect
partial observations by fusing the onboard LiDAR sensor and V2X communication
data. A model-free and off-policy deep reinforcement learning (DRL) algorithm
is employed to train the driving agent, which incorporates a carefully designed
reward function and multi-task learning technique to enhance generalization
across diverse driving tasks and scenarios. Experimental results demonstrate
the effectiveness of the proposed approach in improving safety and efficiency
in the task of traversing unsignalized intersections in mixed-autonomy traffic,
and its generalizability to previously unseen scenarios, such as roundabouts.
The integration of V2X communication gives autonomous vehicles (AVs) a
significant data source for perceiving their surroundings beyond onboard
sensors, resulting in a more accurate and comprehensive perception of the
driving environment and safer, more robust driving behavior.
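To make the described pipeline concrete, here is a minimal sketch (an illustration under assumed module names, dimensions, and reward weights, not the authors' published code) of fusing onboard LiDAR features with V2X messages into one observation for an off-policy actor, plus a hypothetical shaped reward balancing safety and efficiency:

```python
# Minimal sketch (assumptions, not the authors' code): LiDAR and V2X features
# are encoded separately, fused by concatenation, and fed to a deterministic
# actor of the kind used by off-policy DRL methods such as TD3.
import torch
import torch.nn as nn

class LidarV2XFusion(nn.Module):
    def __init__(self, lidar_dim=256, v2x_dim=64, hidden=256):
        super().__init__()
        self.lidar_enc = nn.Sequential(nn.Linear(lidar_dim, hidden), nn.ReLU())
        self.v2x_enc = nn.Sequential(nn.Linear(v2x_dim, hidden), nn.ReLU())
        self.fuse = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())

    def forward(self, lidar_feat, v2x_feat):
        # V2X data fills in what the onboard LiDAR cannot see (e.g., occluded
        # vehicles), mitigating imperfect partial observations.
        return self.fuse(torch.cat([self.lidar_enc(lidar_feat),
                                    self.v2x_enc(v2x_feat)], dim=-1))

class Actor(nn.Module):
    def __init__(self, obs_dim=256, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, act_dim), nn.Tanh())

    def forward(self, obs):
        return self.net(obs)  # e.g., [steering, acceleration] in [-1, 1]

def reward(collision: bool, progress_m: float, jerk: float) -> float:
    # Hypothetical shaped reward: heavy collision penalty, progress bonus,
    # small comfort penalty. The paper's actual terms are not specified here.
    return -100.0 * float(collision) + 1.0 * progress_m - 0.1 * abs(jerk)
```

An off-policy learner (e.g., TD3 or SAC) would update such an actor from a replay buffer; the multi-task aspect could be expressed by conditioning the observation or reward on a task identifier.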
Related papers
- V2X-VLM: End-to-End V2X Cooperative Autonomous Driving Through Large Vision-Language Models [13.716889927164383]
This paper introduces V2X-VLM, an innovative E2E vehicle-infrastructure cooperative autonomous driving (VICAD) framework with Vehicle-to-Everything (V2X) systems and large vision-language models (VLMs)
V2X-VLM is designed to enhance situational awareness, decision-making, and ultimate trajectory planning by integrating multimodal data from vehicle-mounted cameras, infrastructure sensors, and textual information.
Evaluations on the DAIR-V2X dataset show that V2X-VLM outperforms state-of-the-art cooperative autonomous driving methods.
arXiv Detail & Related papers (2024-08-17T16:42:13Z)
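As a rough sketch of the multimodal integration described above (an assumed structure, not the V2X-VLM implementation), per-modality features can be projected into a shared token space and concatenated into one sequence for a vision-language model decoder:

```python
# Illustrative sketch (assumptions, not V2X-VLM's code): camera, infrastructure,
# and text features are projected to a common width and concatenated so a VLM
# decoder can attend jointly over all three sources.
import torch
import torch.nn as nn

class MultimodalTokens(nn.Module):
    def __init__(self, cam_dim=768, infra_dim=768, txt_dim=768, d_model=768):
        super().__init__()
        self.cam_proj = nn.Linear(cam_dim, d_model)
        self.infra_proj = nn.Linear(infra_dim, d_model)
        self.txt_proj = nn.Linear(txt_dim, d_model)

    def forward(self, cam_tokens, infra_tokens, txt_tokens):
        # Inputs are (batch, tokens, dim); output is one fused token sequence.
        return torch.cat([self.cam_proj(cam_tokens),
                          self.infra_proj(infra_tokens),
                          self.txt_proj(txt_tokens)], dim=1)
```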
- Unified End-to-End V2X Cooperative Autonomous Driving [21.631099800753795]
UniE2EV2X is a V2X-integrated end-to-end autonomous driving system that consolidates key driving modules within a unified network.
The framework employs a deformable attention-based data fusion strategy, effectively facilitating cooperation between vehicles and infrastructure.
We implement the UniE2EV2X framework on DeepAccident, a challenging simulation dataset designed for V2X cooperative driving.
arXiv Detail & Related papers (2024-05-07T03:01:40Z)
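A simplified single-level, single-head sketch of deformable-attention fusion in the spirit of the summary above (an assumption, not UniE2EV2X's code): each ego query predicts sampling offsets into the infrastructure feature map and aggregates the sampled features with learned weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableFusion(nn.Module):
    def __init__(self, d_model=256, n_points=4):
        super().__init__()
        self.n_points = n_points
        self.offsets = nn.Linear(d_model, 2 * n_points)  # (dx, dy) per point
        self.weights = nn.Linear(d_model, n_points)      # attention weights
        self.out = nn.Linear(d_model, d_model)

    def forward(self, query, ref_xy, infra_feat):
        # query: (B, Q, C); ref_xy: (B, Q, 2) in [-1, 1]; infra_feat: (B, C, H, W)
        B, Q, C = query.shape
        offs = self.offsets(query).view(B, Q, self.n_points, 2)
        grid = (ref_xy.unsqueeze(2) + offs).clamp(-1, 1)  # (B, Q, P, 2)
        sampled = F.grid_sample(infra_feat, grid, align_corners=False)
        # sampled: (B, C, Q, P) -> weighted sum over the P sampled points
        w = self.weights(query).softmax(dim=-1)           # (B, Q, P)
        fused = torch.einsum('bcqp,bqp->bqc', sampled, w)
        return self.out(fused)
```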
- Towards Collaborative Autonomous Driving: Simulation Platform and End-to-End System [35.447617290190294]
Vehicle-to-everything-aided autonomous driving (V2X-AD) has great potential to provide safer driving solutions.
We present V2Xverse, a comprehensive simulation platform for collaborative autonomous driving.
We introduce CoDriving, a novel end-to-end collaborative driving system.
arXiv Detail & Related papers (2024-04-15T06:33:32Z)
- DriveCoT: Integrating Chain-of-Thought Reasoning with End-to-End Driving [81.04174379726251]
This paper collects a comprehensive end-to-end driving dataset named DriveCoT.
It contains sensor data, control decisions, and chain-of-thought labels to indicate the reasoning process.
We propose a baseline model called DriveCoT-Agent, trained on our dataset, to generate chain-of-thought predictions and final decisions.
arXiv Detail & Related papers (2024-03-25T17:59:01Z)
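A hypothetical record layout for such a chain-of-thought driving sample, inferred only from the summary above (field names are assumptions, not the dataset's schema):

```python
# Hypothetical record layout for a chain-of-thought driving sample; the field
# names are assumptions, not the DriveCoT schema.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CoTDrivingSample:
    sensor_frames: List[str]    # paths to camera/LiDAR frames
    reasoning_steps: List[str]  # chain-of-thought labels, e.g.
                                # ["pedestrian ahead", "yield", "brake"]
    control: Tuple[float, float, float]  # final decision (steer, throttle, brake)
```

A baseline like DriveCoT-Agent would then be supervised to emit the reasoning steps as intermediate targets before the final control decision.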
- NLOS Dies Twice: Challenges and Solutions of V2X for Cooperative Perception [7.819255257787961]
We introduce an abstract perception-matrix matching method for quick sensor-fusion matching and a mobility-height hybrid relay determination procedure.
To demonstrate the effectiveness of our solution, we design a new simulation framework that jointly models autonomous driving, sensor fusion, and V2X communication.
arXiv Detail & Related papers (2023-07-13T08:33:02Z)
- Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
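A minimal sketch of the cross-vehicle perception idea (an assumed design, not COOPERNAUT's architecture): each networked vehicle shares a compact feature vector, and the ego vehicle pools neighbor features with its own before making driving decisions.

```python
# Minimal sketch (assumptions, not COOPERNAUT's code) of cross-vehicle
# perception: neighbor features received over V2V links are pooled in a
# permutation-invariant way and fused with the ego feature.
import torch
import torch.nn as nn

class CrossVehicleAggregator(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim), nn.ReLU())

    def forward(self, ego_feat, neighbor_feats):
        # ego_feat: (B, C); neighbor_feats: (B, N, C) from networked vehicles.
        pooled = neighbor_feats.max(dim=1).values  # order-independent pooling
        return self.mlp(torch.cat([ego_feat, pooled], dim=-1))
```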
- Learning Interaction-aware Guidance Policies for Motion Planning in Dense Traffic Scenarios [8.484564880157148]
This paper presents a novel framework for interaction-aware motion planning in dense traffic scenarios.
We propose to learn, via deep Reinforcement Learning (RL), an interaction-aware policy providing global guidance about the cooperativeness of other vehicles.
The learned policy can reason about interactions and guide the local optimization-based planner to proactively merge in dense traffic while remaining safe if other vehicles do not yield.
arXiv Detail & Related papers (2021-07-09T16:43:12Z)
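A conceptual sketch of the interface implied above (assumed, not the paper's code): the learned policy estimates per-vehicle cooperativeness, and the local optimization-based planner scales its interaction costs accordingly.

```python
# Assumed RL-guidance interface: cooperativeness estimates in [0, 1] modulate
# how cautious the local planner is around each surrounding vehicle.
import torch
import torch.nn as nn

class GuidancePolicy(nn.Module):
    def __init__(self, obs_dim=32, n_vehicles=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_vehicles), nn.Sigmoid())

    def forward(self, obs):
        return self.net(obs)  # per-vehicle cooperativeness in [0, 1]

def interaction_cost_weights(cooperativeness, base_weight=10.0):
    # Less cooperative vehicles get a larger safety margin in the planner's
    # cost function; fully cooperative ones allow tighter merges.
    return base_weight * (2.0 - cooperativeness)
```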
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances in deep reinforcement learning to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the obstacle miss detection probability by the AV up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
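A minimal sketch of a continuous-action network of the kind just described (layer sizes are illustrative assumptions): the policy maps the ego observation to an acceleration and a steering angle at every step.

```python
# Sketch of a continuous-action policy head; tanh bounds both actions to
# [-1, 1], which the simulator would rescale to physical ranges.
import torch
import torch.nn as nn

class IntersectionPolicy(nn.Module):
    def __init__(self, obs_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                      nn.Linear(128, 128), nn.ReLU())
        self.head = nn.Linear(128, 2)  # [acceleration, steering]

    def forward(self, obs):
        return torch.tanh(self.head(self.backbone(obs)))
```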
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
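A minimal cross-attention sketch of attention-based image-LiDAR fusion (illustrative only, not the TransFuser implementation):

```python
# Image tokens query the LiDAR tokens; residual connection and layer norm as
# in a standard transformer block.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, img_tokens, lidar_tokens):
        # img_tokens, lidar_tokens: (batch, tokens, d_model)
        fused, _ = self.attn(img_tokens, lidar_tokens, lidar_tokens)
        return self.norm(img_tokens + fused)
```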
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.