Edge-Cloud Collaborative Motion Planning for Autonomous Driving with Large Language Models
- URL: http://arxiv.org/abs/2408.09972v1
- Date: Mon, 19 Aug 2024 13:19:15 GMT
- Title: Edge-Cloud Collaborative Motion Planning for Autonomous Driving with Large Language Models
- Authors: Jiao Chen, Suyan Dai, Fangfang Chen, Zuohong Lv, Jianhua Tang
- Abstract summary: EC-Drive is a novel edge-cloud collaborative autonomous driving system with data drift detection capabilities.
- Score: 3.6503689363051364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Integrating large language models (LLMs) into autonomous driving enhances personalization and adaptability in open-world scenarios. However, traditional edge computing models still face significant challenges in processing complex driving data, particularly regarding real-time performance and system efficiency. To address these challenges, this study introduces EC-Drive, a novel edge-cloud collaborative autonomous driving system with data drift detection capabilities. EC-Drive utilizes drift detection algorithms to selectively upload critical data, including new obstacles and traffic pattern changes, to the cloud for processing by GPT-4, while routine data is efficiently managed by smaller LLMs on edge devices. This approach not only reduces inference latency but also improves system efficiency by optimizing communication resource use. Experimental validation confirms the system's robust processing capabilities and practical applicability in real-world driving conditions, demonstrating the effectiveness of this edge-cloud collaboration framework. Our data and system demonstration will be released at https://sites.google.com/view/ec-drive.
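The core routing idea in the abstract (drift detection decides whether a frame is handled by a small edge LLM or escalated to a cloud model) can be sketched as follows. This is a minimal, hypothetical illustration, not EC-Drive's actual implementation: the paper does not specify its drift detector, so the `DriftRouter` class, the Euclidean drift score against an exponential moving average, and the threshold value are all illustrative assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class DriftRouter:
    """Toy edge-cloud router: frames whose feature vector drifts far from a
    running mean are flagged as critical (new obstacles, changed traffic
    patterns) and routed to the cloud model; routine frames stay on the edge.
    All parameters are illustrative, not from the paper."""
    threshold: float = 3.0          # drift scores above this go to the cloud
    alpha: float = 0.1              # EMA update rate for the running mean
    mean: list = field(default=None)

    def drift_score(self, features):
        if self.mean is None:       # first frame defines the baseline
            self.mean = list(features)
            return 0.0
        # Euclidean distance from the running mean of recent frames
        score = sum((f - m) ** 2 for f, m in zip(features, self.mean)) ** 0.5
        # update the running mean (exponential moving average)
        self.mean = [(1 - self.alpha) * m + self.alpha * f
                     for f, m in zip(features, self.mean)]
        return score

    def route(self, features):
        return "cloud" if self.drift_score(features) > self.threshold else "edge"
```

Under this sketch, a sequence of near-identical feature vectors stays on the edge model, while a sudden outlier (e.g., an unfamiliar obstacle) exceeds the threshold and is uploaded for cloud processing, which is the communication-saving behavior the abstract describes.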
Related papers
- Efficient Driving Behavior Narration and Reasoning on Edge Device Using Large Language Models [16.532357621144342]
Large language models (LLMs) can describe driving scenes and behaviors with a level of accuracy similar to human perception.
We propose a driving behavior narration and reasoning framework that applies LLMs to edge devices.
Our experiments show that LLMs deployed on edge devices can achieve satisfactory response speeds.
arXiv Detail & Related papers (2024-09-30T15:03:55Z) - DiFSD: Ego-Centric Fully Sparse Paradigm with Uncertainty Denoising and Iterative Refinement for Efficient End-to-End Autonomous Driving [55.53171248839489]
We propose an ego-centric fully sparse paradigm, named DiFSD, for end-to-end self-driving.
Specifically, DiFSD mainly consists of sparse perception, hierarchical interaction and iterative motion planner.
Experiments conducted on nuScenes dataset demonstrate the superior planning performance and great efficiency of DiFSD.
arXiv Detail & Related papers (2024-09-15T15:55:24Z) - DriveGPT4: Interpretable End-to-end Autonomous Driving via Large Language Model [84.29836263441136]
This study introduces DriveGPT4, a novel interpretable end-to-end autonomous driving system based on multimodal large language models (MLLMs).
DriveGPT4 facilitates the interpretation of vehicle actions, offers pertinent reasoning, and effectively addresses a diverse range of questions posed by users.
Evaluations conducted on the BDD-X dataset showcase the superior qualitative and quantitative performance of DriveGPT4.
arXiv Detail & Related papers (2023-10-02T17:59:52Z) - Penalty-Based Imitation Learning With Cross Semantics Generation Sensor Fusion for Autonomous Driving [1.2749527861829049]
In this paper, we provide a penalty-based imitation learning approach to integrate multiple modalities of information.
We observe a driving-score improvement of more than 12% compared to the state-of-the-art (SOTA) model, InterFuser.
Our model achieves this performance enhancement while achieving a 7-fold increase in inference speed and reducing the model size by approximately 30%.
arXiv Detail & Related papers (2023-03-21T14:29:52Z) - Generative AI-empowered Simulation for Autonomous Driving in Vehicular Mixed Reality Metaverses [130.15554653948897]
In the vehicular mixed reality (MR) Metaverse, the distance between physical and virtual entities can be overcome.
Large-scale traffic and driving simulation via realistic data collection and fusion from the physical world is difficult and costly.
We propose an autonomous driving architecture, where generative AI is leveraged to synthesize unlimited conditioned traffic and driving data in simulations.
arXiv Detail & Related papers (2023-02-16T16:54:10Z) - YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception [1.6683976936678229]
Multi-tasking learning approaches have achieved promising results in solving panoptic driving perception problems.
This paper proposed an effective and efficient multi-task learning network to simultaneously perform the task of traffic object detection, drivable road area segmentation and lane detection.
Our model achieved the new state-of-the-art (SOTA) performance in terms of accuracy and speed on the challenging BDD100K dataset.
arXiv Detail & Related papers (2022-08-24T11:00:27Z) - Edge YOLO: Real-Time Intelligent Object Detection System Based on Edge-Cloud Cooperation in Autonomous Vehicles [5.295478084029605]
We propose an object detection (OD) system based on edge-cloud cooperation and reconstructive convolutional neural networks.
This system effectively avoids excessive dependence on edge computing power and the uneven distribution of cloud computing resources.
We experimentally demonstrate the reliability and efficiency of Edge YOLO on COCO 2017 and KITTI data sets.
arXiv Detail & Related papers (2022-05-30T09:16:35Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Bayesian Optimization and Deep Learning for Steering Wheel Angle Prediction [58.720142291102135]
This work aims to obtain an accurate model for the prediction of the steering angle in an automated driving system.
BO was able to identify, within a limited number of trials, a model (namely BOST-LSTM) that proved the most accurate when compared to classical end-to-end driving models.
arXiv Detail & Related papers (2021-10-22T15:25:14Z) - An Efficient Deep Learning Approach Using Improved Generative Adversarial Networks for Incomplete Information Completion of Self-driving [2.8504921333436832]
We propose an efficient deep learning approach to repair incomplete vehicle point clouds accurately and efficiently in autonomous driving.
The improved PF-Net can achieve the speedups of over 19x with almost the same accuracy when compared to the original PF-Net.
arXiv Detail & Related papers (2021-09-01T08:06:23Z) - Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.