EI-Drive: A Platform for Cooperative Perception with Realistic Communication Models
- URL: http://arxiv.org/abs/2412.09782v1
- Date: Fri, 13 Dec 2024 01:37:44 GMT
- Title: EI-Drive: A Platform for Cooperative Perception with Realistic Communication Models
- Authors: Hanchu Zhou, Edward Xie, Wei Shao, Dechen Gao, Michelle Dong, Junshan Zhang
- Abstract summary: EI-Drive is an edge-AI based autonomous driving simulation platform.
It integrates advanced cooperative perception with more realistic communication models.
Experiments using EI-Drive demonstrate significant improvements in vehicle safety and performance.
- Abstract: The growing interest in autonomous driving calls for realistic simulation platforms capable of accurately simulating the cooperative perception process in realistic traffic scenarios. Existing studies on cooperative perception often do not account for transmission latency and errors in real-world environments. To address this gap, we introduce EI-Drive, an edge-AI based autonomous driving simulation platform that integrates advanced cooperative perception with more realistic communication models. Built on the CARLA framework, EI-Drive features new modules for cooperative perception while taking into account transmission latency and errors, providing a more realistic platform for evaluating cooperative perception algorithms. In particular, the platform enables vehicles to fuse data from multiple sources, improving situational awareness and safety in complex environments. With its modular design, EI-Drive allows for detailed exploration of sensing, perception, planning, and control in various cooperative driving scenarios. Experiments using EI-Drive demonstrate significant improvements in vehicle safety and performance, particularly in scenarios with complex traffic flow and network conditions. All code and documents are accessible on our GitHub page: https://ucd-dare.github.io/eidrive.github.io/
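The abstract describes evaluating cooperative perception under transmission latency and errors but does not expose the platform's interface. Below is a minimal, hypothetical Python sketch of such a lossy communication model wrapped around shared detections; the names (PerceptionMessage, LossyChannel, latency_s, drop_prob, noise_std) and the naive union fusion step are assumptions for illustration, not the actual EI-Drive API.

```python
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerceptionMessage:
    """Hypothetical detection payload shared between vehicles (illustrative)."""
    sender_id: str
    timestamp: float        # simulation time when the message was created (s)
    detections: List[dict]  # e.g. object centers in a shared world frame

@dataclass
class LossyChannel:
    """Toy communication model: fixed latency plus random drops and noise."""
    latency_s: float = 0.1   # one-way transmission delay (s)
    drop_prob: float = 0.05  # probability a message is lost entirely
    noise_std: float = 0.2   # Gaussian noise added to object centers (m)
    _in_flight: List[tuple] = field(default_factory=list)

    def send(self, msg: PerceptionMessage, now: float) -> None:
        """Queue a message for delayed delivery, or drop it outright."""
        if random.random() < self.drop_prob:
            return  # packet lost; the receiver never sees it
        self._in_flight.append((now + self.latency_s, msg))

    def receive(self, now: float) -> List[PerceptionMessage]:
        """Return messages whose delivery time has passed, with noise applied."""
        ready = [(t, m) for t, m in self._in_flight if t <= now]
        self._in_flight = [(t, m) for t, m in self._in_flight if t > now]
        delivered = []
        for _, msg in ready:
            noisy = [{**d,
                      "x": d["x"] + random.gauss(0, self.noise_std),
                      "y": d["y"] + random.gauss(0, self.noise_std)}
                     for d in msg.detections]
            delivered.append(PerceptionMessage(msg.sender_id, msg.timestamp, noisy))
        return delivered

# Toy usage: the ego vehicle merges its own detections with delayed, noisy
# detections received from a cooperating vehicle (naive union, for illustration).
channel = LossyChannel(latency_s=0.1, drop_prob=0.05)
channel.send(PerceptionMessage("cav_1", 0.0, [{"x": 12.0, "y": 3.5}]), now=0.0)
ego_detections = [{"x": 5.0, "y": -1.0}]
remote = [d for m in channel.receive(now=0.2) for d in m.detections]
print(ego_detections + remote)
```

Running the toy usage lines prints the ego detections merged with the delayed, noise-perturbed remote detections; a real platform would instead feed both streams into its fusion, planning, and control modules.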
Related papers
- Transfer Your Perspective: Controllable 3D Generation from Any Viewpoint in a Driving Scene [56.73568220959019]
Collaborative autonomous driving (CAV) seems like a promising direction, but collecting data for development is non-trivial.
We introduce a novel surrogate: generating realistic perception from different viewpoints in a driving scene.
We present the very first solution, using a combination of simulated collaborative data and real ego-car data.
arXiv Detail & Related papers (2025-02-10T17:07:53Z) - WHALES: A Multi-agent Scheduling Dataset for Enhanced Cooperation in Autonomous Driving [54.365702251769456]
We present a dataset with an unprecedented average of 8.4 agents per driving sequence.
In addition to providing the largest number of agents and viewpoints among autonomous driving datasets, WHALES records agent behaviors.
We conduct experiments on the agent scheduling task, where the ego agent selects one of multiple candidate agents to cooperate with.
arXiv Detail & Related papers (2024-11-20T14:12:34Z) - Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios are still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z) - Learning Driver Models for Automated Vehicles via Knowledge Sharing and Personalization [2.07180164747172]
This paper describes a framework for learning Automated Vehicles (AVs) driver models via knowledge sharing between vehicles and personalization.
It finds several applications across transportation engineering including intelligent transportation systems, traffic management, and vehicle-to-vehicle communication.
arXiv Detail & Related papers (2023-08-31T17:18:15Z) - TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show that data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z) - DeepIPC: Deeply Integrated Perception and Control for an Autonomous Vehicle in Real Environments [7.642646077340124]
We introduce DeepIPC, a novel end-to-end model tailored for autonomous driving.
DeepIPC seamlessly integrates perception and control tasks.
Our evaluation demonstrates DeepIPC's superior performance in terms of drivability and multi-task efficiency.
arXiv Detail & Related papers (2022-07-20T14:20:35Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Cyber Mobility Mirror for Enabling Cooperative Driving Automation: A Co-Simulation Platform [16.542137414609606]
The co-simulation platform can simulate both the real world with a high-fidelity sensor perception system and the cyber world with a real-time 3D reconstruction system.
The mirror-world simulator is responsible for reconstructing 3D objects and their trajectories from the perceived information.
A roadside LiDAR-based real-time vehicle detection and 3D reconstruction system is prototyped as a case study.
arXiv Detail & Related papers (2022-01-24T05:27:20Z) - Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
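The TransFuser entry above describes fusing image and LiDAR representations with attention. The snippet below is a generic, hypothetical sketch of cross-attention fusion between two token streams in PyTorch; it is not the TransFuser architecture or its released code, and the module name and dimensions are made up for illustration.

```python
import torch
import torch.nn as nn

class SimpleSensorFusion(nn.Module):
    """Generic cross-attention fusion of two token streams (illustrative only)."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_tokens: torch.Tensor, lidar_tokens: torch.Tensor) -> torch.Tensor:
        # Image tokens query the LiDAR tokens; the attended output is added residually.
        attended, _ = self.cross_attn(image_tokens, lidar_tokens, lidar_tokens)
        return self.norm(image_tokens + attended)

# Toy usage: batch of 2, 16 image tokens and 32 LiDAR tokens, 64-dim features.
fusion = SimpleSensorFusion()
fused = fusion(torch.randn(2, 16, 64), torch.randn(2, 32, 64))
print(fused.shape)  # torch.Size([2, 16, 64])
```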