Vehicle-road Cooperative Simulation and 3D Visualization System
- URL: http://arxiv.org/abs/2208.07304v1
- Date: Thu, 14 Jul 2022 04:53:54 GMT
- Title: Vehicle-road Cooperative Simulation and 3D Visualization System
- Authors: D. Wu
- Abstract summary: Vehicle-road collaboration technology can overcome the limits of on-board perception and improve traffic safety and efficiency.
It requires rigorous testing and verification methods to ensure the reliability and trustworthiness of the technology.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The safety of single-vehicle autonomous driving technology is limited by
the perception capability of on-board sensors. In contrast,
vehicle-road collaboration technology can overcome those limits and improve
traffic safety and efficiency by expanding the sensing range, improving the
perception accuracy, and reducing the response time. However, such a technology
is still under development; it requires rigorous testing and verification
methods to ensure the reliability and trustworthiness of the technology. In
this thesis, we focus on three major tasks: (1) analyze the functional
characteristics related to the scenarios of vehicle-road cooperation,
highlighting the differences between vehicle-road cooperative systems and
traditional single-vehicle autonomous driving systems; (2) refine and classify
the functional characteristics of vehicle-road cooperative systems; (3) design
and develop a simulation system, and provide a visual interface to facilitate
development and analysis. The efficiency and effectiveness of the proposed
method are verified by experiments.
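To make the third task concrete, the sketch below shows what one step of such a vehicle-road co-simulation loop might look like: detections from the ego vehicle and a roadside unit are merged into a shared world model that a 3D visualization front end could render. All names (Detection, fuse, simulation_step) and numbers are hypothetical illustrations, not interfaces from the thesis.

```python
# Minimal sketch of one vehicle-road co-simulation step (hypothetical API).
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    obj_id: str
    x: float        # position in a shared world frame (metres)
    y: float
    source: str     # "vehicle" or "roadside"


def fuse(vehicle_dets: List[Detection],
         roadside_dets: List[Detection],
         merge_radius: float = 1.0) -> List[Detection]:
    """Merge roadside detections into the vehicle's view.

    A roadside detection is added only if no vehicle detection lies within
    merge_radius of it, so each physical object appears once in the output.
    """
    fused = list(vehicle_dets)
    for r in roadside_dets:
        duplicate = any((r.x - v.x) ** 2 + (r.y - v.y) ** 2 <= merge_radius ** 2
                        for v in vehicle_dets)
        if not duplicate:
            fused.append(r)
    return fused


def simulation_step(t: float) -> List[Detection]:
    # Stand-ins for the simulator's sensor models: the on-board sensor sees a
    # nearby car, while the roadside unit also sees a pedestrian the vehicle
    # cannot perceive (the blind-spot case vehicle-road cooperation targets).
    vehicle_dets = [Detection("car_1", 12.0 + t, 3.5, "vehicle")]
    roadside_dets = [Detection("ped_7", 25.0, -1.2, "roadside"),
                     Detection("car_1", 12.1 + t, 3.4, "roadside")]
    return fuse(vehicle_dets, roadside_dets)


if __name__ == "__main__":
    for step in range(3):
        world = simulation_step(step * 0.1)
        # In the real system this frame would be streamed to the 3D
        # visualization interface; here it is simply printed.
        print(f"t={step * 0.1:.1f}s:", [(d.obj_id, d.source) for d in world])
```

The deduplication-by-radius rule is only one plausible choice; an actual system would more likely associate detections by tracked identity or by matching in a common map frame.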
Related papers
- CoMamba: Real-time Cooperative Perception Unlocked with State Space Models [39.87600356189242]
CoMamba is a novel cooperative 3D detection framework designed to leverage state-space models for real-time onboard vehicle perception.
CoMamba achieves superior performance compared to existing methods while maintaining real-time processing capabilities.
arXiv Detail & Related papers (2024-09-16T20:02:19Z)
- Unified End-to-End V2X Cooperative Autonomous Driving [21.631099800753795]
UniE2EV2X is a V2X-integrated end-to-end autonomous driving system that consolidates key driving modules within a unified network.
The framework employs a deformable attention-based data fusion strategy, effectively facilitating cooperation between vehicles and infrastructure.
We implement the UniE2EV2X framework on the challenging DeepAccident, a simulation dataset designed for V2X cooperative driving.
arXiv Detail & Related papers (2024-05-07T03:01:40Z)
- Collaborative Perception for Connected and Autonomous Driving: Challenges, Possible Solutions and Opportunities [10.749959052350594]
Collaborative perception with connected and autonomous vehicles (CAVs) offers a promising solution for overcoming these limitations.
In this article, we first identify the challenges of collaborative perception, such as data sharing asynchrony, data volume, and pose errors.
We propose a channel-aware collaborative perception framework to address communication efficiency and latency problems.
arXiv Detail & Related papers (2024-01-03T05:33:14Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work presents a study of the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Intelligent Perception System for Vehicle-Road Cooperation [0.0]
Vehicle-road cooperative autonomous driving technology can expand a vehicle's perception range, cover its perception blind spots, and improve perception accuracy.
This project mainly uses lidar-based data fusion schemes to share and combine data from vehicle-side and roadside equipment, enabling the detection and tracking of dynamic targets (a minimal fusion sketch is given after this list).
arXiv Detail & Related papers (2022-08-30T08:10:34Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the obstacle miss detection probability by the AV up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Improving Robustness of Learning-based Autonomous Steering Using Adversarial Images [58.287120077778205]
We introduce a framework for analyzing the robustness of the learning algorithm with respect to varying quality of the image input for autonomous driving.
Using the results of the sensitivity analysis, we propose an algorithm to improve the overall performance of the "learning to steer" task.
arXiv Detail & Related papers (2021-02-26T02:08:07Z)
- Improved YOLOv3 Object Classification in Intelligent Transportation System [29.002873450422083]
An algorithm based on YOLOv3 is proposed to detect and classify vehicles, drivers, and people on the highway.
The model performs well and is robust to road blocking, different poses, and extreme lighting.
arXiv Detail & Related papers (2020-04-08T11:45:13Z)
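The lidar data-fusion scheme summarized in the "Intelligent Perception System for Vehicle-Road Cooperation" entry above relies on expressing roadside and on-board measurements in one shared coordinate frame. The sketch below illustrates that idea with a hand-written 2D rigid transform; the extrinsic values, array shapes, and function names are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch: transform roadside lidar points into the ego-vehicle frame
# and stack them with the vehicle's own points (all values are illustrative).
import numpy as np


def to_vehicle_frame(points: np.ndarray, yaw: float, t_xy: np.ndarray) -> np.ndarray:
    """Rotate and translate roadside points of shape (N, 2) into the vehicle frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return points @ rotation.T + t_xy


if __name__ == "__main__":
    # Points observed by the roadside unit in its own local frame.
    roadside_points = np.array([[5.0, 0.0],
                                [8.0, 2.0]])
    # Assumed extrinsics: the roadside unit sits 20 m ahead of the vehicle
    # and is rotated 90 degrees relative to it.
    fused_cloud = np.vstack([
        np.array([[2.0, 1.0]]),                                   # vehicle's own points
        to_vehicle_frame(roadside_points, np.pi / 2, np.array([20.0, 0.0])),
    ])
    print(fused_cloud)  # combined cloud that a downstream tracker would consume
```

In practice, extrinsic calibration and time synchronization between roadside and on-board sensors are the harder parts of such a fusion step; the coordinate transform itself is straightforward.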
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.