CoPEM: Cooperative Perception Error Models for Autonomous Driving
- URL: http://arxiv.org/abs/2211.11175v2
- Date: Tue, 22 Nov 2022 03:22:19 GMT
- Title: CoPEM: Cooperative Perception Error Models for Autonomous Driving
- Authors: Andrea Piazzoni, Jim Cherian, Roshan Vijay, Lap-Pui Chau, Justin
Dauwels
- Abstract summary: We focus on the occlusion problem in the (onboard) perception of Autonomous Vehicles (AV), which can manifest as misdetection errors on occluded objects.
We introduce the notion of Cooperative Perception Error Models (coPEMs) towards achieving an effective integration of V2X solutions within a virtual test environment.
- Score: 20.60246432605745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce the notion of Cooperative Perception Error Models
(coPEMs) towards achieving an effective and efficient integration of V2X
solutions within a virtual test environment. We focus our analysis on the
occlusion problem in the (onboard) perception of Autonomous Vehicles (AV),
which can manifest as misdetection errors on occluded objects. Cooperative
perception (CP) solutions based on Vehicle-to-Everything (V2X) communications
aim to avoid such issues by cooperatively leveraging additional points of view
of the world around the AV. This approach usually requires many sensors,
mainly cameras and LiDARs, to be deployed simultaneously in the environment
either as part of the road infrastructure or on other traffic vehicles.
However, implementing a large number of sensor models in a virtual simulation
pipeline is often prohibitively computationally expensive. Therefore, in this
paper, we rely on extending Perception Error Models (PEMs) to efficiently
implement such cooperative perception solutions along with the errors and
uncertainties associated with them. We demonstrate the approach by comparing
the safety achievable by an AV challenged with a traffic scenario where
occlusion is the primary cause of a potential collision.
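The core idea can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): instead of simulating raw camera or LiDAR data, each viewpoint is replaced by a perception error model that drops ground-truth objects with a probability that grows with occlusion, and the cooperative (coPEM) output is the union of the detections reported by the onboard and V2X viewpoints. The class names, occlusion measure, and miss-probability formula below are illustrative assumptions.
```python
import random
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Obstacle:
    """Ground-truth object in the simulated scene (illustrative)."""
    obj_id: int
    x: float
    y: float


@dataclass
class PerceptionErrorModel:
    """PEM for one viewpoint (onboard sensor or V2X roadside unit).

    Instead of simulating raw sensor data, it samples misdetections
    directly from the ground truth; occlusion raises the miss probability.
    The formula is an assumption, not the paper's calibrated model.
    """
    base_miss_rate: float = 0.05

    def miss_probability(self, occlusion: float) -> float:
        # occlusion in [0, 1]: 0 = fully visible, 1 = fully occluded
        return min(1.0, self.base_miss_rate + 0.9 * occlusion)

    def perceive(self, ground_truth: List[Obstacle],
                 occlusion: Dict[int, float]) -> List[Obstacle]:
        detections = []
        for obj in ground_truth:
            if random.random() > self.miss_probability(occlusion.get(obj.obj_id, 0.0)):
                detections.append(obj)  # position noise could be sampled here as well
        return detections


def cooperative_perception(pems: List[PerceptionErrorModel],
                           ground_truth: List[Obstacle],
                           occlusion_per_view: List[Dict[int, float]]) -> List[Obstacle]:
    """coPEM-style fusion: report an object if at least one viewpoint detects it."""
    fused: Dict[int, Obstacle] = {}
    for pem, occlusion in zip(pems, occlusion_per_view):
        for obj in pem.perceive(ground_truth, occlusion):
            fused[obj.obj_id] = obj
    return list(fused.values())


if __name__ == "__main__":
    gt = [Obstacle(0, 10.0, 2.0), Obstacle(1, 25.0, -1.5)]
    ego_occlusion = {1: 1.0}   # object 1 is fully hidden from the ego vehicle
    rsu_occlusion = {1: 0.0}   # but fully visible to a roadside unit reachable via V2X
    fused = cooperative_perception([PerceptionErrorModel(), PerceptionErrorModel()],
                                   gt, [ego_occlusion, rsu_occlusion])
    print([o.obj_id for o in fused])
```
In this toy example, an object fully occluded from the ego vehicle is still delivered through the roadside viewpoint, mirroring the occlusion scenario described in the abstract, while avoiding the cost of simulating every additional sensor in full.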
Related papers
- V2X-Assisted Distributed Computing and Control Framework for Connected and Automated Vehicles under Ramp Merging Scenario [36.19449852204522]
This paper investigates distributed computing and control of connected and automated vehicles (CAVs) in a ramp-merging scenario under a cyber-physical system framework.
Unlike existing methods, our method distributes the computational task among CAVs and carries out parallel computation through V2X communication.
arXiv Detail & Related papers (2024-10-30T12:56:49Z)
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency performance in complex scenarios is still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z)
- SiCP: Simultaneous Individual and Cooperative Perception for 3D Object Detection in Connected and Automated Vehicles [18.23919432049492]
Cooperative perception for connected and automated vehicles is traditionally achieved through the fusion of feature maps from two or more vehicles.
However, vehicle resources are often insufficient to concurrently employ two perception models, which impedes the adoption of cooperative perception.
We present Simultaneous Individual and Cooperative Perception (SiCP), a generic framework that supports a wide range of state-of-the-art standalone perception backbones.
arXiv Detail & Related papers (2023-12-08T04:12:26Z)
- Towards Full-scene Domain Generalization in Multi-agent Collaborative Bird's Eye View Segmentation for Connected and Autonomous Driving [54.60458503590669]
We propose a unified domain generalization framework applicable in both training and inference stages of collaborative perception.
We employ an Amplitude Augmentation (AmpAug) method to augment low-frequency image variations, broadening the model's ability to learn.
In the inference phase, we introduce an intra-system domain alignment mechanism to reduce or potentially eliminate the domain discrepancy.
arXiv Detail & Related papers (2023-11-28T12:52:49Z)
- NLOS Dies Twice: Challenges and Solutions of V2X for Cooperative Perception [7.819255257787961]
We introduce an abstract perception matrix matching method for quick sensor fusion matching, along with a mobility-height hybrid relay determination procedure.
To demonstrate the effectiveness of our solution, we design a new simulation framework that jointly considers autonomous driving, sensor fusion, and V2X communication.
arXiv Detail & Related papers (2023-07-13T08:33:02Z)
- Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z)
- Practical Collaborative Perception: A Framework for Asynchronous and Multi-Agent 3D Object Detection [9.967263440745432]
Occlusion is a major challenge for LiDAR-based object detection methods.
State-of-the-art V2X methods resolve the performance-bandwidth tradeoff using a mid-collaboration approach.
We devise a simple yet effective collaboration method that achieves a better bandwidth-performance tradeoff than prior methods.
arXiv Detail & Related papers (2023-07-04T03:49:42Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)