NLOS Dies Twice: Challenges and Solutions of V2X for Cooperative
Perception
- URL: http://arxiv.org/abs/2307.06615v1
- Date: Thu, 13 Jul 2023 08:33:02 GMT
- Title: NLOS Dies Twice: Challenges and Solutions of V2X for Cooperative
Perception
- Authors: Lantao Li and Chen Sun
- Abstract summary: We introduce an abstract perception matrix matching method for quick sensor fusion matching procedures and mobility-height hybrid relay determination procedures.
To demonstrate the effectiveness of our solution, we design a new simulation framework to consider autonomous driving, sensor fusion and V2X communication in general.
- Score: 7.819255257787961
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent multi-lidar sensor fusion between connected vehicles for
cooperative perception has recently been recognized as the best technique for
minimizing the blind zone of individual vehicular perception systems and
further enhancing the overall safety of autonomous driving systems. This
technique relies heavily on the reliability and availability of
vehicle-to-everything (V2X) communication. In practical sensor fusion
application scenarios, the non-line-of-sight (NLOS) issue causes blind zones
for not only the perception system but also V2X direct communication. To
counteract underlying communication issues, we introduce an abstract perception
matrix matching method for quick sensor fusion matching procedures and
mobility-height hybrid relay determination procedures, proactively improving
the efficiency and performance of V2X communication to serve the upper layer
application fusion requirements. To demonstrate the effectiveness of our
solution, we design a new simulation framework to consider autonomous driving,
sensor fusion and V2X communication in general, paving the way for end-to-end
performance evaluation and further solution derivation.
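The abstract names a "mobility-height hybrid relay determination" procedure for routing V2X traffic around NLOS blockage. A minimal sketch of that idea, assuming a simple weighted score (the weights, normalization bounds, and candidate attributes here are illustrative assumptions, not the paper's actual procedure): taller vehicles are favored because their antennas clear obstructions, and slower vehicles are favored because their links stay stable longer.

```python
# Hypothetical sketch of mobility-height hybrid relay selection.
# The scoring rule, weights, and bounds are illustrative assumptions,
# not the procedure described in the paper.

def relay_score(height_m, speed_mps, w_height=0.6, w_mobility=0.4,
                max_height_m=5.0, max_speed_mps=40.0):
    """Score a relay candidate: height relieves NLOS blockage,
    low mobility keeps the V2X link stable."""
    height_term = min(height_m / max_height_m, 1.0)
    stability_term = 1.0 - min(speed_mps / max_speed_mps, 1.0)
    return w_height * height_term + w_mobility * stability_term

def pick_relay(candidates):
    """candidates: list of (vehicle_id, height_m, speed_mps) tuples."""
    return max(candidates, key=lambda c: relay_score(c[1], c[2]))[0]

candidates = [("sedan", 1.5, 30.0), ("truck", 4.0, 20.0), ("bus", 3.2, 8.0)]
print(pick_relay(candidates))  # the tall, slow bus wins here
```

Any real implementation would also fold in channel-quality feedback and the fusion deadlines of the upper-layer perception application.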
Related papers
- Semantic Communication for Cooperative Perception using HARQ [51.148203799109304]
We leverage an importance map to distill critical semantic information, introducing a cooperative perception semantic communication framework.
To counter the challenges posed by time-varying multipath fading, our approach incorporates orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies.
We introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ).
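The HARQ-style scheme above can be sketched as a retransmit-on-failure loop. In this toy version a CRC stands in for the paper's semantic error detection (which operates on learned semantic features, not raw bytes); the channel model and retry budget are illustrative assumptions.

```python
# Illustrative HARQ-style loop; the CRC is a stand-in for the
# semantic error detection described in the paper.
import zlib

def send_with_harq(payload: bytes, channel, max_retx=3):
    """Transmit payload; on a failed integrity check, retransmit,
    mimicking hybrid automatic repeat request (HARQ) feedback."""
    tag = zlib.crc32(payload)
    for attempt in range(1 + max_retx):
        received = channel(payload)        # possibly corrupted copy
        if zlib.crc32(received) == tag:    # error check stub (ACK path)
            return received, attempt
    return None, max_retx                  # retry budget exhausted (NACK)

# Usage: a noiseless channel delivers on the first attempt.
data, attempts = send_with_harq(b"bev-feature-map", channel=lambda p: p)
```

A hybrid scheme would additionally combine the soft information from failed attempts rather than discarding them, which is what distinguishes HARQ from plain ARQ.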
arXiv Detail & Related papers (2024-08-29T08:53:26Z)
- Hybrid-Generative Diffusion Models for Attack-Oriented Twin Migration in Vehicular Metaverses [58.264499654343226]
Vehicle Twins (VTs) are digital twins that provide immersive virtual services for Vehicular Metaverse Users (VMUs)
High mobility of vehicles, uneven deployment of edge servers, and potential security threats pose challenges to achieving efficient and reliable VT migrations.
We propose a secure and reliable VT migration framework in vehicular metaverses.
arXiv Detail & Related papers (2024-07-05T11:11:33Z)
- Unified End-to-End V2X Cooperative Autonomous Driving [21.631099800753795]
UniE2EV2X is a V2X-integrated end-to-end autonomous driving system that consolidates key driving modules within a unified network.
The framework employs a deformable attention-based data fusion strategy, effectively facilitating cooperation between vehicles and infrastructure.
We implement the UniE2EV2X framework on the challenging DeepAccident, a simulation dataset designed for V2X cooperative driving.
arXiv Detail & Related papers (2024-05-07T03:01:40Z)
- Enhancing Track Management Systems with Vehicle-To-Vehicle Enabled Sensor Fusion [0.0]
This paper proposes a novel Vehicle-to-Vehicle (V2V) enabled track management system.
The core innovation lies in the creation of independent priority track lists, consisting of fused detections validated through V2V communication.
The proposed system addresses the falsification of V2X signals, which is combated through an initial vehicle identification process using detections from perception sensors.
arXiv Detail & Related papers (2024-04-26T20:54:44Z)
- Enhanced Cooperative Perception for Autonomous Vehicles Using Imperfect Communication [0.24466725954625887]
We propose a novel approach to realize an optimized Cooperative Perception (CP) under constrained communications.
At the core of our approach is recruiting the best helper from the available list of front vehicles to augment the visual range.
Our results demonstrate the efficacy of our two-step optimization process in improving the overall performance of cooperative perception.
arXiv Detail & Related papers (2024-04-10T15:37:15Z)
- V2X-Lead: LiDAR-based End-to-End Autonomous Driving with Vehicle-to-Everything Communication Integration [4.166623313248682]
This paper presents a LiDAR-based end-to-end autonomous driving method with Vehicle-to-Everything (V2X) communication integration.
The proposed method aims to handle imperfect partial observations by fusing the onboard LiDAR sensor and V2X communication data.
arXiv Detail & Related papers (2023-09-26T20:26:03Z)
- Interruption-Aware Cooperative Perception for V2X Communication-Aided Autonomous Driving [49.42873226593071]
We propose V2X communication INterruption-aware COoperative Perception (V2X-INCOP) for V2X communication-aided autonomous driving.
We use historical cooperation information to recover missing information due to the interruptions and alleviate the impact of the interruption issue.
Experiments on three public cooperative perception datasets demonstrate that the proposed method is effective in alleviating the impacts of communication interruption on cooperative perception.
arXiv Detail & Related papers (2023-04-24T04:59:13Z)
- CoPEM: Cooperative Perception Error Models for Autonomous Driving [20.60246432605745]
We focus on the (onboard) perception of Autonomous Vehicles (AVs), where errors can manifest as misdetections of occluded objects.
We introduce the notion of Cooperative Perception Error Models (coPEMs) towards achieving an effective integration of V2X solutions within a virtual test environment.
arXiv Detail & Related papers (2022-11-21T04:40:27Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer [58.71845618090022]
We build a holistic attention model, namely V2X-ViT, to fuse information across on-road agents.
V2X-ViT consists of alternating layers of heterogeneous multi-agent self-attention and multi-scale window self-attention.
To validate our approach, we create a large-scale V2X perception dataset.
arXiv Detail & Related papers (2022-03-20T20:18:25Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.