Learning for Vehicle-to-Vehicle Cooperative Perception under Lossy
Communication
- URL: http://arxiv.org/abs/2212.08273v2
- Date: Sat, 18 Mar 2023 21:38:20 GMT
- Title: Learning for Vehicle-to-Vehicle Cooperative Perception under Lossy
Communication
- Authors: Jinlong Li, Runsheng Xu, Xinyu Liu, Jin Ma, Zicheng Chi, Jiaqi Ma,
Hongkai Yu
- Abstract summary: We study the side effects (e.g., detection performance drop) caused by lossy communication in V2V Cooperative Perception.
We propose a novel intermediate LC-aware feature fusion method to relieve the side effects of lossy communication.
The proposed method is highly effective for cooperative point-cloud-based 3D object detection under lossy V2V communication.
- Score: 30.100647849646467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has been widely used in the perception (e.g., 3D object
detection) of intelligent vehicle driving. Thanks to Vehicle-to-Vehicle (V2V)
communication, deep-learning-based features from other agents can be shared
with the ego vehicle to improve its perception. This is known as Cooperative
Perception in V2V research, and its algorithms have advanced dramatically in
recent years. However, all existing cooperative perception algorithms assume
ideal V2V communication and do not account for corrupted shared features caused
by Lossy Communication (LC), which is common in complex real-world driving
scenarios. In this paper, we first study the side effects (e.g., detection
performance drop) caused by lossy communication in V2V Cooperative Perception,
and then propose a novel intermediate LC-aware feature fusion method that
relieves these side effects via an LC-aware Repair Network (LCRN) and enhances
the interaction between the ego vehicle and other vehicles via a specially
designed V2V Attention Module (V2VAM), which includes intra-vehicle attention
for the ego vehicle and uncertainty-aware inter-vehicle attention. Extensive
experiments on the public cooperative perception dataset OPV2V (based on the
digital-twin CARLA simulator) demonstrate that the proposed method is highly
effective for cooperative point-cloud-based 3D object detection under lossy V2V
communication.
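The repair-then-fuse pipeline described above can be sketched in a deliberately simplified form. The paper's LCRN and V2VAM are learned neural modules; here, purely as an illustration of the data flow, the repair step is a mean-fill and the attention step is a similarity-based softmax weighting (all function names and the decay of the demo are assumptions, not the paper's implementation):

```python
import numpy as np

def simulate_lossy_channel(feat, drop_prob=0.3, rng=None):
    """Zero out random entries of a feature map to mimic packet loss
    during V2V transmission."""
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(feat.shape) >= drop_prob  # True = entry survived
    return feat * mask, mask

def repair_features(lossy_feat, mask):
    """Toy stand-in for the LC-aware Repair Network (LCRN): fill dropped
    entries with the mean of the surviving ones instead of a learned net."""
    repaired = lossy_feat.copy()
    if mask.any():
        repaired[~mask] = lossy_feat[mask].mean()
    return repaired

def attention_fuse(ego_feat, other_feats):
    """Toy stand-in for the V2V Attention Module (V2VAM): weight each
    incoming feature map by its similarity to the ego feature (a crude
    uncertainty proxy) and take the softmax-weighted sum."""
    feats = [ego_feat] + list(other_feats)
    sims = np.array([-np.mean((ego_feat - f) ** 2) for f in feats])
    w = np.exp(sims - sims.max())
    w /= w.sum()
    return sum(wi * f for wi, f in zip(w, feats))

# Demo: a neighbor's feature map passes through a lossy channel and is
# repaired before being fused with the ego feature.
rng = np.random.default_rng(42)
ego = rng.standard_normal((4, 4))
neighbor = ego + 0.1 * rng.standard_normal((4, 4))
lossy, mask = simulate_lossy_channel(neighbor, drop_prob=0.3, rng=rng)
repaired = repair_features(lossy, mask)
fused = attention_fuse(ego, [repaired])
```

The key design point carried over from the paper is the ordering: corrupted features are repaired first, so the attention fusion operates on plausible inputs rather than on raw dropouts.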
Related papers
- EAIA: An Efficient and Anonymous Identity Authentication Scheme in 5G-V2V [14.315350766867814]
This paper proposes an efficient anonymous V2V identity authentication protocol tailored for scenarios that lack Roadside Unit (RSU) support.
The proposed protocol has been formally assessed using the Scyther tool, demonstrating its capability to withstand major typical malicious attacks.
arXiv Detail & Related papers (2024-06-07T07:26:09Z)
- Enhanced Cooperative Perception for Autonomous Vehicles Using Imperfect Communication [0.24466725954625887]
We propose a novel approach to realize an optimized Cooperative Perception (CP) under constrained communications.
At the core of our approach is recruiting the best helper from the available list of front vehicles to augment the visual range.
Our results demonstrate the efficacy of our two-step optimization process in improving the overall performance of cooperative perception.
arXiv Detail & Related papers (2024-04-10T15:37:15Z)
- Interruption-Aware Cooperative Perception for V2X Communication-Aided Autonomous Driving [49.42873226593071]
We propose V2X communication INterruption-aware COoperative Perception (V2X-INCOP) for V2X communication-aided autonomous driving.
We use historical cooperation information to recover missing information due to the interruptions and alleviate the impact of the interruption issue.
Experiments on three public cooperative perception datasets demonstrate that the proposed method is effective in alleviating the impacts of communication interruption on cooperative perception.
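The recover-from-history idea behind this entry can be sketched in a much simplified form. V2X-INCOP uses learned prediction over historical cooperation information; the class name, raw caching scheme, and decay factor below are illustrative assumptions, not the paper's method:

```python
import numpy as np

class HistoryFallback:
    """Cache the last successfully received feature per sender and, when a
    frame is interrupted, substitute the cached copy scaled by a decay
    factor (a crude stand-in for learned recovery from history)."""
    def __init__(self, decay=0.9):
        self.decay = decay
        self.cache = {}

    def receive(self, sender, feat):
        # feat is None when this sender's transmission was interrupted.
        if feat is not None:
            self.cache[sender] = feat
            return feat
        prev = self.cache.get(sender)
        return None if prev is None else self.decay * prev

fb = HistoryFallback(decay=0.9)
first = fb.receive("cav_1", np.ones((2, 2)))  # normal frame: used as-is
recovered = fb.receive("cav_1", None)         # interrupted: decayed cache copy
```

The decay factor expresses that stale information should count for less, which is the intuition the learned model replaces with prediction.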
arXiv Detail & Related papers (2023-04-24T04:59:13Z)
- HM-ViT: Hetero-modal Vehicle-to-Vehicle Cooperative Perception with Vision Transformer [4.957079586254435]
HM-ViT is the first unified multi-agent hetero-modal cooperative perception framework.
It can collaboratively predict 3D objects for highly dynamic vehicle-to-vehicle (V2V) collaborations with varying numbers and types of agents.
arXiv Detail & Related papers (2023-04-20T20:09:59Z)
- V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle Cooperative Perception [49.7212681947463]
Vehicle-to-Vehicle (V2V) cooperative perception systems have great potential to revolutionize the autonomous driving industry.
We present V2V4Real, the first large-scale real-world multi-modal dataset for V2V perception.
Our dataset covers a driving area of 410 km, comprising 20K LiDAR frames, 40K RGB frames, 240K annotated 3D bounding boxes for 5 classes, and HDMaps.
arXiv Detail & Related papers (2023-03-14T02:49:20Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer [58.71845618090022]
We build a holistic attention model, namely V2X-ViT, to fuse information across on-road agents.
V2X-ViT consists of alternating layers of heterogeneous multi-agent self-attention and multi-scale window self-attention.
To validate our approach, we create a large-scale V2X perception dataset.
arXiv Detail & Related papers (2022-03-20T20:18:25Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction [74.42961817119283]
We use vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles.
By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints.
arXiv Detail & Related papers (2020-08-17T17:58:26Z)
- Cooperative Perception with Deep Reinforcement Learning for Connected Vehicles [7.7003495898919265]
We present a cooperative perception scheme with deep reinforcement learning to enhance the detection accuracy for the surrounding objects.
Our scheme mitigates the network load in vehicular communication networks and enhances the communication reliability.
arXiv Detail & Related papers (2020-04-23T01:44:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.