Interruption-Aware Cooperative Perception for V2X Communication-Aided
Autonomous Driving
- URL: http://arxiv.org/abs/2304.11821v2
- Date: Wed, 28 Feb 2024 05:01:31 GMT
- Title: Interruption-Aware Cooperative Perception for V2X Communication-Aided
Autonomous Driving
- Authors: Shunli Ren, Zixing Lei, Zi Wang, Mehrdad Dianati, Yafei Wang, Siheng
Chen, Wenjun Zhang
- Abstract summary: We propose V2X communication INterruption-aware COoperative Perception (V2X-INCOP) for V2X communication-aided autonomous driving.
We use historical cooperation information to recover missing information due to the interruptions and alleviate the impact of the interruption issue.
Experiments on three public cooperative perception datasets demonstrate that the proposed method is effective in alleviating the impacts of communication interruption on cooperative perception.
- Score: 49.42873226593071
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperative perception can significantly improve the perception performance
of autonomous vehicles beyond the limited perception ability of individual
vehicles by exchanging information with neighbor agents through V2X
communication. However, most existing work assumes ideal communication among
agents, ignoring the significant and common interruption issues caused by
imperfect V2X communication, in which cooperating agents cannot receive
cooperative messages successfully and thus fail to achieve cooperative
perception, leading to safety risks. To fully reap the benefits of cooperative
perception in practice, we propose V2X communication INterruption-aware
COoperative Perception (V2X-INCOP), a cooperative perception system robust to
communication interruption for V2X communication-aided autonomous driving,
which leverages historical cooperation information to recover missing
information due to the interruptions and alleviate the impact of the
interruption issue. To achieve comprehensive recovery, we design a
communication-adaptive multi-scale spatial-temporal prediction model to extract
multi-scale spatial-temporal features based on V2X communication conditions and
capture the most significant information for the prediction of the missing
information. To further improve recovery performance, we adopt a knowledge
distillation framework to give explicit and direct supervision to the
prediction model and a curriculum learning strategy to stabilize the training
of the model. Experiments on three public cooperative perception datasets
demonstrate that the proposed method is effective in alleviating the impacts of
communication interruption on cooperative perception.
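As a rough illustration of the recovery idea described above, the sketch below caches each cooperating agent's recent feature maps and falls back to a prediction from that history when a message is interrupted. All names here are assumptions for illustration, and the linear extrapolation is a stand-in: the paper instead uses a learned communication-adaptive multi-scale spatial-temporal prediction model with knowledge distillation and curriculum learning.

```python
# Hypothetical sketch, NOT the paper's implementation: recover a missing
# cooperative message from cached historical features of the same agent.
import numpy as np

class InterruptionAwareFusion:
    def __init__(self, history_len=3):
        self.history = {}          # agent_id -> list of past feature maps
        self.history_len = history_len

    def _predict_from_history(self, feats):
        # Placeholder predictor: linear extrapolation of the last two
        # cached feature maps (the paper uses a learned multi-scale
        # spatial-temporal model instead).
        if len(feats) >= 2:
            return feats[-1] + (feats[-1] - feats[-2])
        return feats[-1]

    def step(self, ego_feat, messages):
        """messages: agent_id -> feature map, or None if interrupted."""
        fused = [ego_feat]
        for agent_id, feat in messages.items():
            if feat is None:                      # interruption: recover
                if self.history.get(agent_id):
                    feat = self._predict_from_history(self.history[agent_id])
                else:
                    continue                      # no history to recover from
            else:                                 # received: update the cache
                self.history.setdefault(agent_id, []).append(feat)
                self.history[agent_id] = self.history[agent_id][-self.history_len:]
            fused.append(feat)
        return np.mean(fused, axis=0)             # toy mean fusion
```

The mean fusion at the end is likewise a placeholder; intermediate-feature cooperative perception systems typically fuse with a learned attention or warping module.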
Related papers
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency performance in complex scenarios is still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z)
- Semantic Communication for Cooperative Perception using HARQ [51.148203799109304]
We introduce a cooperative perception semantic communication framework that leverages an importance map to distill critical semantic information.
To counter the challenges posed by time-varying multipath fading, our approach incorporates orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies.
We introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ).
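In the spirit of the HARQ mechanism mentioned above, a minimal retransmission loop might look like the following. The channel and error detector here are generic stand-ins, not the paper's semantic components.

```python
def harq_transmit(payload, channel, detect_error, max_retx=3):
    """Resend the payload until the receiver's error check passes
    (a Type-I HARQ loop: discard corrupted copies and retransmit)."""
    for attempt in range(1 + max_retx):
        received = channel(payload)       # one trip over the noisy channel
        if not detect_error(received):    # error check on the received copy
            return received, attempt      # success: deliver + attempt count
    return None, max_retx                 # give up after max retransmissions
```

A fuller HARQ scheme would also combine soft information across attempts (chase combining or incremental redundancy) rather than discarding failed copies, which is part of what distinguishes HARQ from plain ARQ.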
arXiv Detail & Related papers (2024-08-29T08:53:26Z)
- CMP: Cooperative Motion Prediction with Multi-Agent Communication [21.60646440715162]
This paper explores the feasibility and effectiveness of cooperative motion prediction.
Our method, CMP, takes LiDAR signals as model input to enhance tracking and prediction capabilities.
In particular, CMP reduces the average prediction error by 16.4% with fewer missing detections.
arXiv Detail & Related papers (2024-03-26T17:53:27Z)
- SmartCooper: Vehicular Collaborative Perception with Adaptive Fusion and Judger Mechanism [23.824400533836535]
We introduce SmartCooper, an adaptive collaborative perception framework that incorporates communication optimization and a judger mechanism.
Our results demonstrate a substantial reduction in communication costs by 23.10% compared to the non-judger scheme.
arXiv Detail & Related papers (2024-02-01T04:15:39Z)
- Practical Collaborative Perception: A Framework for Asynchronous and Multi-Agent 3D Object Detection [9.967263440745432]
Occlusion is a major challenge for LiDAR-based object detection methods.
State-of-the-art V2X methods resolve the performance-bandwidth tradeoff using a mid-collaboration approach.
We devise a simple yet effective collaboration method that achieves a better bandwidth-performance tradeoff than prior methods.
arXiv Detail & Related papers (2023-07-04T03:49:42Z)
- Learning for Vehicle-to-Vehicle Cooperative Perception under Lossy Communication [30.100647849646467]
We study the side effects (e.g., detection performance drops) caused by lossy communication in V2V cooperative perception.
We propose a novel intermediate LC-aware feature fusion method to relieve the side effect of lossy communication.
The proposed method is quite effective for cooperative point-cloud-based 3D object detection under lossy V2V communication.
arXiv Detail & Related papers (2022-12-16T04:18:47Z)
- Over-communicate no more: Situated RL agents learn concise communication protocols [78.28898217947467]
It is unclear how to design artificial agents that can learn to effectively and efficiently communicate with each other.
Much research on communication emergence uses reinforcement learning (RL).
We explore situated communication in a multi-step task, where the acting agent has to forgo an environmental action to communicate.
We find that while all tested pressures can disincentivise over-communication, situated communication does it most effectively and, unlike the cost on effort, does not negatively impact emergence.
arXiv Detail & Related papers (2022-11-02T21:08:14Z)
- V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer [58.71845618090022]
We build a holistic attention model, namely V2X-ViT, to fuse information across on-road agents.
V2X-ViT consists of alternating layers of heterogeneous multi-agent self-attention and multi-scale window self-attention.
To validate our approach, we create a large-scale V2X perception dataset.
arXiv Detail & Related papers (2022-03-20T20:18:25Z)
- Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.