CMP: Cooperative Motion Prediction with Multi-Agent Communication
- URL: http://arxiv.org/abs/2403.17916v2
- Date: Thu, 03 Oct 2024 17:59:25 GMT
- Title: CMP: Cooperative Motion Prediction with Multi-Agent Communication
- Authors: Zehao Wang, Yuping Wang, Zhuoyuan Wu, Hengbo Ma, Zhaowei Li, Hang Qiu, Jiachen Li
- Abstract summary: This paper explores the feasibility and effectiveness of cooperative motion prediction.
Our method, CMP, takes LiDAR signals as model input to enhance tracking and prediction capabilities.
In particular, CMP reduces the average prediction error by 16.4% with fewer missing detections.
- Abstract: The confluence of the advancement of Autonomous Vehicles (AVs) and the maturity of Vehicle-to-Everything (V2X) communication has enabled the capability of cooperative connected and automated vehicles (CAVs). Building on top of cooperative perception, this paper explores the feasibility and effectiveness of cooperative motion prediction. Our method, CMP, takes LiDAR signals as model input to enhance tracking and prediction capabilities. Unlike previous work that focuses separately on either cooperative perception or motion prediction, our framework, to the best of our knowledge, is the first to address the unified problem where CAVs share information in both perception and prediction modules. Incorporated into our design is the unique capability to tolerate realistic V2X bandwidth limitations and transmission delays, while dealing with bulky perception representations. We also propose a prediction aggregation module, which unifies the predictions obtained by different CAVs and generates the final prediction. Through extensive experiments and ablation studies on the OPV2V and V2V4Real datasets, we demonstrate the effectiveness of our method in cooperative perception, tracking, and motion prediction. In particular, CMP reduces the average prediction error by 16.4% with fewer missing detections compared with the no cooperation setting and by 12.3% compared with the strongest baseline. Our work marks a significant step forward in the cooperative capabilities of CAVs, showcasing enhanced performance in complex scenarios. The code can be found on the project website: https://cmp-cooperative-prediction.github.io/.
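The abstract describes a prediction aggregation module that unifies the trajectories predicted by different CAVs. The paper's actual design is not detailed here; the sketch below assumes a simple confidence-weighted averaging scheme purely for illustration (the function name, shapes, and weighting are all assumptions, not CMP's implementation):

```python
import numpy as np

def aggregate_predictions(predictions, confidences):
    """Fuse per-CAV trajectory predictions for one target agent.

    predictions: list of (T, 2) arrays, one predicted x/y trajectory per CAV.
    confidences: list of scalar detection confidences, one per CAV.
    Returns a single (T, 2) trajectory: the confidence-weighted mean.
    """
    preds = np.stack(predictions)            # (N, T, 2)
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                          # normalize to a convex combination
    return np.tensordot(w, preds, axes=1)    # weighted sum over the CAV axis

# Two CAVs predict slightly different trajectories for the same agent.
p1 = np.array([[0.0, 0.0], [1.0, 0.0]])
p2 = np.array([[0.0, 0.0], [1.0, 2.0]])
fused = aggregate_predictions([p1, p2], [0.75, 0.25])
# fused[1] == [1.0, 0.5]
```

In practice such a module would also have to match detections of the same agent across CAVs before fusing, which is where bandwidth limits and transmission delays come into play.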
Related papers
- Conformal Trajectory Prediction with Multi-View Data Integration in Cooperative Driving [4.628774934971078]
Current research on trajectory prediction primarily relies on data collected by onboard sensors of an ego vehicle.
We introduce V2INet, a novel trajectory prediction framework designed to model multi-view data by extending existing single-view models.
Our results demonstrate superior performance in terms of Final Displacement Error (FDE) and Miss Rate (MR) using a single GPU.
arXiv Detail & Related papers (2024-08-01T08:32:03Z)
- What Makes Good Collaborative Views? Contrastive Mutual Information Maximization for Multi-Agent Perception [52.41695608928129]
Multi-agent perception (MAP) allows autonomous systems to understand complex environments by interpreting data from multiple sources.
This paper investigates intermediate collaboration for MAP with a specific focus on exploring "good" properties of collaborative view.
We propose a novel framework named CMiMC for intermediate collaboration.
arXiv Detail & Related papers (2024-03-15T07:18:55Z)
- SmartCooper: Vehicular Collaborative Perception with Adaptive Fusion and Judger Mechanism [23.824400533836535]
We introduce SmartCooper, an adaptive collaborative perception framework that incorporates communication optimization and a judger mechanism.
Our results demonstrate a substantial reduction in communication costs by 23.10% compared to the non-judger scheme.
arXiv Detail & Related papers (2024-02-01T04:15:39Z)
- Interruption-Aware Cooperative Perception for V2X Communication-Aided Autonomous Driving [49.42873226593071]
We propose V2X communication INterruption-aware COoperative Perception (V2X-INCOP) for V2X communication-aided autonomous driving.
We use historical cooperation information to recover missing information due to the interruptions and alleviate the impact of the interruption issue.
Experiments on three public cooperative perception datasets demonstrate that the proposed method is effective in alleviating the impacts of communication interruption on cooperative perception.
arXiv Detail & Related papers (2023-04-24T04:59:13Z)
- IPCC-TP: Utilizing Incremental Pearson Correlation Coefficient for Joint Multi-Agent Trajectory Prediction [73.25645602768158]
IPCC-TP is a novel relevance-aware module based on Incremental Pearson Correlation Coefficient to improve multi-agent interaction modeling.
Our module can be conveniently embedded into existing multi-agent prediction methods to extend original motion distribution decoders.
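The incremental Pearson correlation coefficient underlying IPCC-TP is a standard streaming statistic. The sketch below shows the generic online computation (a Welford-style update) and is independent of IPCC-TP's actual module design:

```python
class IncrementalPearson:
    """Online Pearson correlation between two scalar streams,
    updated one (x, y) pair at a time without storing the history."""

    def __init__(self):
        self.n = 0
        self.mean_x = self.mean_y = 0.0
        self.m2_x = self.m2_y = self.cov = 0.0  # running second co-moments

    def update(self, x, y):
        self.n += 1
        dx = x - self.mean_x            # deviation from the OLD mean
        self.mean_x += dx / self.n
        dy = y - self.mean_y
        self.mean_y += dy / self.n
        # pair each old-mean deviation with the new-mean residual
        self.m2_x += dx * (x - self.mean_x)
        self.m2_y += dy * (y - self.mean_y)
        self.cov += dx * (y - self.mean_y)

    def correlation(self):
        denom = (self.m2_x * self.m2_y) ** 0.5
        return self.cov / denom if denom > 0 else 0.0

ip = IncrementalPearson()
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
    ip.update(x, y)
# perfectly linear data: ip.correlation() == 1.0
```

Keeping only the running moments is what makes the statistic cheap enough to update as new agent states stream in.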
arXiv Detail & Related papers (2023-03-01T15:16:56Z)
- Adaptive Feature Fusion for Cooperative Perception using LiDAR Point Clouds [0.0]
Cooperative perception allows a Connected Autonomous Vehicle (CAV) to interact with other CAVs in its vicinity.
It can compensate for the limitations of the conventional vehicular perception such as blind spots, low resolution, and weather effects.
We evaluate the performance of cooperative perception for both vehicle and pedestrian detection using the CODD dataset.
arXiv Detail & Related papers (2022-07-30T01:53:05Z)
- Collaborative Uncertainty in Multi-Agent Trajectory Forecasting [35.013892666040846]
We propose a novel concept, collaborative uncertainty (CU), which models the uncertainty resulting from the interaction module.
We build a general CU-based framework that enables a prediction model to learn the future trajectory and its corresponding uncertainty.
We conduct extensive experiments on two synthetic datasets and two public, large-scale trajectory forecasting benchmarks.
arXiv Detail & Related papers (2021-10-26T18:27:22Z)
- Online Multi-Agent Forecasting with Interpretable Collaborative Graph Neural Network [65.11999700562869]
We propose a novel collaborative prediction unit (CoPU), which aggregates predictions from multiple collaborative predictors according to a collaborative graph.
Our methods outperform state-of-the-art works on the three tasks by 28.6%, 17.4% and 21.0% on average.
arXiv Detail & Related papers (2021-07-02T08:20:06Z)
- Injecting Knowledge in Data-driven Vehicle Trajectory Predictors [82.91398970736391]
Vehicle trajectory prediction tasks have been commonly tackled from two perspectives: knowledge-driven or data-driven.
In this paper, we propose to learn a "Realistic Residual Block" (RRB) which effectively connects these two perspectives.
Our proposed method outputs realistic predictions by confining the residual range and taking into account its uncertainty.
arXiv Detail & Related papers (2021-03-08T16:03:09Z)
- Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.