SmartCooper: Vehicular Collaborative Perception with Adaptive Fusion and
Judger Mechanism
- URL: http://arxiv.org/abs/2402.00321v3
- Date: Mon, 4 Mar 2024 05:37:29 GMT
- Title: SmartCooper: Vehicular Collaborative Perception with Adaptive Fusion and
Judger Mechanism
- Authors: Yuang Zhang, Haonan An, Zhengru Fang, Guowen Xu, Yuan Zhou, Xianhao
Chen and Yuguang Fang
- Abstract summary: We introduce SmartCooper, an adaptive collaborative perception framework that incorporates communication optimization and a judger mechanism.
Our results demonstrate a substantial reduction in communication costs by 23.10% compared to the non-judger scheme.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, autonomous driving has garnered significant attention due to
its potential for improving road safety through collaborative perception among
connected and autonomous vehicles (CAVs). However, time-varying channel
conditions in vehicular transmission environments demand dynamic allocation of
communication resources. Moreover, in the context of collaborative perception,
it is important to recognize that not all CAVs contribute valuable data, and
some CAV data even have detrimental effects on collaborative perception. In
this paper, we introduce SmartCooper, an adaptive collaborative perception
framework that incorporates communication optimization and a judger mechanism
to facilitate CAV data fusion. Our approach begins with optimizing the
connectivity of vehicles while considering communication constraints. We then
train a learnable encoder to dynamically adjust the compression ratio based on
the channel state information (CSI). Subsequently, we devise a judger mechanism
to filter the detrimental image data reconstructed by adaptive decoders. We
evaluate the effectiveness of our proposed algorithm on the OpenCOOD platform.
Our results demonstrate a substantial 23.10% reduction in communication costs
compared to the non-judger scheme. Additionally, we achieve a significant
7.15% improvement in the average precision at Intersection over Union
(AP@IoU) compared with state-of-the-art schemes.
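The pipeline the abstract describes (map channel state to a compression ratio, reconstruct each CAV's data, then let a judger filter out detrimental contributions before fusion) can be illustrated with a toy sketch. This is not the paper's method: SmartCooper uses a trained neural encoder/decoder, whereas here the "encoder" is a simple top-k coefficient mask, the CSI-to-ratio mapping is an arbitrary logistic curve, and the judger compares against the ego view; all function names and thresholds are hypothetical.

```python
import numpy as np

def compression_ratio(snr_db: float, lo: float = 0.1, hi: float = 0.9) -> float:
    """Map channel SNR (dB) to a keep-ratio: better channels carry more data."""
    # Illustrative logistic mapping saturating between lo and hi.
    return lo + (hi - lo) / (1.0 + np.exp(-(snr_db - 10.0) / 4.0))

def encode(image: np.ndarray, ratio: float) -> np.ndarray:
    """Toy 'encoder': keep only the largest-magnitude coefficients."""
    flat = image.flatten()
    k = max(1, int(ratio * flat.size))
    idx = np.argsort(np.abs(flat))[-k:]
    coded = np.zeros_like(flat)
    coded[idx] = flat[idx]
    return coded.reshape(image.shape)

def judger(reconstruction: np.ndarray, reference: np.ndarray,
           tol: float = 0.25) -> bool:
    """Accept a CAV's reconstructed data only if its relative error is small."""
    err = np.linalg.norm(reconstruction - reference)
    return err / (np.linalg.norm(reference) + 1e-9) <= tol

def fuse(ego: np.ndarray, cav_data: list, cav_snrs: list) -> np.ndarray:
    """Compress each CAV's data per its CSI, filter with the judger, average."""
    accepted = [ego]
    for img, snr in zip(cav_data, cav_snrs):
        rec = encode(img, compression_ratio(snr))
        if judger(rec, ego):  # ego view as a stand-in reference signal
            accepted.append(rec)
    return np.mean(accepted, axis=0)

rng = np.random.default_rng(0)
ego = rng.normal(size=(8, 8))
good = ego + 0.05 * rng.normal(size=(8, 8))  # helpful CAV: near-duplicate view
bad = rng.normal(size=(8, 8))                # detrimental CAV: unrelated data
fused = fuse(ego, [good, bad], cav_snrs=[20.0, 20.0])
```

In this sketch the judger admits the near-duplicate view and rejects the unrelated one, so only useful data enters the average; the real framework makes this decision on reconstructed images from learned adaptive decoders.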
Related papers
- Channel-Aware Throughput Maximization for Cooperative Data Fusion in CAV [17.703608985129026]
Connected and autonomous vehicles (CAVs) have garnered significant attention due to their extended perception range and enhanced sensing coverage.
To address challenges such as blind spots and obstructions, CAVs employ vehicle-to-vehicle communications to aggregate data from surrounding vehicles.
We propose a channel-aware throughput maximization approach to facilitate CAV data fusion, leveraging a self-supervised autoencoder for adaptive data compression.
arXiv Detail & Related papers (2024-10-06T00:43:46Z) - Semantic Communication for Cooperative Perception using HARQ [51.148203799109304]
We leverage an importance map to distill critical semantic information, introducing a cooperative perception semantic communication framework.
To counter the challenges posed by time-varying multipath fading, our approach incorporates orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies.
We introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ).
arXiv Detail & Related papers (2024-08-29T08:53:26Z) - CMP: Cooperative Motion Prediction with Multi-Agent Communication [21.60646440715162]
This paper explores the feasibility and effectiveness of cooperative motion prediction.
Our method, CMP, takes LiDAR signals as model input to enhance tracking and prediction capabilities.
In particular, CMP reduces the average prediction error by 16.4% with fewer missing detections.
arXiv Detail & Related papers (2024-03-26T17:53:27Z) - Towards Full-scene Domain Generalization in Multi-agent Collaborative
Bird's Eye View Segmentation for Connected and Autonomous Driving [54.60458503590669]
We propose a unified domain generalization framework applicable in both training and inference stages of collaborative perception.
We employ an Amplitude Augmentation (AmpAug) method to augment low-frequency image variations, broadening the model's ability to learn.
In the inference phase, we introduce an intra-system domain alignment mechanism to reduce or potentially eliminate the domain discrepancy.
arXiv Detail & Related papers (2023-11-28T12:52:49Z) - Cooperative Perception with Learning-Based V2V communications [11.772899644895281]
This work analyzes the performance of cooperative perception accounting for communication channel impairments.
A new late fusion scheme is proposed to leverage the robustness of intermediate features.
To reduce the data volume incurred by cooperation, a convolutional neural network-based autoencoder is adopted.
arXiv Detail & Related papers (2023-11-17T05:41:23Z) - Adaptive Communications in Collaborative Perception with Domain Alignment for Autonomous Driving [21.11621380546942]
We propose ACC-DA, a channel-aware collaborative perception framework.
We first design a transmission delay minimization method, which can construct the communication graph.
We then propose an adaptive data reconstruction mechanism, which can dynamically adjust the rate-distortion trade-off to enhance perception efficiency.
arXiv Detail & Related papers (2023-09-15T03:53:35Z) - Interruption-Aware Cooperative Perception for V2X Communication-Aided
Autonomous Driving [49.42873226593071]
We propose V2X communication INterruption-aware COoperative Perception (V2X-INCOP) for V2X communication-aided autonomous driving.
We use historical cooperation information to recover missing information due to the interruptions and alleviate the impact of the interruption issue.
Experiments on three public cooperative perception datasets demonstrate that the proposed method is effective in alleviating the impacts of communication interruption on cooperative perception.
arXiv Detail & Related papers (2023-04-24T04:59:13Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Vehicular Cooperative Perception Through Action Branching and Federated
Reinforcement Learning [101.64598586454571]
A novel framework is proposed to allow reinforcement learning-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs).
A federated RL approach is introduced in order to speed up the training process across vehicles.
Results show that federated RL improves the training process, where better policies can be achieved within the same amount of time compared to the non-federated approach.
arXiv Detail & Related papers (2020-12-07T02:09:15Z) - Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.