Adaptive Feature Fusion for Cooperative Perception using LiDAR Point
Clouds
- URL: http://arxiv.org/abs/2208.00116v1
- Date: Sat, 30 Jul 2022 01:53:05 GMT
- Title: Adaptive Feature Fusion for Cooperative Perception using LiDAR Point
Clouds
- Authors: D. Qiao and F. Zulkernine
- Abstract summary: Cooperative perception allows a Connected Autonomous Vehicle to interact with the other CAVs in the vicinity.
It can compensate for the limitations of conventional vehicular perception, such as blind spots, low resolution, and weather effects.
We evaluate the performance of cooperative perception for both vehicle and pedestrian detection using the CODD dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperative perception allows a Connected Autonomous Vehicle (CAV) to
interact with the other CAVs in the vicinity to enhance perception of
surrounding objects to increase safety and reliability. It can compensate for
the limitations of conventional vehicular perception, such as blind spots,
low resolution, and weather effects. An effective feature fusion model for the
intermediate fusion methods of cooperative perception can improve feature
selection and information aggregation to further enhance the perception
accuracy. We propose adaptive feature fusion models with trainable feature
selection modules. One of our proposed models, Spatial-wise Adaptive feature
Fusion (S-AdaFusion), outperforms all other state-of-the-art models on two
subsets of the OPV2V dataset: the default CARLA towns for vehicle detection and
Culver City for domain adaptation. In addition, previous studies have only
tested cooperative perception for vehicle detection. A pedestrian, however, is
much more likely to be seriously injured in a traffic accident. We evaluate the
performance of cooperative perception for both vehicle and pedestrian detection
using the CODD dataset. Our architecture achieves higher Average Precision (AP)
than other existing models for both vehicle and pedestrian detection on the
CODD dataset. The experiments demonstrate that cooperative perception can also
improve pedestrian detection accuracy compared to the conventional perception
process.
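The adaptive fusion idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact S-AdaFusion module: the paper learns per-location scoring with trainable modules, whereas this sketch uses a fixed channel-max score; the function name and tensor shapes are assumptions.

```python
import numpy as np

def spatial_adaptive_fusion(feats: np.ndarray) -> np.ndarray:
    """Fuse intermediate feature maps from N agents (CAVs) with
    per-spatial-location softmax weights.

    feats: array of shape (N, C, H, W), one feature map per CAV.
    Returns a fused map of shape (C, H, W).
    """
    # Score each agent at each spatial location. The paper learns this
    # scoring; a channel-wise max stands in for it here.
    scores = feats.max(axis=1)                  # (N, H, W)
    # Softmax across the agent axis yields adaptive per-pixel weights.
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    weights = e / e.sum(axis=0, keepdims=True)  # (N, H, W), sums to 1 per pixel
    # Weighted sum of agent features at every spatial location.
    return (feats * weights[:, None, :, :]).sum(axis=0)

# Example: two agents sharing 4-channel 8x8 feature maps.
rng = np.random.default_rng(0)
fused = spatial_adaptive_fusion(rng.standard_normal((2, 4, 8, 8)))
print(fused.shape)  # -> (4, 8, 8)
```

Because the weights sum to one at every location, fusing identical feature maps returns the original map, and a dominant agent at one pixel contributes most of the fused feature there while others fill in elsewhere.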
Related papers
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency performance in complex scenarios is still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z)
- HEAD: A Bandwidth-Efficient Cooperative Perception Approach for Heterogeneous Connected and Autonomous Vehicles [9.10239345027499]
HEAD is a method that fuses features from the classification and regression heads in 3D object detection networks.
Our experiments demonstrate that HEAD is a fusion method that effectively balances communication bandwidth and perception performance.
arXiv Detail & Related papers (2024-08-27T22:05:44Z)
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]
We propose an adaptable personalized car-following framework - MetaFollower.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various car-following (CF) events.
We additionally combine Long Short-Term Memory (LSTM) and Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
arXiv Detail & Related papers (2024-06-23T15:30:40Z)
- Enhanced Cooperative Perception for Autonomous Vehicles Using Imperfect Communication [0.24466725954625887]
We propose a novel approach to realize an optimized Cooperative Perception (CP) under constrained communications.
At the core of our approach is recruiting the best helper from the available list of front vehicles to augment the visual range.
Our results demonstrate the efficacy of our two-step optimization process in improving the overall performance of cooperative perception.
arXiv Detail & Related papers (2024-04-10T15:37:15Z)
- CMP: Cooperative Motion Prediction with Multi-Agent Communication [21.60646440715162]
This paper explores the feasibility and effectiveness of cooperative motion prediction.
Our method, CMP, takes LiDAR signals as model input to enhance tracking and prediction capabilities.
In particular, CMP reduces the average prediction error by 16.4% with fewer missing detections.
arXiv Detail & Related papers (2024-03-26T17:53:27Z)
- SiCP: Simultaneous Individual and Cooperative Perception for 3D Object Detection in Connected and Automated Vehicles [18.23919432049492]
Cooperative perception for connected and automated vehicles is traditionally achieved through the fusion of feature maps from two or more vehicles.
This drawback impedes the adoption of cooperative perception as vehicle resources are often insufficient to concurrently employ two perception models.
We present Simultaneous Individual and Cooperative Perception (SiCP), a generic framework that supports a wide range of the state-of-the-art standalone perception backbones.
arXiv Detail & Related papers (2023-12-08T04:12:26Z)
- DRUformer: Enhancing the driving scene Important object detection with driving relationship self-understanding [50.81809690183755]
Traffic accidents frequently lead to fatal injuries, causing over 50 million deaths as of 2023.
Previous research primarily assessed the importance of individual participants, treating them as independent entities.
We introduce Driving scene Relationship self-Understanding transformer (DRUformer) to enhance the important object detection task.
arXiv Detail & Related papers (2023-11-11T07:26:47Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- DecAug: Augmenting HOI Detection via Decomposition [54.65572599920679]
Current algorithms suffer from insufficient training samples and category imbalance within datasets.
We propose an efficient and effective data augmentation method called DecAug for HOI detection.
Experiments show that our method brings up to 3.3 mAP and 1.6 mAP improvements on the V-COCO and HICO-DET datasets.
arXiv Detail & Related papers (2020-10-02T13:59:05Z)
- CoFF: Cooperative Spatial Feature Fusion for 3D Object Detection on Autonomous Vehicles [20.333191597167847]
Results show that CoFF achieves a significant improvement in terms of both detection precision and effective detection range for autonomous vehicles.
arXiv Detail & Related papers (2020-09-24T22:51:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.