CoFF: Cooperative Spatial Feature Fusion for 3D Object Detection on
Autonomous Vehicles
- URL: http://arxiv.org/abs/2009.11975v1
- Date: Thu, 24 Sep 2020 22:51:50 GMT
- Title: CoFF: Cooperative Spatial Feature Fusion for 3D Object Detection on
Autonomous Vehicles
- Authors: Jingda Guo, Dominic Carrillo, Sihai Tang, Qi Chen, Qing Yang, Song Fu,
Xi Wang, Nannan Wang, Paparao Palacharla
- Abstract summary: CoFF achieves a significant improvement in terms of both detection precision and effective detection range for autonomous vehicles.
- Score: 20.333191597167847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To reduce the amount of transmitted data, feature map based fusion is
recently proposed as a practical solution to cooperative 3D object detection by
autonomous vehicles. The precision of object detection, however, may require
significant improvement, especially for objects that are far away or occluded.
To address this critical issue for the safety of autonomous vehicles and human
beings, we propose a cooperative spatial feature fusion (CoFF) method for
autonomous vehicles to effectively fuse feature maps for achieving a higher 3D
object detection performance. Specifically, CoFF differentiates weights among
feature maps for a more guided fusion, based on how much new semantic
information is provided by the received feature maps. It also enhances the
inconspicuous features corresponding to far/occluded objects to improve their
detection precision. Experimental results show that CoFF achieves a significant
improvement in terms of both detection precision and effective detection range
for autonomous vehicles, compared to previous feature fusion solutions.
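
For intuition, here is a minimal, hypothetical sketch of the two ideas the abstract describes: weighting each received feature map by how much new semantic information it contributes, and amplifying weak activations that may correspond to far or occluded objects. Everything below (the activation-mass heuristic, the thresholds, the element-wise max fusion, all names and shapes) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np


def fuse_feature_maps(ego, received, weak_thresh=0.1, enhance_gain=2.0):
    """Toy CoFF-style fusion sketch (assumed heuristics, not the paper's code).

    ego:      (C, H, W) feature map from the ego vehicle.
    received: iterable of (C, H, W) feature maps from other vehicles,
              assumed already projected onto the ego vehicle's spatial grid.
    """
    fused = ego.astype(float).copy()
    for fmap in received:
        # Heuristic "new semantic information": activation mass that a
        # received map places in cells where the current map is near-empty.
        ego_mass = np.abs(fused).sum(axis=0)             # (H, W)
        recv_mass = np.abs(fmap).sum(axis=0)             # (H, W)
        new_info = (recv_mass * (ego_mass < weak_thresh)).sum()
        weight = new_info / (recv_mass.sum() + 1e-8)     # scalar in [0, 1]
        # Element-wise max fusion, scaled by the per-map weight.
        fused = np.maximum(fused, weight * fmap)
    # Enhance inconspicuous features: amplify cells whose activation is
    # weak but non-zero (a crude proxy for far/occluded objects).
    mass = np.abs(fused).sum(axis=0, keepdims=True)      # (1, H, W)
    weak = (mass > 0) & (mass < weak_thresh)
    return np.where(weak, enhance_gain * fused, fused)


# Minimal usage on random sparse maps (shapes are illustrative).
rng = np.random.default_rng(0)
def make_map():
    return rng.random((64, 200, 176)) * (rng.random((1, 200, 176)) > 0.7)
print(fuse_feature_maps(make_map(), [make_map(), make_map()]).shape)
```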
Related papers
- Kaninfradet3D:A Road-side Camera-LiDAR Fusion 3D Perception Model based on Nonlinear Feature Extraction and Intrinsic Correlation [7.944126168010804]
With the development of AI-assisted driving, numerous methods have emerged for ego-vehicle 3D perception tasks.
With its ability to provide a global view and a broader sensing range, the roadside perspective is worth developing.
This paper proposes Kaninfradet3D, which optimizes the feature extraction and fusion modules.
arXiv Detail & Related papers (2024-10-21T09:28:42Z)
- Cross-Cluster Shifting for Efficient and Effective 3D Object Detection in Autonomous Driving [69.20604395205248]
We present a new 3D point-based detector model, named Shift-SSD, for precise 3D object detection in autonomous driving.
We introduce an intriguing Cross-Cluster Shifting operation to unleash the representation capacity of the point-based detector.
We conduct extensive experiments on the KITTI, Waymo, and nuScenes datasets, and the results demonstrate the state-of-the-art performance of Shift-SSD in both detection accuracy and runtime efficiency.
arXiv Detail & Related papers (2024-03-10T10:36:32Z)
- An Empirical Analysis of Range for 3D Object Detection [70.54345282696138]
We present an empirical analysis of far-field 3D detection using the long-range detection dataset Argoverse 2.0.
Near-field LiDAR measurements are dense and optimally encoded by small voxels, while far-field measurements are sparse and better encoded with large voxels (a toy sketch of such range-dependent voxel sizing appears after this list).
We propose simple techniques to efficiently ensemble models for long-range detection that improve efficiency by 33% and boost accuracy by 3.2% CDS.
arXiv Detail & Related papers (2023-08-08T05:29:26Z)
- DeepFusion: A Robust and Modular 3D Object Detector for Lidars, Cameras and Radars [2.2166853714891057]
We propose a modular multi-modal architecture to fuse lidars, cameras and radars in different combinations for 3D object detection.
Specialized feature extractors take advantage of each modality and can be exchanged easily, making the approach simple and flexible.
Experimental results for lidar-camera, lidar-camera-radar and camera-radar fusion show the flexibility and effectiveness of our fusion approach.
arXiv Detail & Related papers (2022-09-26T14:33:30Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- Adaptive Feature Fusion for Cooperative Perception using LiDAR Point Clouds [0.0]
Cooperative perception allows a connected autonomous vehicle (CAV) to interact with other CAVs in its vicinity.
It can compensate for the limitations of conventional vehicular perception, such as blind spots, low resolution, and weather effects.
We evaluate the performance of cooperative perception for both vehicle and pedestrian detection using the CODD dataset.
arXiv Detail & Related papers (2022-07-30T01:53:05Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Collaborative 3D Object Detection for Automatic Vehicle Systems via Learnable Communications [8.633120731620307]
We propose a novel collaborative 3D object detection framework that consists of three components.
Experimental results and bandwidth usage analysis demonstrate that our approach can save communication and computation costs.
arXiv Detail & Related papers (2022-05-24T07:17:32Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fitting to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Multi-View Adaptive Fusion Network for 3D Object Detection [14.506796247331584]
3D object detection based on LiDAR-camera fusion is an emerging research theme for autonomous driving.
We propose a single-stage multi-view fusion framework that takes LiDAR bird's-eye view, LiDAR range view and camera view images as inputs for 3D object detection.
We design an end-to-end learnable network named MVAF-Net to integrate these two components.
arXiv Detail & Related papers (2020-11-02T00:06:01Z)
- InfoFocus: 3D Object Detection for Autonomous Driving with Dynamic Information Modeling [65.47126868838836]
We propose a novel 3D object detection framework with dynamic information modeling.
Coarse predictions are generated in the first stage via a voxel-based region proposal network.
Experiments are conducted on the large-scale nuScenes 3D detection benchmark.
arXiv Detail & Related papers (2020-07-16T18:27:08Z)
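
Returning to the range-analysis entry above ("An Empirical Analysis of Range for 3D Object Detection"): its observation that dense near-field LiDAR suits small voxels and sparse far-field LiDAR suits large ones can be pictured with a toy range-dependent voxel sizing rule. The function name, size limits, and linear interpolation below are assumptions for illustration, not the paper's method.

```python
import numpy as np


def voxel_size_for_range(xy_range_m, near_size=0.1, far_size=0.4,
                         near_limit=50.0, far_limit=150.0):
    """Illustrative range-dependent voxel sizing (values are assumptions):
    dense near-field points get small voxels, sparse far-field points get
    large ones, with linear interpolation in between."""
    t = np.clip((xy_range_m - near_limit) / (far_limit - near_limit), 0.0, 1.0)
    return near_size + t * (far_size - near_size)


# Example: a point ~120 m away is voxelized at ~0.31 m resolution.
points = np.array([[5.0, 2.0, -1.0], [80.0, 10.0, 0.5], [118.0, -20.0, 1.0]])
ranges = np.linalg.norm(points[:, :2], axis=1)
print(voxel_size_for_range(ranges))  # -> [0.1, ~0.19, ~0.31]
```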