A Generalized Multi-Modal Fusion Detection Framework
- URL: http://arxiv.org/abs/2303.07064v3
- Date: Mon, 22 Jan 2024 13:26:32 GMT
- Title: A Generalized Multi-Modal Fusion Detection Framework
- Authors: Leichao Cui, Xiuxian Li, Min Meng, and Xiaoyu Mo
- Abstract summary: LiDAR point clouds have become the most common data source in autonomous driving.
Due to the sparsity of point clouds, accurate and reliable detection cannot be achieved in certain scenarios.
We propose a generic 3D detection framework called MMFusion, using multi-modal features.
- Score: 7.951044844083936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LiDAR point clouds have become the most common data source in autonomous
driving. However, due to the sparsity of point clouds, accurate and reliable
detection cannot be achieved in certain scenarios. Because images are
complementary to point clouds, they are receiving increasing attention.
Existing fusion methods, although somewhat successful, either perform hard
fusion or do not fuse in a direct manner. In this paper, we propose a generic 3D
detection framework called MMFusion, using multi-modal features. The framework
aims to achieve accurate fusion between LiDAR and images to improve 3D
detection in complex scenes. Our framework consists of two separate streams:
the LiDAR stream and the camera stream, which can be compatible with any
single-modal feature extraction network. The Voxel Local Perception Module in
the LiDAR stream enhances local feature representation, and the Multi-modal
Feature Fusion Module then selectively combines the feature outputs of the two
streams to achieve better fusion. Extensive experiments show that our framework
not only outperforms existing baselines but also improves their detection,
especially for cyclists and pedestrians on the KITTI benchmark, with strong
robustness and generalization capabilities. Hopefully,
our work will stimulate more research into multi-modal fusion for autonomous
driving tasks.
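Since the abstract describes MMFusion only at the module level, the following PyTorch sketch shows one plausible reading of the two-stream design: a residual local-perception block on the LiDAR stream and a gated, channel-wise fusion of the two streams. The class names follow the abstract; everything inside them (2D BEV convolutions, the sigmoid gate, the assumption that camera features are already projected onto the LiDAR grid) is an illustrative assumption rather than the paper's actual design.

```python
# Minimal sketch of a two-stream LiDAR-camera fusion block in the spirit of MMFusion.
# Module names come from the abstract; their internals are illustrative assumptions.
import torch
import torch.nn as nn


class VoxelLocalPerception(nn.Module):
    """Enhances local structure in LiDAR (BEV) features; internals assumed."""

    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, lidar_feat: torch.Tensor) -> torch.Tensor:
        # Residual refinement keeps the original features and adds local context.
        return lidar_feat + self.local(lidar_feat)


class MultiModalFeatureFusion(nn.Module):
    """Selectively combines LiDAR and camera features via a learned gate (assumed)."""

    def __init__(self, lidar_ch: int, cam_ch: int, out_ch: int):
        super().__init__()
        self.lidar_proj = nn.Conv2d(lidar_ch, out_ch, kernel_size=1)
        self.cam_proj = nn.Conv2d(cam_ch, out_ch, kernel_size=1)
        self.gate = nn.Sequential(
            nn.Conv2d(2 * out_ch, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, lidar_feat: torch.Tensor, cam_feat: torch.Tensor) -> torch.Tensor:
        # Assumes camera features were already projected/aligned to the LiDAR grid.
        l = self.lidar_proj(lidar_feat)
        c = self.cam_proj(cam_feat)
        g = self.gate(torch.cat([l, c], dim=1))  # per-location, per-channel weights
        return g * l + (1.0 - g) * c             # soft, selective combination


# Toy usage with BEV-shaped feature maps from two hypothetical backbones.
lidar_feat = torch.randn(2, 64, 128, 128)  # LiDAR stream output (any voxel backbone)
cam_feat = torch.randn(2, 96, 128, 128)    # camera stream output, aligned to BEV
fused = MultiModalFeatureFusion(64, 96, 128)(VoxelLocalPerception(64)(lidar_feat), cam_feat)
print(fused.shape)  # torch.Size([2, 128, 128, 128])
```

Because both streams are projected to a common channel width before the gate, either backbone can be swapped out, which is consistent with the abstract's claim that the framework is compatible with any single-modal feature extraction network.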
Related papers
- MV2DFusion: Leveraging Modality-Specific Object Semantics for Multi-Modal 3D Detection [28.319440934322728]
MV2DFusion is a multi-modal detection framework that integrates the strengths of both modalities through an advanced query-based fusion mechanism.
Our framework's flexibility allows it to integrate with any image and point cloud-based detectors, showcasing its adaptability and potential for future advancements.
arXiv Detail & Related papers (2024-08-12T06:46:05Z)
- MLF-DET: Multi-Level Fusion for Cross-Modal 3D Object Detection [54.52102265418295]
We propose a novel and effective Multi-Level Fusion network, named as MLF-DET, for high-performance cross-modal 3D object DETection.
For the feature-level fusion, we present the Multi-scale Voxel Image fusion (MVI) module, which densely aligns multi-scale voxel features with image features.
For the decision-level fusion, we propose the lightweight Feature-cued Confidence Rectification (FCR) module, which exploits image semantics to rectify the confidence of detection candidates.
arXiv Detail & Related papers (2023-07-18T11:26:02Z)
- Multimodal Industrial Anomaly Detection via Hybrid Fusion [59.16333340582885]
We propose a novel multimodal anomaly detection method with a hybrid fusion scheme.
Our model outperforms the state-of-the-art (SOTA) methods in both detection and segmentation precision on the MVTec 3D-AD dataset.
arXiv Detail & Related papers (2023-03-01T15:48:27Z)
- FusionRCNN: LiDAR-Camera Fusion for Two-stage 3D Object Detection [11.962073589763676]
Existing 3D detectors significantly improve the accuracy by adopting a two-stage paradigm.
The sparsity of point clouds, especially for the points far away, makes it difficult for the LiDAR-only refinement module to accurately recognize and locate objects.
We propose a novel multi-modality two-stage approach named FusionRCNN, which effectively and efficiently fuses point clouds and camera images in the regions of interest (RoI).
FusionRCNN significantly improves the strong SECOND baseline by 6.14% mAP and outperforms competing two-stage approaches.
arXiv Detail & Related papers (2022-09-22T02:07:25Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the robustness of state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Interactive Multi-scale Fusion of 2D and 3D Features for Multi-object Tracking [23.130490413184596]
We introduce PointNet++ to obtain multi-scale deep representations of the point cloud, making them adaptive to our proposed Interactive Feature Fusion.
Our method can achieve good performance on the KITTI benchmark and outperform other approaches without using multi-scale feature fusion.
arXiv Detail & Related papers (2022-03-30T13:00:27Z)
- DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection [83.18142309597984]
Lidars and cameras are critical sensors that provide complementary information for 3D detection in autonomous driving.
We develop a family of generic multi-modal 3D detection models named DeepFusion, which is more accurate than previous methods.
arXiv Detail & Related papers (2022-03-15T18:46:06Z)
- EPNet++: Cascade Bi-directional Fusion for Multi-Modal 3D Object Detection [56.03081616213012]
We propose EPNet++ for multi-modal 3D object detection by introducing a novel Cascade Bi-directional Fusion (CB-Fusion) module.
The CB-Fusion module enriches point features with semantic information from image features in a cascade bi-directional interaction fusion manner.
The experiment results on the KITTI, JRDB and SUN-RGBD datasets demonstrate the superiority of EPNet++ over the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-21T10:48:34Z)
- MBDF-Net: Multi-Branch Deep Fusion Network for 3D Object Detection [17.295359521427073]
We propose a Multi-Branch Deep Fusion Network (MBDF-Net) for 3D object detection.
In the first stage, our multi-branch feature extraction network utilizes Adaptive Attention Fusion modules to produce cross-modal fusion features from single-modal semantic features.
In the second stage, we use a region-of-interest (RoI)-pooled fusion module to generate enhanced local features for refinement.
arXiv Detail & Related papers (2021-08-29T15:40:15Z)
- Spatio-Contextual Deep Network Based Multimodal Pedestrian Detection For Autonomous Driving [1.2599533416395765]
This paper proposes an end-to-end multimodal fusion model for pedestrian detection using RGB and thermal images.
Its novel deep network architecture is capable of exploiting multimodal input efficiently.
The results improved the respective state-of-the-art performance on each of the evaluated datasets.
arXiv Detail & Related papers (2021-05-26T17:50:36Z)
- Multimodal Object Detection via Bayesian Fusion [59.31437166291557]
We study multimodal object detection with RGB and thermal cameras, since the latter can provide much stronger object signatures under poor illumination.
Our key contribution is a non-learned late-fusion method that fuses together bounding box detections from different modalities; see the sketch after this list.
We apply our approach to benchmarks containing both aligned (KAIST) and unaligned (FLIR) multimodal sensor data.
arXiv Detail & Related papers (2021-04-07T04:03:20Z)
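To make the non-learned late-fusion idea from the Bayesian Fusion entry concrete, the sketch below matches per-modality detections by IoU and combines the confidences of matched pairs under a conditional-independence assumption, averaging their boxes. The matching rule, the score formula, and the single-class simplification are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of non-learned late fusion of single-class detections from two
# modalities (e.g., RGB and thermal). Illustrative only; not the paper's method.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)
Det = Tuple[Box, float]                  # (box, confidence for one class)


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def late_fuse(rgb: List[Det], thermal: List[Det], iou_thr: float = 0.5) -> List[Det]:
    """Fuse detections: matched pairs get a fused score, the rest pass through."""
    fused, used = [], set()
    for box_r, p_r in rgb:
        best_j, best_iou = -1, iou_thr
        for j, (box_t, _) in enumerate(thermal):
            if j not in used and iou(box_r, box_t) >= best_iou:
                best_j, best_iou = j, iou(box_r, box_t)
        if best_j >= 0:
            box_t, p_t = thermal[best_j]
            used.add(best_j)
            # Combine confidences as if the modalities were conditionally independent.
            p = (p_r * p_t) / (p_r * p_t + (1.0 - p_r) * (1.0 - p_t))
            box = tuple((r + t) / 2.0 for r, t in zip(box_r, box_t))  # simple average
            fused.append((box, p))
        else:
            fused.append((box_r, p_r))  # RGB-only detection
    fused += [d for j, d in enumerate(thermal) if j not in used]  # thermal-only
    return fused


# Toy example: one overlapping pair plus one thermal-only detection.
print(late_fuse([((0, 0, 10, 10), 0.6)],
                [((1, 1, 11, 11), 0.7), ((50, 50, 60, 60), 0.8)]))
```

Note how two individually uncertain detections (0.6 and 0.7 above) fuse to a higher score (about 0.78), which is the main appeal of this kind of late fusion when one modality degrades, for example under poor illumination.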