Multi-Task Cross-Modality Attention-Fusion for 2D Object Detection
- URL: http://arxiv.org/abs/2307.08339v1
- Date: Mon, 17 Jul 2023 09:26:13 GMT
- Title: Multi-Task Cross-Modality Attention-Fusion for 2D Object Detection
- Authors: Huawei Sun, Hao Feng, Georg Stettinger, Lorenzo Servadei, Robert Wille
- Abstract summary: We propose two new radar preprocessing techniques to better align radar and camera data.
We also introduce a Multi-Task Cross-Modality Attention-Fusion Network (MCAF-Net) for object detection.
Our approach outperforms current state-of-the-art radar-camera fusion-based object detectors on the nuScenes dataset.
- Score: 6.388430091498446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate and robust object detection is critical for autonomous driving.
Image-based detectors face difficulties caused by low visibility in adverse
weather conditions. Thus, radar-camera fusion is of particular interest but
presents challenges in optimally fusing heterogeneous data sources. To approach
this issue, we propose two new radar preprocessing techniques to better align
radar and camera data. In addition, we introduce a Multi-Task Cross-Modality
Attention-Fusion Network (MCAF-Net) for object detection, which includes two
new fusion blocks. These allow for exploiting information from the feature maps
more comprehensively. The proposed algorithm jointly detects objects and
segments free space, which guides the model to focus on the more relevant part
of the scene, namely, the occupied space. Our approach outperforms current
state-of-the-art radar-camera fusion-based object detectors on the nuScenes
dataset and achieves more robust results in adverse weather conditions and
nighttime scenarios.
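The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the general pattern it describes: a cross-modality attention-fusion block that mixes camera and radar feature maps, feeding two task heads (object detection and free-space segmentation). All module names, channel sizes, and head layouts are assumptions for illustration, not the authors' MCAF-Net code.

```python
# Illustrative sketch only; names and shapes are assumed, not taken from MCAF-Net.
import torch
import torch.nn as nn

class CrossModalityAttentionFusion(nn.Module):
    """Fuses camera and radar feature maps with per-pixel attention weights."""

    def __init__(self, cam_channels: int, radar_channels: int, out_channels: int):
        super().__init__()
        # Project both modalities to a shared channel dimension.
        self.cam_proj = nn.Conv2d(cam_channels, out_channels, kernel_size=1)
        self.radar_proj = nn.Conv2d(radar_channels, out_channels, kernel_size=1)
        # Attention weights are predicted from the concatenated features.
        self.attn = nn.Sequential(
            nn.Conv2d(2 * out_channels, out_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, cam_feat: torch.Tensor, radar_feat: torch.Tensor) -> torch.Tensor:
        cam = self.cam_proj(cam_feat)
        radar = self.radar_proj(radar_feat)
        # Per-pixel weights decide how much radar evidence is mixed in.
        w = self.attn(torch.cat([cam, radar], dim=1))
        return cam + w * radar

class MultiTaskHeads(nn.Module):
    """Detection and free-space segmentation heads sharing the fused features."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.det_head = nn.Conv2d(in_channels, num_classes + 4, kernel_size=1)  # class scores + box offsets
        self.seg_head = nn.Conv2d(in_channels, 1, kernel_size=1)                # free-space logits

    def forward(self, fused: torch.Tensor):
        return self.det_head(fused), self.seg_head(fused)

# Usage on dummy feature maps (batch 2, 64/32 channels, 40x80 spatial grid).
fusion = CrossModalityAttentionFusion(cam_channels=64, radar_channels=32, out_channels=64)
heads = MultiTaskHeads(in_channels=64, num_classes=10)
det_out, seg_out = heads(fusion(torch.randn(2, 64, 40, 80), torch.randn(2, 32, 40, 80)))
```

In a multi-task setup of this kind, the free-space segmentation loss acts as an auxiliary signal that pushes the shared features toward the occupied parts of the scene, which is the effect the abstract attributes to jointly detecting objects and segmenting free space.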
Related papers
- MUFASA: Multi-View Fusion and Adaptation Network with Spatial Awareness for Radar Object Detection [3.1212590312985986]
The sparsity of radar point clouds poses challenges in achieving precise object detection.
This paper introduces a comprehensive feature extraction method for radar point clouds.
We achieve state-of-the-art results among radar-based methods on the VoD dataset with an mAP of 50.24%.
arXiv Detail & Related papers (2024-08-01T13:52:18Z)
- ROFusion: Efficient Object Detection using Hybrid Point-wise Radar-Optical Fusion [14.419658061805507]
We propose a hybrid point-wise Radar-Optical fusion approach for object detection in autonomous driving scenarios.
The framework benefits from dense contextual information from both the range-Doppler spectrum and images, which are integrated to learn a multi-modal feature representation.
arXiv Detail & Related papers (2023-07-17T04:25:46Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- MVFusion: Multi-View 3D Object Detection with Semantic-aligned Radar and Camera Fusion [6.639648061168067]
Multi-view radar-camera fused 3D object detection provides a farther detection range and more helpful features for autonomous driving.
Current radar-camera fusion methods offer a variety of designs for fusing radar information with camera data.
We present MVFusion, a novel Multi-View radar-camera Fusion method to achieve semantic-aligned radar features.
arXiv Detail & Related papers (2023-02-21T08:25:50Z)
- Bridging the View Disparity of Radar and Camera Features for Multi-modal Fusion 3D Object Detection [6.959556180268547]
This paper focuses on how to utilize millimeter-wave (MMW) radar and camera sensor fusion for 3D object detection.
A novel method is proposed that realizes feature-level fusion in the bird's-eye view (BEV) for a better feature representation.
arXiv Detail & Related papers (2022-08-25T13:21:37Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space, either by iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls it into a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers [49.689566246504356]
We propose TransFusion, a robust solution to LiDAR-camera fusion with a soft-association mechanism to handle inferior image conditions.
TransFusion achieves state-of-the-art performance on large-scale datasets.
We extend the proposed method to the 3D tracking task and achieve 1st place on the nuScenes tracking leaderboard.
arXiv Detail & Related papers (2022-03-22T07:15:13Z)
- LIF-Seg: LiDAR and Camera Image Fusion for 3D LiDAR Semantic Segmentation [78.74202673902303]
We propose a coarse-to-fine LiDAR and camera fusion-based network (termed LIF-Seg) for LiDAR segmentation.
The proposed method fully utilizes the contextual information of images and introduces a simple but effective early-fusion strategy.
The cooperation of these two components leads to effective camera-LiDAR fusion.
arXiv Detail & Related papers (2021-08-17T08:53:11Z)
- YOdar: Uncertainty-based Sensor Fusion for Vehicle Detection with Camera and Radar Sensors [4.396860522241306]
We present an uncertainty-based method for sensor fusion with camera and radar data.
In our experiments we combine the YOLOv3 object detection network with a customized 1D radar segmentation network.
Our experiments show that this uncertainty-aware fusion approach significantly improves performance compared to single-sensor baselines (a minimal sketch of this style of fusion appears after this list).
arXiv Detail & Related papers (2020-10-07T10:40:02Z)
- Cross-Modality 3D Object Detection [63.29935886648709]
We present a novel two-stage multi-modal fusion network for 3D object detection.
The whole architecture facilitates two-stage fusion.
Our experiments on the KITTI dataset show that the proposed multi-stage fusion helps the network to learn better representations.
arXiv Detail & Related papers (2020-08-16T11:01:20Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
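The YOdar entry above describes uncertainty-aware fusion of a YOLOv3 detector with a 1D radar segmentation network, but its abstract does not spell out the fusion rule. The snippet below is a purely illustrative sketch of one way such uncertainty-weighted late fusion can be written; all names and numbers are assumptions, not taken from that paper.

```python
# Illustrative uncertainty-weighted late fusion (not the YOdar implementation).

def fuse_scores(cam_score: float, radar_occupancy: float, cam_uncertainty: float) -> float:
    """Blend a camera detection score with radar occupancy evidence.

    cam_score       -- detector confidence for a candidate box, in [0, 1]
    radar_occupancy -- fraction of radar cells inside the box marked occupied, in [0, 1]
    cam_uncertainty -- estimated uncertainty of the camera score, in [0, 1];
                       the more uncertain the detector, the more weight the
                       radar evidence receives
    """
    alpha = 1.0 - cam_uncertainty  # trust placed in the camera score
    return alpha * cam_score + (1.0 - alpha) * radar_occupancy

# Example: a low-confidence nighttime detection gains support from radar.
print(fuse_scores(cam_score=0.35, radar_occupancy=0.9, cam_uncertainty=0.6))  # 0.68
```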
This list is automatically generated from the titles and abstracts of the papers on this site.