Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object
Detection
- URL: http://arxiv.org/abs/2306.01438v1
- Date: Fri, 2 Jun 2023 10:57:41 GMT
- Title: Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object
Detection
- Authors: Yingjie Wang, Jiajun Deng, Yao Li, Jinshui Hu, Cong Liu, Yu Zhang,
Jianmin Ji, Wanli Ouyang, Yanyong Zhang
- Abstract summary: We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
- Score: 78.59426158981108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LiDAR and Radar are two complementary sensing approaches in that LiDAR
specializes in capturing an object's 3D shape while Radar provides longer
detection ranges as well as velocity hints. Though seemingly natural, how to
efficiently combine them for improved feature representation is still unclear.
The main challenge arises from the fact that Radar data are extremely sparse and lack
height information. Therefore, directly integrating Radar features into
LiDAR-centric detection networks is not optimal. In this work, we introduce a
bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the
challenges and improve 3D detection for dynamic objects. Technically,
Bi-LRFusion involves two steps: first, it enriches Radar's local features by
learning important details from the LiDAR branch to alleviate the problems
caused by the absence of height information and extreme sparsity; second, it
combines LiDAR features with the enhanced Radar features in a unified
bird's-eye-view representation. We conduct extensive experiments on nuScenes
and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art
performance for detecting dynamic objects. Notably, Radar data in these two
datasets have different formats, which demonstrates the generalizability of our
method. Code is available at https://github.com/JessieW0806/BiLRFusion.
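To make the two-step fusion described in the abstract concrete, below is a minimal PyTorch-style sketch of the bi-directional idea: Radar BEV features are first enriched with details from the LiDAR branch, and the enriched Radar features are then combined with the LiDAR features on a unified bird's-eye-view grid. The module names, layer choices, and tensor shapes here are illustrative assumptions, not the authors' implementation; see the linked repository for the actual Bi-LRFusion architecture.

```python
# Illustrative sketch only: assumed module names and shapes, not the paper's design.
import torch
import torch.nn as nn


class LiDARToRadarEnrichment(nn.Module):
    """Step 1: enrich sparse Radar BEV features with details from the LiDAR branch."""

    def __init__(self, lidar_channels: int, radar_channels: int):
        super().__init__()
        # Simple per-cell fusion; the paper learns richer details (e.g., to compensate
        # for missing height information), which this sketch does not reproduce.
        self.fuse = nn.Sequential(
            nn.Conv2d(lidar_channels + radar_channels, radar_channels, kernel_size=1),
            nn.BatchNorm2d(radar_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, lidar_bev: torch.Tensor, radar_bev: torch.Tensor) -> torch.Tensor:
        # Both feature maps are assumed to live on the same BEV grid (B, C, H, W).
        return self.fuse(torch.cat([lidar_bev, radar_bev], dim=1))


class RadarToLiDARFusion(nn.Module):
    """Step 2: combine LiDAR features with the enhanced Radar features in a unified BEV."""

    def __init__(self, lidar_channels: int, radar_channels: int, out_channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(lidar_channels + radar_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, lidar_bev: torch.Tensor, enriched_radar_bev: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([lidar_bev, enriched_radar_bev], dim=1))


if __name__ == "__main__":
    lidar_bev = torch.randn(2, 128, 180, 180)  # hypothetical LiDAR BEV features
    radar_bev = torch.randn(2, 64, 180, 180)   # hypothetical Radar BEV features
    enriched = LiDARToRadarEnrichment(128, 64)(lidar_bev, radar_bev)
    fused = RadarToLiDARFusion(128, 64, 256)(lidar_bev, enriched)
    print(fused.shape)  # torch.Size([2, 256, 180, 180]) -> fed to a BEV detection head
```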
Related papers
- LEROjD: Lidar Extended Radar-Only Object Detection [0.22870279047711525]
3+1D imaging radar sensors offer a cost-effective, robust alternative to lidar.
Although lidar should not be used during inference, it can aid the training of radar-only object detectors.
We explore two strategies to transfer knowledge from the lidar domain to the radar domain and improve radar-only object detectors.
arXiv Detail & Related papers (2024-09-09T12:43:25Z)
- Better Monocular 3D Detectors with LiDAR from the Past [64.6759926054061]
Camera-based 3D detectors often perform worse than their LiDAR-based counterparts due to inherent depth ambiguities in images.
In this work, we seek to improve monocular 3D detectors by leveraging unlabeled historical LiDAR data.
We show consistent and significant performance gains across multiple state-of-the-art models and datasets, with a negligible additional latency of 9.66 ms and a small storage cost.
arXiv Detail & Related papers (2024-04-08T01:38:43Z)
- RadarDistill: Boosting Radar-based Object Detection Performance via Knowledge Distillation from LiDAR Features [15.686167262542297]
RadarDistill is a knowledge distillation (KD) method that improves the representation of radar data by leveraging LiDAR data.
RadarDistill successfully transfers desirable characteristics of LiDAR features into radar features using three key components.
Our comparative analyses on the nuScenes dataset demonstrate that RadarDistill achieves state-of-the-art (SOTA) performance on the radar-only object detection task (see the distillation sketch after this list).
arXiv Detail & Related papers (2024-03-08T05:15:48Z)
- Robust 3D Object Detection from LiDAR-Radar Point Clouds via Cross-Modal Feature Augmentation [7.364627166256136]
This paper presents a novel framework for robust 3D object detection from point clouds via cross-modal hallucination.
We introduce multiple alignments on both spatial and feature levels to achieve simultaneous backbone refinement and hallucination generation.
Experiments on the View-of-Delft dataset show that our proposed method outperforms the state-of-the-art (SOTA) methods for both radar and LiDAR object detection.
arXiv Detail & Related papers (2023-09-29T15:46:59Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- RaLiBEV: Radar and LiDAR BEV Fusion Learning for Anchor Box Free Object Detection Systems [13.046347364043594]
In autonomous driving, LiDAR and radar are crucial for environmental perception.
Recent state-of-the-art works reveal that the fusion of radar and LiDAR can lead to robust detection in adverse weather.
We propose a bird's-eye view fusion learning-based anchor box-free object detection system.
arXiv Detail & Related papers (2022-11-11T10:24:42Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
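As referenced in the RadarDistill entry above, a common way to transfer LiDAR knowledge into a radar branch is a feature-level distillation loss applied during training only. The sketch below is a minimal, assumed illustration of that idea (a channel adapter plus a masked feature-matching loss); it is not RadarDistill's actual three components, which are described in that paper.

```python
# Illustrative sketch of feature-level LiDAR-to-radar distillation.
# The loss form and module names are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RadarFeatureDistiller(nn.Module):
    """Aligns radar (student) BEV features to LiDAR (teacher) BEV features."""

    def __init__(self, radar_channels: int, lidar_channels: int):
        super().__init__()
        # Project student features into the teacher's channel space before matching.
        self.adapter = nn.Conv2d(radar_channels, lidar_channels, kernel_size=1)

    def forward(self, radar_bev: torch.Tensor, lidar_bev: torch.Tensor) -> torch.Tensor:
        aligned = self.adapter(radar_bev)
        # Only supervise BEV cells where the teacher is active, since radar is far sparser.
        active = (lidar_bev.abs().sum(dim=1, keepdim=True) > 0).float()
        return F.mse_loss(aligned * active, lidar_bev * active)


# Usage: add this loss to the detection loss during training; at inference time the
# LiDAR branch is dropped and only the radar detector runs (as in the LEROjD setting).
distiller = RadarFeatureDistiller(radar_channels=64, lidar_channels=128)
kd_loss = distiller(torch.randn(2, 64, 180, 180), torch.randn(2, 128, 180, 180))
```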