LRFusionPR: A Polar BEV-Based LiDAR-Radar Fusion Network for Place Recognition
- URL: http://arxiv.org/abs/2504.19186v3
- Date: Thu, 02 Oct 2025 09:35:01 GMT
- Title: LRFusionPR: A Polar BEV-Based LiDAR-Radar Fusion Network for Place Recognition
- Authors: Zhangshuo Qi, Luqi Cheng, Zijie Zhou, Guangming Xiong
- Abstract summary: In autonomous driving, place recognition is critical for global localization in GPS-denied environments. We propose LRFusionPR, which improves recognition accuracy and robustness by fusing LiDAR with either single-chip or scanning radar.
- Score: 2.77989705536351
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In autonomous driving, place recognition is critical for global localization in GPS-denied environments. LiDAR and radar-based place recognition methods have garnered increasing attention, as LiDAR provides precise ranging, whereas radar excels in adverse weather resilience. However, effectively leveraging LiDAR-radar fusion for place recognition remains challenging. The noisy and sparse nature of radar data limits its potential to further improve recognition accuracy. In addition, heterogeneous radar configurations complicate the development of unified cross-modality fusion frameworks. In this paper, we propose LRFusionPR, which improves recognition accuracy and robustness by fusing LiDAR with either single-chip or scanning radar. Technically, a dual-branch network is proposed to fuse different modalities within the unified polar coordinate bird's eye view (BEV) representation. In the fusion branch, cross-attention is utilized to perform cross-modality feature interactions. The knowledge from the fusion branch is simultaneously transferred to the distillation branch, which takes radar as its only input to further improve the robustness. Ultimately, the descriptors from both branches are concatenated, producing the multimodal global descriptor for place retrieval. Extensive evaluations on multiple datasets demonstrate that our LRFusionPR achieves accurate place recognition, while maintaining robustness under varying weather conditions. Our open-source code will be released at https://github.com/QiZS-BIT/LRFusionPR.
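The pipeline described in the abstract — rasterizing each sensor into a shared polar BEV grid, exchanging features across modalities via cross-attention, and concatenating the branch descriptors into one global descriptor — can be illustrated with a minimal NumPy sketch. All function names, grid sizes, and the parameter-free attention here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def polar_bev(points, n_rings=8, n_sectors=16, max_range=50.0):
    """Rasterize Cartesian (x, y) points into a polar BEV occupancy grid:
    each cell counts the points landing in one (range ring, azimuth sector)."""
    grid = np.zeros((n_rings, n_sectors))
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.mod(np.arctan2(points[:, 1], points[:, 0]), 2 * np.pi)
    keep = r < max_range
    ring = np.minimum((r[keep] / max_range * n_rings).astype(int), n_rings - 1)
    sector = np.minimum((theta[keep] / (2 * np.pi) * n_sectors).astype(int),
                        n_sectors - 1)
    np.add.at(grid, (ring, sector), 1.0)  # unbuffered accumulation per cell
    return grid

def cross_attention(q, k, v):
    """Single-head scaled dot-product cross-attention (no learned projections)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
lidar_bev = polar_bev(rng.uniform(-50, 50, size=(2000, 2)))
radar_bev = polar_bev(rng.uniform(-50, 50, size=(200, 2)))

# Fusion branch: LiDAR range rings query the radar rings; the distillation
# branch sees radar only. The global descriptor concatenates both branches.
fused = cross_attention(lidar_bev, radar_bev, radar_bev)
descriptor = np.concatenate([fused.ravel(), radar_bev.ravel()])
descriptor /= np.linalg.norm(descriptor)
```

In the actual network the branches are learned encoders and the distillation branch is trained to mimic the fusion branch; this sketch only shows the data flow of the shared polar representation and the descriptor concatenation.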
Related papers
- Position-Aware Self-supervised Representation Learning for Cross-mode Radar Signal Recognition [45.534027689090664]
We propose a position-aware self-supervised framework that leverages pulse-level temporal dynamics without complex augmentations or masking. Using this framework, we evaluate cross-mode radar signal recognition under the long-tailed setting to assess adaptability and generalization. Experimental results demonstrate enhanced discriminability and robustness, highlighting practical applicability in real-world electromagnetic environments.
arXiv Detail & Related papers (2026-01-30T16:37:24Z) - RadarGen: Automotive Radar Point Cloud Generation from Cameras [64.69976771710057]
We present RadarGen, a diffusion model for synthesizing realistic automotive radar point clouds from multi-view camera imagery. RadarGen adapts efficient image-latent diffusion to the radar domain by representing radar measurements in bird's-eye-view form. We show that RadarGen captures characteristic radar measurement distributions and reduces the gap to perception models trained on real data.
arXiv Detail & Related papers (2025-12-19T18:57:33Z) - RaLiFlow: Scene Flow Estimation with 4D Radar and LiDAR Point Clouds [10.906975408529895]
We build a Radar-LiDAR scene flow dataset based on a public real-world automotive dataset. We introduce RaLiFlow, the first joint scene flow learning framework for 4D radar and LiDAR. Our method outperforms existing LiDAR-based and radar-based single-modal methods by a significant margin.
arXiv Detail & Related papers (2025-12-11T07:41:33Z) - RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection [68.99784784185019]
Poor lighting or adverse weather conditions degrade camera performance.
Radar suffers from noise and positional ambiguity.
We propose RobuRCDet, a robust object detection model in BEV.
arXiv Detail & Related papers (2025-02-18T17:17:38Z) - SparseRadNet: Sparse Perception Neural Network on Subsampled Radar Data [5.344444942640663]
Radar raw data often contains excessive noise, whereas radar point clouds retain only limited information.
We introduce an adaptive subsampling method together with a tailored network architecture that exploits the sparsity patterns.
Experiments on the RADIal dataset show that our SparseRadNet exceeds state-of-the-art (SOTA) performance in object detection and achieves close to SOTA accuracy in freespace segmentation.
arXiv Detail & Related papers (2024-06-15T11:26:10Z) - Multistatic-Radar RCS-Signature Recognition of Aerial Vehicles: A Bayesian Fusion Approach [10.908489565519211]
Radar Automated Target Recognition (RATR) for Unmanned Aerial Vehicles (UAVs) involves transmitting Electromagnetic Waves (EMWs) and performing target type recognition on the received radar echo.
Previous studies highlighted the advantages of multistatic radar configurations over monostatic ones in RATR.
We propose a fully Bayesian RATR framework employing Optimal Bayesian Fusion (OBF) to aggregate classification probability vectors from multiple radars.
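As a hedged sketch of this idea (not the paper's Optimal Bayesian Fusion derivation), fusing per-radar class posteriors under a conditional-independence assumption reduces to multiplying each radar's likelihood ratio against the shared prior. Function and variable names are illustrative:

```python
import numpy as np

def bayes_fuse(posteriors, prior):
    """Fuse per-radar class posteriors assuming conditionally independent
    observations: p(c | z_1..z_n) is proportional to
    prior(c) * prod_i [ p(c | z_i) / prior(c) ]."""
    posteriors = np.asarray(posteriors, dtype=float)
    prior = np.asarray(prior, dtype=float)
    # Work in log space to avoid underflow with many radars.
    log_fused = np.log(prior) + (np.log(posteriors) - np.log(prior)).sum(axis=0)
    fused = np.exp(log_fused - log_fused.max())
    return fused / fused.sum()
```

With a uniform prior, two radars that each report 80% confidence in the same class push the fused posterior above 94%, which is the expected sharpening from agreeing independent sensors.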
arXiv Detail & Related papers (2024-02-28T02:11:47Z) - RaLF: Flow-based Global and Metric Radar Localization in LiDAR Maps [8.625083692154414]
We propose RaLF, a novel deep neural network-based approach for localizing radar scans in a LiDAR map of the environment.
RaLF is composed of radar and LiDAR feature encoders, a place recognition head that generates global descriptors, and a metric localization head that predicts the 3-DoF transformation between the radar scan and the map.
We extensively evaluate our approach on multiple real-world driving datasets and show that RaLF achieves state-of-the-art performance for both place recognition and metric localization.
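Place-recognition heads like RaLF's, and likewise LRFusionPR's concatenated descriptors, ultimately perform nearest-neighbor retrieval over global descriptors. A minimal cosine-similarity retrieval sketch (illustrative names, not either paper's code):

```python
import numpy as np

def retrieve(query, db, top_k=1):
    """Rank database descriptors by cosine similarity to the query descriptor.
    Returns the top-k database indices and their similarity scores."""
    q = query / np.linalg.norm(query)
    d = db / np.linalg.norm(db, axis=1, keepdims=True)
    sims = d @ q
    order = np.argsort(-sims)[:top_k]  # descending similarity
    return order, sims[order]
```

In a full system the database descriptors come from mapped keyframes, and the metric-localization head then refines the pose against the retrieved match.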
arXiv Detail & Related papers (2023-09-18T15:37:01Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z) - Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep learning-based method that applies point cloud convolutions to radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - RaLiBEV: Radar and LiDAR BEV Fusion Learning for Anchor Box Free Object Detection Systems [13.046347364043594]
In autonomous driving, LiDAR and radar are crucial for environmental perception.
Recent state-of-the-art works reveal that the fusion of radar and LiDAR can lead to robust detection in adverse weather.
We propose a bird's-eye view fusion learning-based anchor box-free object detection system.
arXiv Detail & Related papers (2022-11-11T10:24:42Z) - LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.