RaLiFlow: Scene Flow Estimation with 4D Radar and LiDAR Point Clouds
- URL: http://arxiv.org/abs/2512.10376v1
- Date: Thu, 11 Dec 2025 07:41:33 GMT
- Title: RaLiFlow: Scene Flow Estimation with 4D Radar and LiDAR Point Clouds
- Authors: Jingyun Fu, Zhiyu Xiang, Na Zhao
- Abstract summary: We build a Radar-LiDAR scene flow dataset based on a public real-world automotive dataset. We introduce RaLiFlow, the first joint scene flow learning framework for 4D radar and LiDAR. Our method outperforms existing LiDAR-based and radar-based single-modal methods by a significant margin.
- Score: 10.906975408529895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent multimodal fusion methods, integrating images with LiDAR point clouds, have shown promise in scene flow estimation. However, the fusion of 4D millimeter-wave radar and LiDAR remains unexplored. Unlike LiDAR, radar is cheaper, more robust under varied weather conditions, and can measure point-wise velocity, making it a valuable complement to LiDAR. However, radar inputs pose challenges due to noise, low resolution, and sparsity. Moreover, no existing dataset combines LiDAR and radar data specifically for scene flow estimation. To address this gap, we construct a Radar-LiDAR scene flow dataset based on a public real-world automotive dataset. We propose an effective preprocessing strategy for radar denoising and scene flow label generation, deriving more reliable flow ground truth for radar points that fall outside object boundaries. Additionally, we introduce RaLiFlow, the first joint scene flow learning framework for 4D radar and LiDAR, which achieves effective radar-LiDAR fusion through a novel Dynamic-aware Bidirectional Cross-modal Fusion (DBCF) module and a carefully designed set of loss functions. The DBCF module integrates dynamic cues from radar into the local cross-attention mechanism, enabling the propagation of contextual information across modalities. Meanwhile, the proposed loss functions mitigate the adverse effects of unreliable radar data during training and enhance instance-level consistency in scene flow predictions from both modalities, particularly for dynamic foreground areas. Extensive experiments on the repurposed scene flow dataset demonstrate that our method outperforms existing LiDAR-based and radar-based single-modal methods by a significant margin.
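The abstract describes the DBCF module as injecting radar dynamic cues into a cross-attention exchange between modalities. As a rough illustration only, a bidirectional cross-modal fusion step of this general shape could be sketched as follows; the class name, shapes, and the choice of standard multi-head attention are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of bidirectional cross-modal fusion with a dynamic cue,
# loosely in the spirit of the DBCF module described above. All names and
# shapes are illustrative assumptions.
import torch
import torch.nn as nn


class BidirectionalCrossModalFusion(nn.Module):
    """Exchange context between radar and LiDAR point features.

    Each modality attends to the other; radar features are first augmented
    with a per-point dynamic cue (e.g. measured radial velocity).
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Radar queries attend to LiDAR keys/values, and vice versa.
        self.radar_from_lidar = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.lidar_from_radar = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Project the scalar dynamic cue into feature space.
        self.dyn_proj = nn.Linear(1, dim)

    def forward(self, radar_feat, lidar_feat, radar_vel):
        # radar_feat: (B, Nr, C), lidar_feat: (B, Nl, C), radar_vel: (B, Nr, 1)
        radar_feat = radar_feat + self.dyn_proj(radar_vel)  # inject dynamic cue
        r2l, _ = self.radar_from_lidar(radar_feat, lidar_feat, lidar_feat)
        l2r, _ = self.lidar_from_radar(lidar_feat, radar_feat, radar_feat)
        # Residual fusion: each modality keeps its own features plus cross context.
        return radar_feat + r2l, lidar_feat + l2r


fusion = BidirectionalCrossModalFusion(dim=64)
radar = torch.randn(2, 128, 64)   # sparse radar point features
lidar = torch.randn(2, 1024, 64)  # denser LiDAR point features
vel = torch.randn(2, 128, 1)      # per-point radial velocity
r_out, l_out = fusion(radar, lidar, vel)
```

Both outputs keep their input shapes, so such a block can be stacked inside a larger flow-estimation backbone.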
Related papers
- HyperDet: 3D Object Detection with Hyper 4D Radar Point Clouds [7.899148878601621]
We present HyperDet, a detector-agnostic radar-only 3D detection framework. It constructs a task-aware hyper 4D radar point cloud for standard LiDAR-oriented detectors. On MAN TruckScenes, HyperDet consistently improves over raw radar inputs with VoxelNeXt and CenterPoint.
arXiv Detail & Related papers (2026-02-12T04:21:58Z) - RadarGen: Automotive Radar Point Cloud Generation from Cameras [64.69976771710057]
We present RadarGen, a diffusion model for synthesizing realistic automotive radar point clouds from multi-view camera imagery. RadarGen adapts efficient image-latent diffusion to the radar domain by representing radar measurements in bird's-eye-view form. We show that RadarGen captures characteristic radar measurement distributions and reduces the gap to perception models trained on real data.
arXiv Detail & Related papers (2025-12-19T18:57:33Z) - 4D-RaDiff: Latent Diffusion for 4D Radar Point Cloud Generation [10.945807584683726]
We propose a novel framework to generate 4D radar point clouds for training and evaluating object detectors. The proposed 4D-RaDiff converts unlabeled bounding boxes into high-quality radar annotations and transforms existing LiDAR point cloud data into realistic radar scenes.
arXiv Detail & Related papers (2025-12-16T09:43:05Z) - CORENet: Cross-Modal 4D Radar Denoising Network with LiDAR Supervision for Autonomous Driving [6.251434533663502]
4D radar-based object detection has garnered great attention for its robustness in adverse weather conditions. The sparse and noisy nature of 4D radar point clouds poses substantial challenges for effective perception. We present CORENet, a novel cross-modal denoising framework that leverages LiDAR supervision to identify noise patterns.
arXiv Detail & Related papers (2025-08-19T03:30:21Z) - Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - Diffusion-Based Point Cloud Super-Resolution for mmWave Radar Data [8.552647576661174]
Millimeter-wave radar sensors maintain stable performance under adverse environmental conditions. However, radar point clouds are relatively sparse and contain numerous ghost points.
We propose a novel point cloud super-resolution approach for 3D mmWave radar data, named Radar-diffusion.
arXiv Detail & Related papers (2024-04-09T04:41:05Z) - RadarDistill: Boosting Radar-based Object Detection Performance via Knowledge Distillation from LiDAR Features [15.686167262542297]
RadarDistill is a knowledge distillation (KD) method that improves the representation of radar data by leveraging LiDAR data. RadarDistill successfully transfers desirable characteristics of LiDAR features into radar features using three key components. Our comparative analyses on the nuScenes dataset demonstrate that RadarDistill achieves state-of-the-art (SOTA) performance for the radar-only object detection task.
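The core idea of cross-modal feature distillation, as in work of this kind, can be sketched in a few lines: align the radar student's features to the LiDAR teacher's feature space and penalize their discrepancy. The loss choice (MSE on aligned BEV feature maps) and all names below are assumptions for illustration, not RadarDistill's actual components.

```python
# Minimal, hypothetical sketch of LiDAR-to-radar feature distillation.
# The MSE objective and 1x1 alignment layer are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureDistillLoss(nn.Module):
    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        # 1x1 conv aligns the student's channel dimension to the teacher's.
        self.align = nn.Conv2d(student_dim, teacher_dim, kernel_size=1)

    def forward(self, radar_feat, lidar_feat):
        # radar_feat: (B, Cs, H, W) student BEV features from radar
        # lidar_feat: (B, Ct, H, W) frozen teacher BEV features from LiDAR
        return F.mse_loss(self.align(radar_feat), lidar_feat.detach())


loss_fn = FeatureDistillLoss(student_dim=32, teacher_dim=64)
radar_bev = torch.randn(4, 32, 50, 50, requires_grad=True)
lidar_bev = torch.randn(4, 64, 50, 50)
loss = loss_fn(radar_bev, lidar_bev)
loss.backward()  # gradients flow only into the radar branch (teacher is detached)
```

Detaching the teacher features is what makes this a one-way transfer: the LiDAR branch supervises without being updated.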
arXiv Detail & Related papers (2024-03-08T05:15:48Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z) - Self-Supervised Scene Flow Estimation with 4D Automotive Radar [7.3287286038702035]
It remains largely unknown how to estimate the scene flow from a 4D radar.
Compared with the LiDAR point clouds, radar data are drastically sparser, noisier and in much lower resolution.
This work aims to address the above challenges and estimate scene flow from 4D radar point clouds by leveraging self-supervised learning.
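Self-supervised scene flow of the kind mentioned here typically replaces ground-truth flow labels with a geometric consistency objective. A common, generic choice is a Chamfer-style nearest-neighbor loss between the warped source cloud and the target cloud; the sketch below illustrates that generic idea only and is not this paper's specific loss.

```python
# Hypothetical sketch of a self-supervised scene flow objective: warp the
# source cloud by the predicted flow and penalize its nearest-neighbor
# distance to the target cloud. No flow labels are needed.
import torch


def chamfer_flow_loss(src, flow, tgt):
    # src, tgt: (N, 3) / (M, 3) point clouds; flow: (N, 3) predicted motion
    warped = src + flow           # move each source point by its predicted flow
    d = torch.cdist(warped, tgt)  # (N, M) pairwise Euclidean distances
    # Symmetric Chamfer term: each warped point to its nearest target, and back.
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


src = torch.randn(100, 3)
tgt = src + torch.tensor([1.0, 0.0, 0.0])        # target = source shifted +1 m in x
flow = torch.tensor([[1.0, 0.0, 0.0]]).expand(100, 3)
loss = chamfer_flow_loss(src, flow, tgt)          # close to zero for perfect flow
```

Because the loss only compares point sets, it degrades gracefully on sparse, noisy radar clouds, though noise and ghost points make the nearest-neighbor assumption weaker than it is for LiDAR.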
arXiv Detail & Related papers (2022-03-02T14:28:12Z) - LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.