Radar Velocity Transformer: Single-scan Moving Object Segmentation in Noisy Radar Point Clouds
- URL: http://arxiv.org/abs/2507.03463v1
- Date: Fri, 04 Jul 2025 10:39:13 GMT
- Title: Radar Velocity Transformer: Single-scan Moving Object Segmentation in Noisy Radar Point Clouds
- Authors: Matthias Zeller, Vardeep S. Sandhu, Benedikt Mersch, Jens Behley, Michael Heidingsfeld, Cyrill Stachniss
- Abstract summary: In this paper, we tackle the problem of moving object segmentation in noisy radar point clouds. We develop a novel transformer-based approach to accurately perform single-scan moving object segmentation in sparse radar scans. Our network runs faster than the frame rate of the sensor and shows superior segmentation results using only single-scan radar data.
- Score: 23.59980120024823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The awareness of moving objects in the surroundings of a self-driving vehicle is essential for safe and reliable autonomous navigation. The interpretation of LiDAR and camera data achieves exceptional results but typically requires accumulating and processing temporal sequences of data in order to extract motion information. In contrast, radar sensors, which are already installed in most recent vehicles, can overcome this limitation as they directly provide the Doppler velocity of the detections and hence incorporate instantaneous motion information within a single measurement. In this paper, we tackle the problem of moving object segmentation in noisy radar point clouds. We also consider differentiating parked from moving cars to enhance scene understanding. Instead of exploiting temporal dependencies to identify moving objects, we develop a novel transformer-based approach to accurately perform single-scan moving object segmentation in sparse radar scans. The key to our Radar Velocity Transformer is to incorporate the valuable velocity information throughout each module of the network, thereby enabling the precise segmentation of moving and non-moving objects. Additionally, we propose a transformer-based upsampling, which enhances performance by adaptively combining information and overcoming the limitations of interpolation on sparse point clouds. Finally, we create a new radar moving object segmentation benchmark based on the RadarScenes dataset and compare our approach to other state-of-the-art methods. Our network runs faster than the frame rate of the sensor and shows superior segmentation results using only single-scan radar data.
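The central design choice of the abstract — feeding the per-point Doppler velocity into every module rather than only the input layer — can be illustrated with a minimal sketch. The block below is a hypothetical, simplified attention layer (PyTorch; not the authors' implementation): it concatenates the ego-motion-compensated Doppler velocity onto the point features before attention, so motion information stays available at every stage.

```python
import torch
import torch.nn as nn

class VelocityAwareAttention(nn.Module):
    """Simplified self-attention block that re-injects the per-point
    Doppler velocity at its input (illustrative sketch only)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.proj_in = nn.Linear(dim + 1, dim)   # +1 channel: Doppler velocity
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats, doppler):
        # feats:   (B, N, dim) per-point features of one radar scan
        # doppler: (B, N)      ego-motion-compensated radial velocities
        x = self.proj_in(torch.cat([feats, doppler.unsqueeze(-1)], dim=-1))
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)

# Toy usage: one scan, 128 points, 32-dim features.
block = VelocityAwareAttention(dim=32)
feats, doppler = torch.randn(1, 128, 32), torch.randn(1, 128)
print(block(feats, doppler).shape)  # torch.Size([1, 128, 32])
```

A per-point classification head on top of a stack of such blocks would then separate moving from non-moving (including parked) detections; the paper's transformer-based upsampling is omitted from this sketch.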
Related papers
- SemRaFiner: Panoptic Segmentation in Sparse and Noisy Radar Point Clouds [23.935019339778236]
We address the problem of panoptic segmentation in sparse radar point clouds. Our approach, called SemRaFiner, accounts for changing density in sparse radar point clouds. Our experiments suggest that our approach outperforms state-of-the-art methods for radar-based panoptic segmentation.
arXiv Detail & Related papers (2025-07-09T14:45:18Z)
- Radar Tracker: Moving Instance Tracking in Sparse and Noisy Radar Point Clouds [25.36192517603375]
We address moving instance tracking in sparse radar point clouds to enhance scene interpretation. We propose a learning-based radar tracker incorporating temporal offset predictions to enable direct center-based association (see the sketch below). Our approach shows improved performance on the moving instance tracking benchmark of the RadarScenes dataset.
arXiv Detail & Related papers (2025-07-04T09:57:28Z)
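The center-based association idea from the entry above can be sketched in a few lines. The function below is illustrative only (the greedy matching strategy and the `max_dist` gate are assumptions, not the paper's algorithm): each current instance center is shifted by its predicted temporal offset and matched to the nearest unused center from the previous scan.

```python
import numpy as np

def associate_centers(curr_centers, pred_offsets, prev_centers, max_dist=2.0):
    """Greedy center-based association (illustrative): shift each current
    instance center by its predicted temporal offset, then match it to the
    nearest unused center of the previous scan within `max_dist` meters."""
    shifted = curr_centers + pred_offsets                    # (M, 2)
    dists = np.linalg.norm(shifted[:, None] - prev_centers[None, :], axis=-1)
    matches, used = {}, set()
    for i in np.argsort(dists.min(axis=1)):                  # easiest first
        for j in np.argsort(dists[i]):
            if j not in used and dists[i, j] <= max_dist:
                matches[int(i)] = int(j)
                used.add(j)
                break
    return matches  # {current instance id -> previous instance id}

prev = np.array([[0.0, 0.0], [5.0, 0.0]])
curr = np.array([[0.4, 0.1]])
print(associate_centers(curr, np.array([[-0.3, 0.0]]), prev))  # {0: 0}
```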
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation [21.72892413572166]
We propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data.
Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA and RADIal datasets.
arXiv Detail & Related papers (2023-10-03T17:59:05Z)
- Radar Instance Transformer: Reliable Moving Instance Segmentation in Sparse Radar Point Clouds [24.78323023852578]
LiDARs and cameras enhance scene interpretation but do not provide direct motion information and face limitations under adverse weather.
Radar sensors overcome these limitations and provide Doppler velocities, delivering direct information on dynamic objects.
Our Radar Instance Transformer enriches the current radar scan with temporal information without passing aggregated scans through a neural network.
arXiv Detail & Related papers (2023-09-28T13:37:30Z)
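A minimal sketch of how a scan can be enriched with temporal information without running aggregated scans through the network, as the Radar Instance Transformer entry above describes; the nearest-neighbor lookup and the `radius` threshold are assumptions made for illustration, not the paper's mechanism.

```python
import numpy as np

def enrich_with_history(curr_xy, curr_feats, prev_xy, prev_feats, radius=1.5):
    """Append temporal context to the current scan before the network
    sees it (illustrative): every current point copies the feature of its
    nearest previous-scan neighbor, or zeros if none lies within `radius`."""
    d = np.linalg.norm(curr_xy[:, None] - prev_xy[None, :], axis=-1)  # (N, M)
    temporal = prev_feats[d.argmin(axis=1)].copy()
    temporal[d.min(axis=1) > radius] = 0.0     # no nearby history
    return np.concatenate([curr_feats, temporal], axis=-1)  # (N, F + F_prev)
```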
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method that performs convolutions directly on point clouds of radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of the input point clouds (see the sketch below).
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
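The distance-dependent clustering mentioned above can be sketched as a neighborhood test whose radius grows with range, reflecting that radar detections thin out with distance; the linear radius model and its `base_eps`/`gain` parameters are assumptions, not the paper's values.

```python
import numpy as np

def range_adaptive_neighbors(points, base_eps=0.5, gain=0.02):
    """Neighborhood test with a range-dependent radius (illustrative):
    two detections are neighbors if their distance falls inside the
    larger of their per-point radii."""
    ranges = np.linalg.norm(points, axis=-1)       # distance to the sensor
    eps = base_eps + gain * ranges                 # per-point cluster radius
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    return d <= np.maximum(eps[:, None], eps[None, :])  # (N, N) boolean
```

A DBSCAN-style expansion over this neighborhood matrix then yields range-aware clusters.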
- Gaussian Radar Transformer for Semantic Segmentation in Noisy Radar Data [33.457104508061015]
Scene understanding is crucial for autonomous robots in dynamic environments to predict future states, avoid collisions, and plan paths.
Camera and LiDAR perception has made tremendous progress in recent years but faces limitations under adverse weather conditions.
To leverage the full potential of multi-modal sensor suites, radar sensors are essential for safety-critical tasks and are already installed in most new vehicles today.
arXiv Detail & Related papers (2022-12-07T15:05:03Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach in which an off-the-shelf tracker first extracts vehicle bounding boxes and a small neural network then regresses the vehicle velocity (see the sketch below).
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
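A minimal sketch of the second step of the two-step pipeline above: a small network regressing metric velocity from a short track of bounding boxes produced by an off-the-shelf tracker. The input encoding, track length, and network size are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Hypothetical second stage: a small MLP regressing metric velocity from a
# short track of normalized bounding boxes (x, y, w, h) emitted by a tracker.
class BoxTrackVelocityRegressor(nn.Module):
    def __init__(self, track_len: int = 5):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(track_len * 4, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2),  # (longitudinal, lateral) velocity in m/s
        )

    def forward(self, boxes):          # boxes: (B, track_len, 4)
        return self.mlp(boxes.flatten(1))

tracks = torch.rand(8, 5, 4)           # 8 tracks of 5 boxes each
print(BoxTrackVelocityRegressor()(tracks).shape)  # torch.Size([8, 2])
```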
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the preservation of phase information during interference removal (see the sketch below).
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
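The CVCNN building block from the entry above can be sketched as a complex convolution implemented with two real convolutions; this follows the standard complex product and is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Complex convolution from two real convolutions, following
    (a + ib) * (w_r + i w_i) = (a*w_r - b*w_i) + i(a*w_i + b*w_r).
    Biases are disabled so the identity holds exactly."""

    def __init__(self, in_ch, out_ch, kernel, padding=0):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, kernel, padding=padding, bias=False)
        self.conv_i = nn.Conv1d(in_ch, out_ch, kernel, padding=padding, bias=False)

    def forward(self, x_r, x_i):   # real/imaginary parts, each (B, C, T)
        return (self.conv_r(x_r) - self.conv_i(x_i),
                self.conv_r(x_i) + self.conv_i(x_r))

# Toy I/Q snippet: batch of 2, 1 channel, 64 fast-time samples.
x_r, x_i = torch.randn(2, 1, 64), torch.randn(2, 1, 64)
y_r, y_i = ComplexConv1d(1, 8, kernel=3, padding=1)(x_r, x_i)
print(y_r.shape, y_i.shape)  # torch.Size([2, 8, 64]) twice
```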
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion (sketched below).
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
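A minimal sketch of an attention-based late fusion in the spirit of the entry above: LiDAR detection features attend over radar features, and a learned gate blends the two modalities. Layer sizes and the gating scheme are assumptions, not RadarNet's actual design.

```python
import torch
import torch.nn as nn

class AttentionLateFusion(nn.Module):
    """Attention-based late fusion (illustrative sketch): each detection's
    LiDAR feature attends over radar features; a learned sigmoid gate
    decides how much radar context to mix in."""

    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, lidar_feats, radar_feats):
        # lidar_feats: (B, N, dim) detections; radar_feats: (B, M, dim)
        radar_ctx, _ = self.attn(lidar_feats, radar_feats, radar_feats)
        g = self.gate(torch.cat([lidar_feats, radar_ctx], dim=-1))
        return g * lidar_feats + (1 - g) * radar_ctx

fused = AttentionLateFusion(dim=32)(torch.randn(1, 10, 32), torch.randn(1, 40, 32))
print(fused.shape)  # torch.Size([1, 10, 32])
```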