Self-Supervised Moving Object Segmentation of Sparse and Noisy Radar Point Clouds
- URL: http://arxiv.org/abs/2511.02395v1
- Date: Tue, 04 Nov 2025 09:21:45 GMT
- Title: Self-Supervised Moving Object Segmentation of Sparse and Noisy Radar Point Clouds
- Authors: Leon Schwarzer, Matthias Zeller, Daniel Casado Herraez, Simon Dierl, Michael Heidingsfeld, Cyrill Stachniss,
- Abstract summary: Moving object segmentation is a crucial task for safe and reliable autonomous mobile systems like self-driving cars. Radar point clouds are often sparse and noisy, making data annotation for use in supervised learning very tedious, time-consuming, and cost-intensive. We propose a novel clustering-based contrastive loss function with cluster refinement based on dynamic points removal to pretrain the network to produce motion-aware representations of the radar data.
- Score: 17.737940705639573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Moving object segmentation is a crucial task for safe and reliable autonomous mobile systems like self-driving cars, improving the reliability and robustness of subsequent tasks like SLAM or path planning. While the segmentation of camera or LiDAR data is widely researched and achieves great results, it often introduces an increased latency by requiring the accumulation of temporal sequences to gain the necessary temporal context. Radar sensors overcome this problem with their ability to provide a direct measurement of a point's Doppler velocity, which can be exploited for single-scan moving object segmentation. However, radar point clouds are often sparse and noisy, making data annotation for use in supervised learning very tedious, time-consuming, and cost-intensive. To overcome this problem, we address the task of self-supervised moving object segmentation of sparse and noisy radar point clouds. We follow a two-step approach of contrastive self-supervised representation learning with subsequent supervised fine-tuning using limited amounts of annotated data. We propose a novel clustering-based contrastive loss function with cluster refinement based on dynamic points removal to pretrain the network to produce motion-aware representations of the radar data. Our method improves label efficiency after fine-tuning, effectively boosting state-of-the-art performance by self-supervised pretraining.
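The abstract describes a two-step approach: cluster the radar points, then use cluster membership as a supervisory signal in a contrastive pretraining loss before fine-tuning on limited labels. As a minimal illustration of that idea (not the paper's actual implementation), the sketch below clusters points on their spatial and Doppler coordinates with a naive single-linkage pass, then treats same-cluster points as positives in an InfoNCE-style loss. The clustering algorithm, `eps`, and `temperature` values are assumptions; the paper's cluster refinement via dynamic-point removal is omitted.

```python
import numpy as np

def cluster_points(points, eps=1.0):
    """Naive single-linkage clustering on (x, y, doppler) coordinates.

    A stand-in for the paper's clustering step; the actual algorithm
    and distance threshold are assumptions here.
    """
    n = len(points)
    labels = -np.ones(n, dtype=int)  # -1 means "not yet assigned"
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        # flood-fill all points transitively within eps of point i
        stack = [i]
        labels[i] = cluster
        while stack:
            j = stack.pop()
            dist = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((dist < eps) & (labels == -1))[0]:
                labels[k] = cluster
                stack.append(k)
        cluster += 1
    return labels

def cluster_contrastive_loss(embeddings, labels, temperature=0.1):
    """InfoNCE-style loss: points sharing a cluster label are positives."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    loss, count = 0.0, 0
    for i in range(len(z)):
        pos = (labels == labels[i]) & (np.arange(len(z)) != i)
        if not pos.any():
            continue  # singleton cluster: no positive pair
        log_prob = sim[i] - np.log(np.exp(sim[i]).sum())
        loss += -log_prob[pos].mean()
        count += 1
    return loss / max(count, 1)
```

Minimizing this loss pulls embeddings of same-cluster (motion-consistent) points together and pushes other points apart, which is the sense in which pretraining yields motion-aware representations before supervised fine-tuning.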
Related papers
- Radar Velocity Transformer: Single-scan Moving Object Segmentation in Noisy Radar Point Clouds [23.59980120024823]
In this paper, we tackle the problem of moving object segmentation in noisy radar point clouds. We develop a novel transformer-based approach to perform single-scan moving object segmentation in sparse radar scans accurately. Our network runs faster than the frame rate of the sensor and shows superior segmentation results using only single-scan radar data.
arXiv Detail & Related papers (2025-07-04T10:39:13Z) - Radar Tracker: Moving Instance Tracking in Sparse and Noisy Radar Point Clouds [25.36192517603375]
We address moving instance tracking in sparse radar point clouds to enhance scene interpretation. We propose a learning-based radar tracker incorporating temporal offset predictions to enable direct center-based association. Our approach shows improved performance on the moving instance tracking benchmark of the RadarScenes dataset.
arXiv Detail & Related papers (2025-07-04T09:57:28Z) - OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z) - Self-Supervised Class-Agnostic Motion Prediction with Spatial and Temporal Consistency Regularizations [53.797896854533384]
Class-agnostic motion prediction methods directly predict the motion of the entire point cloud.
While most existing methods rely on fully-supervised learning, the manual labeling of point cloud data is laborious and time-consuming.
We introduce three simple spatial and temporal regularization losses, which facilitate the self-supervised training process effectively.
arXiv Detail & Related papers (2024-03-20T02:58:45Z) - Leveraging Self-Supervised Instance Contrastive Learning for Radar Object Detection [7.728838099011661]
This paper presents RiCL, an instance contrastive learning framework to pre-train radar object detectors.
We aim to pre-train an object detector's backbone, head and neck to learn with fewer data.
arXiv Detail & Related papers (2024-02-13T12:53:33Z) - Radar Instance Transformer: Reliable Moving Instance Segmentation in Sparse Radar Point Clouds [24.78323023852578]
LiDARs and cameras enhance scene interpretation but do not provide direct motion information and face limitations under adverse weather.
Radar sensors overcome these limitations and provide Doppler velocities, delivering direct information on dynamic objects.
Our Radar Instance Transformer enriches the current radar scan with temporal information without passing aggregated scans through a neural network.
arXiv Detail & Related papers (2023-09-28T13:37:30Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method that applies convolutions to point clouds of radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z) - LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z) - Streaming Object Detection for 3-D Point Clouds [29.465873948076766]
LiDAR provides a prominent sensory modality that informs many existing perceptual systems.
The latency for perceptual systems based on point cloud data can be dominated by the amount of time for a complete rotational scan.
We show how operating on LiDAR data in its native streaming formulation offers several advantages for self driving object detection.
arXiv Detail & Related papers (2020-05-04T21:55:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.