Radar Tracker: Moving Instance Tracking in Sparse and Noisy Radar Point Clouds
- URL: http://arxiv.org/abs/2507.03441v1
- Date: Fri, 04 Jul 2025 09:57:28 GMT
- Title: Radar Tracker: Moving Instance Tracking in Sparse and Noisy Radar Point Clouds
- Authors: Matthias Zeller, Daniel Casado Herraez, Jens Behley, Michael Heidingsfeld, Cyrill Stachniss
- Abstract summary: We address moving instance tracking in sparse radar point clouds to enhance scene interpretation. We propose a learning-based radar tracker incorporating temporal offset predictions to enable direct center-based association. Our approach shows improved performance on the moving instance tracking benchmark of the RadarScenes dataset.
- Score: 25.36192517603375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robots and autonomous vehicles should be aware of what happens in their surroundings. The segmentation and tracking of moving objects are essential for reliable path planning, including collision avoidance. We investigate this estimation task for vehicles using radar sensing. We address moving instance tracking in sparse radar point clouds to enhance scene interpretation. We propose a learning-based radar tracker incorporating temporal offset predictions to enable direct center-based association and enhance segmentation performance by including additional motion cues. We implement attention-based tracking for sparse radar scans to incorporate appearance features. The final association combines geometric and appearance features to overcome the limitations of center-based tracking and associate instances reliably. Our approach shows improved performance on the moving instance tracking benchmark of the RadarScenes dataset compared to the current state of the art.
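The association step is concrete enough to sketch. Below is a minimal, hedged illustration of center-based association that combines predicted temporal offsets (geometric cue) with appearance embeddings, assuming the network has already produced per-instance centers, offsets, and features; the function name, cost weights, and gating threshold are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_instances(prev_centers, prev_feats, curr_centers, curr_feats,
                        pred_offsets, w_geo=1.0, w_app=1.0, max_cost=2.0):
    """Match instances across scans by combining center distance (after
    applying predicted temporal offsets) with appearance similarity."""
    # Shift previous centers by the predicted per-instance motion offsets.
    shifted = prev_centers + pred_offsets                    # (M, 2)
    # Geometric cost: distance between shifted and current centers.
    geo = np.linalg.norm(shifted[:, None, :] - curr_centers[None, :, :], axis=-1)
    # Appearance cost: cosine distance between instance embeddings.
    a = prev_feats / np.linalg.norm(prev_feats, axis=1, keepdims=True)
    b = curr_feats / np.linalg.norm(curr_feats, axis=1, keepdims=True)
    app = 1.0 - a @ b.T
    cost = w_geo * geo + w_app * app
    rows, cols = linear_sum_assignment(cost)                 # one-to-one matching
    # Reject matches whose combined cost is implausibly high (new/ended tracks).
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]
```

The appearance term is what lets the matcher disambiguate nearby instances whose shifted centers alone are ambiguous, which is the stated motivation for combining both cues.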
Related papers
- SemRaFiner: Panoptic Segmentation in Sparse and Noisy Radar Point Clouds [23.935019339778236]
We address the problem of panoptic segmentation in sparse radar point clouds. Our approach, called SemRaFiner, accounts for changing density in sparse radar point clouds. Our experiments suggest that our approach outperforms state-of-the-art methods for radar-based panoptic segmentation.
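The summary does not spell out how the changing density is handled; a common device, sketched below as an assumption rather than SemRaFiner's actual mechanism, is a density-adaptive neighborhood whose radius is derived from the local k-NN distance.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_neighborhoods(points, k=16, r_min=0.5, r_max=5.0):
    """Pick a per-point neighborhood radius from the k-NN distance, so dense
    clusters use tight radii and sparse regions widen their search."""
    tree = cKDTree(points)
    # Distance to the k-th neighbor is a cheap local-density proxy
    # (column 0 of the query result is the point itself).
    dists, _ = tree.query(points, k=k + 1)
    radii = np.clip(dists[:, -1], r_min, r_max)
    # Gather neighbors within each adaptive radius.
    return [tree.query_ball_point(p, r) for p, r in zip(points, radii)]
```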
arXiv Detail & Related papers (2025-07-09T14:45:18Z)
- Radar Velocity Transformer: Single-scan Moving Object Segmentation in Noisy Radar Point Clouds [23.59980120024823]
In this paper, we tackle the problem of moving object segmentation in noisy radar point clouds. We develop a novel transformer-based approach to accurately perform single-scan moving object segmentation in sparse radar scans. Our network runs faster than the frame rate of the sensor and shows superior segmentation results using only single-scan radar data.
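As a rough, self-contained illustration of a single-scan transformer segmenter (not the paper's architecture), each radar point can carry position, RCS, and ego-motion-compensated Doppler velocity, with full self-attention across the scan:

```python
import torch
import torch.nn as nn

class SingleScanMOS(nn.Module):
    """Minimal per-point moving/static classifier. The Doppler velocity
    channel is the key cue that makes single-scan segmentation feasible."""
    def __init__(self, d_in=4, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(d_in, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)  # logits: static vs. moving

    def forward(self, pts):                # pts: (B, N, 4) = x, y, RCS, Doppler
        x = self.encoder(self.embed(pts))  # self-attention across the whole scan
        return self.head(x)                # (B, N, 2) per-point logits
```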
arXiv Detail & Related papers (2025-07-04T10:39:13Z)
- Multi-Object Tracking based on Imaging Radar 3D Object Detection [0.13499500088995461]
This paper presents an approach for tracking surrounding traffic participants with a classical tracking algorithm.
Learning-based object detectors have been shown to work adequately on lidar and camera data, while detectors using standard radar data have proven inferior.
With imaging radars, radar-based object detection has improved greatly, but it remains limited compared to lidar sensors due to the sparsity of the radar point cloud.
The tracking algorithm must overcome the limited detection quality while generating consistent tracks.
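The classical core of such a tracker is typically a recursive filter per track; a minimal constant-velocity Kalman filter sketch follows, with assumed noise parameters and a 2D state for brevity.

```python
import numpy as np

class ConstantVelocityKF:
    """One track's filter: state [x, y, vx, vy], measurements are detected
    object centers [x, y]."""
    def __init__(self, xy, dt=0.1):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.diag([1.0, 1.0, 10.0, 10.0])   # uncertain initial velocity
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = 0.1 * np.eye(4)                   # process noise
        self.R = 0.5 * np.eye(2)                   # measurement noise (sparse radar)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                          # predicted center for association

    def update(self, z):
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Gating and track management (spawning tracks on unmatched detections, retiring tracks after repeated misses) sit on top of this filter, and that layer is where consistent tracks are enforced despite the limited detection quality.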
arXiv Detail & Related papers (2024-06-03T05:46:23Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Radar Instance Transformer: Reliable Moving Instance Segmentation in Sparse Radar Point Clouds [24.78323023852578]
LiDARs and cameras enhance scene interpretation but do not provide direct motion information and face limitations under adverse weather.
Radar sensors overcome these limitations and provide Doppler velocities, delivering direct information on dynamic objects.
Our Radar Instance Transformer enriches the current radar scan with temporal information without passing aggregated scans through a neural network.
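A hedged sketch of that idea, assuming past scans are already ego-motion-aligned into the current frame: temporal cues are attached to the current scan as extra per-point channels, so only one scan ever passes through the backbone. The helper and its choice of features are illustrative, not the paper's design.

```python
import numpy as np
from scipy.spatial import cKDTree

def enrich_current_scan(curr_xy, past_scans, max_dist=2.0):
    """For each current point, look up the nearest point in each aligned past
    scan and record its Doppler velocity plus a validity flag. past_scans is
    a list of (past_xy, past_vel) arrays in the current frame."""
    feats = []
    for past_xy, past_vel in past_scans:
        tree = cKDTree(past_xy)
        d, idx = tree.query(curr_xy, k=1)
        v = past_vel[idx].astype(float)
        v[d > max_dist] = 0.0                  # no reliable temporal neighbor
        feats.append(np.stack([v, (d <= max_dist).astype(float)], axis=1))
    return np.concatenate([curr_xy] + feats, axis=1)  # (N, 2 + 2*T)
```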
arXiv Detail & Related papers (2023-09-28T13:37:30Z)
- RaTrack: Moving Object Detection and Tracking with 4D Radar Point Cloud [10.593320435411714]
We introduce RaTrack, an innovative solution tailored for radar-based tracking.
Our method focuses on motion segmentation and clustering, enriched by a motion estimation module.
RaTrack showcases superior tracking precision of moving objects, largely surpassing the performance of the state of the art.
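The segment-then-cluster stage can be sketched with off-the-shelf DBSCAN; the parameters below are guesses tuned for sparse radar returns, not RaTrack's actual clustering.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_moving_points(points, moving_mask, eps=1.5, min_samples=2):
    """Class-agnostic object proposals: keep only points flagged as moving,
    then group them spatially. min_samples is low because a radar object may
    produce only a handful of returns."""
    moving = points[moving_mask]
    if len(moving) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(moving)
    # One point set per cluster id; label -1 marks unclustered noise.
    return [moving[labels == k] for k in range(labels.max() + 1)]
```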
arXiv Detail & Related papers (2023-09-18T13:02:29Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep learning-based method that applies convolutions to radar detection point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
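Distance-dependent clustering can be illustrated by letting the grouping radius grow with range, since radar detections spread out with distance; the constants below are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_dependent_groups(points, r0=0.5, alpha=0.02):
    """Neighborhoods whose radius grows with range from the sensor: a fixed
    radius over-merges close to the car and isolates far-away points."""
    xy = points[:, :2]
    radii = r0 + alpha * np.linalg.norm(xy, axis=1)  # per-point grouping radius
    tree = cKDTree(xy)
    return [tree.query_ball_point(p, r) for p, r in zip(xy, radii)]
```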
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- AiATrack: Attention in Attention for Transformer Visual Tracking [89.94386868729332]
Transformer trackers have achieved impressive advancements recently, where the attention mechanism plays an important role.
We propose an attention in attention (AiA) module, which enhances appropriate correlations and suppresses erroneous ones by seeking consensus among all correlation vectors.
Our AiA module can be readily applied to both self-attention blocks and cross-attention blocks to facilitate feature aggregation and information propagation for visual tracking.
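A parameter-free sketch of the consensus idea follows; the published AiA module adds learned projections around the same core.

```python
import torch
import torch.nn.functional as F

def aia_attention(q, k, v, tau=1.0):
    """Refine the raw query-key correlation map by letting each correlation
    vector attend to all the others before values are aggregated, so
    correlations that agree with their peers are amplified and outliers
    suppressed. q: (Nq, d); k and v: (Nk, d)."""
    d = q.shape[-1]
    corr = q @ k.T / d ** 0.5                        # raw correlations (Nq, Nk)
    cols = corr.T                                    # one correlation vector per key
    inner = F.softmax(cols @ cols.T / tau, dim=-1)   # agreement between vectors
    refined = corr + (inner @ cols).T                # residual consensus update
    return F.softmax(refined, dim=-1) @ v            # aggregate values as usual
```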
arXiv Detail & Related papers (2022-07-20T00:44:03Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
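Voxel-based early fusion can be sketched as rasterizing both modalities into a single bird's-eye-view grid with separate channels, so a shared backbone sees both from the first layer; grid extent and resolution below are assumptions.

```python
import numpy as np

def voxelize_early_fusion(lidar_xy, radar_xy, radar_vel, extent=50.0, res=0.5):
    """BEV grid with channels: lidar occupancy, radar occupancy, radar
    radial (Doppler) velocity. extent is metres from the ego vehicle."""
    n = int(2 * extent / res)
    grid = np.zeros((3, n, n), dtype=np.float32)

    def to_cells(xy):
        ij = ((xy + extent) / res).astype(int)
        keep = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
        return ij[keep], keep

    lij, _ = to_cells(lidar_xy)
    grid[0, lij[:, 0], lij[:, 1]] = 1.0              # lidar occupancy
    rij, keep = to_cells(radar_xy)
    grid[1, rij[:, 0], rij[:, 1]] = 1.0              # radar occupancy
    grid[2, rij[:, 0], rij[:, 1]] = radar_vel[keep]  # velocity channel
    return grid
```

The attention-based late fusion would then reweight per-object features from both streams; that stage is harder to sketch faithfully from the summary alone.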
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.