Radar Instance Transformer: Reliable Moving Instance Segmentation in Sparse Radar Point Clouds
- URL: http://arxiv.org/abs/2309.16435v1
- Date: Thu, 28 Sep 2023 13:37:30 GMT
- Title: Radar Instance Transformer: Reliable Moving Instance Segmentation in Sparse Radar Point Clouds
- Authors: Matthias Zeller and Vardeep S. Sandhu and Benedikt Mersch and Jens Behley and Michael Heidingsfeld and Cyrill Stachniss
- Abstract summary: LiDARs and cameras enhance scene interpretation but do not provide direct motion information and face limitations under adverse weather.
Radar sensors overcome these limitations and provide Doppler velocities, delivering direct information on dynamic objects.
Our Radar Instance Transformer enriches the current radar scan with temporal information without passing aggregated scans through a neural network.
- Score: 24.78323023852578
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The perception of moving objects is crucial for autonomous robots performing
collision avoidance in dynamic environments. LiDARs and cameras tremendously
enhance scene interpretation but do not provide direct motion information and
face limitations under adverse weather. Radar sensors overcome these
limitations and provide Doppler velocities, delivering direct information on
dynamic objects. In this paper, we address the problem of moving instance
segmentation in radar point clouds to enhance scene interpretation for
safety-critical tasks. Our Radar Instance Transformer enriches the current
radar scan with temporal information without passing aggregated scans through a
neural network. We propose a full-resolution backbone to prevent information
loss in sparse point cloud processing. Our instance transformer head not only
incorporates essential information to enhance segmentation but also enables
reliable, class-agnostic instance assignments. In sum, our approach shows
superior performance on the new moving instance segmentation benchmarks,
including diverse environments, and provides model-agnostic modules to enhance
scene interpretation. The benchmark is based on the RadarScenes dataset and
will be made available upon acceptance.
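To make the Doppler cue concrete: because each radar point carries a radial velocity, subtracting the component induced by the sensor's own motion leaves stationary points near zero and makes moving points stand out. The sketch below illustrates this generic preprocessing step; the 0.5 m/s threshold, array shapes, and frame conventions are assumptions, not the paper's pipeline.

```python
# Minimal sketch (not the paper's method): flag potentially moving
# radar points by ego-motion compensating measured Doppler velocities.
import numpy as np

def compensate_doppler(xy: np.ndarray, v_r: np.ndarray,
                       ego_v: np.ndarray) -> np.ndarray:
    """Remove the ego-motion component from radial velocities.

    xy    : (N, 2) point positions in the sensor frame [m]
    v_r   : (N,)   measured radial (Doppler) velocities [m/s]
    ego_v : (2,)   sensor velocity in the same frame [m/s]
    """
    # Unit line-of-sight vectors from the sensor to each point.
    los = xy / np.linalg.norm(xy, axis=1, keepdims=True)
    # A stationary target's measured radial velocity equals
    # -los . ego_v, so adding los . ego_v back cancels it.
    return v_r + los @ ego_v

def flag_moving(xy, v_r, ego_v, threshold=0.5):
    """Boolean mask of points whose compensated Doppler magnitude
    exceeds `threshold` m/s (an assumed value)."""
    return np.abs(compensate_doppler(xy, v_r, ego_v)) > threshold
```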
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Leveraging Self-Supervised Instance Contrastive Learning for Radar Object Detection [7.728838099011661]
This paper presents RiCL, an instance contrastive learning framework to pre-train radar object detectors.
We aim to pre-train an object detector's backbone, head and neck to learn with fewer data.
arXiv Detail & Related papers (2024-02-13T12:53:33Z)
- TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation [21.72892413572166]
We propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data.
Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA and RADIal datasets.
arXiv Detail & Related papers (2023-10-03T17:59:05Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning-based method that applies convolutions to radar detection point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
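A rough illustration of the distance-dependent clustering idea: radar point density drops with range, so the neighborhood radius used for grouping detections can grow with distance. The sketch below runs DBSCAN per range shell with a range-scaled radius; the shell edges and radius schedule are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch (assumed parameters): distance-dependent clustering
# of 2-D radar detections via per-range-shell DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_by_range(xy, base_eps=0.5, eps_per_m=0.02,
                     shell_edges=(0.0, 25.0, 50.0, 100.0, np.inf)):
    ranges = np.linalg.norm(xy, axis=1)
    labels = np.full(len(xy), -1)            # -1 marks noise
    next_id = 0
    for lo, hi in zip(shell_edges[:-1], shell_edges[1:]):
        idx = np.where((ranges >= lo) & (ranges < hi))[0]
        if len(idx) == 0:
            continue
        # Widen the neighborhood radius for far, sparse detections.
        eps = base_eps + eps_per_m * ranges[idx].mean()
        sub = DBSCAN(eps=eps, min_samples=2).fit_predict(xy[idx])
        keep = sub >= 0
        labels[idx[keep]] = sub[keep] + next_id
        if keep.any():
            next_id = labels.max() + 1
    return labels
```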
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- Event-Free Moving Object Segmentation from Moving Ego Vehicle [88.33470650615162]
Moving object segmentation (MOS) in dynamic scenes is an important, challenging, but under-explored research topic for autonomous driving.
Most segmentation methods leverage motion cues obtained from optical flow maps.
We propose to exploit event cameras, which provide rich motion cues without relying on optical flow, for better video understanding.
arXiv Detail & Related papers (2023-04-28T23:43:10Z)
- Gaussian Radar Transformer for Semantic Segmentation in Noisy Radar Data [33.457104508061015]
Scene understanding is crucial for autonomous robots in dynamic environments for making future state predictions, avoiding collisions, and path planning.
Camera and LiDAR perception have made tremendous progress in recent years but face limitations under adverse weather conditions.
To leverage the full potential of multi-modal sensor suites, radar sensors are essential for safety-critical tasks and are already installed in most new vehicles today.
arXiv Detail & Related papers (2022-12-07T15:05:03Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
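Such a complex-valued convolution is commonly realized with two real-valued kernels per layer, following the complex product (a + ib)(c + id) = (ac - bd) + i(ad + bc), which is what preserves phase. A minimal PyTorch sketch of a generic building block of this kind, not the paper's exact layer:

```python
# Minimal sketch: a complex 2-D convolution y = W * x with
# W = Wr + i*Wi and x = xr + i*xi, expanded into four real convs.
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # Wr
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # Wi

    def forward(self, xr, xi):
        # (xr + i*xi) * (Wr + i*Wi): real and imaginary output parts.
        yr = self.conv_r(xr) - self.conv_i(xi)
        yi = self.conv_i(xr) + self.conv_r(xi)
        return yr, yi
```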
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
- Multi-View Radar Semantic Segmentation [3.2093811507874768]
Automotive radars are low-cost active sensors that measure properties of surrounding objects.
They are seldom used for scene understanding due to the size and complexity of radar raw data.
We propose several novel architectures, and their associated losses, which analyse multiple "views" of the range-angle-Doppler radar tensor to segment it semantically.
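The "views" here are typically 2-D projections of the 3-D range-angle-Doppler (RAD) tensor obtained by collapsing one axis at a time. A minimal sketch under an assumed (range, angle, Doppler) axis order and sum aggregation:

```python
# Minimal sketch (assumed axis order): derive the three 2-D views
# of a range-angle-Doppler power tensor by marginalizing one axis.
import numpy as np

def rad_views(rad: np.ndarray) -> dict:
    """rad: (R, A, D) power tensor -> three 2-D maps."""
    return {
        "range_angle":   rad.sum(axis=2),  # collapse Doppler
        "range_doppler": rad.sum(axis=1),  # collapse angle
        "angle_doppler": rad.sum(axis=0),  # collapse range
    }
```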
arXiv Detail & Related papers (2021-03-30T09:56:41Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
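Voxel-based early fusion can be pictured as scattering each sensor's points into a shared bird's-eye-view grid and stacking the per-sensor grids as input channels. A minimal sketch with assumed grid extents and a plain occupancy feature, not RadarNet's actual encoding:

```python
# Minimal sketch (assumed grid and feature): rasterize points into a
# bird's-eye-view occupancy grid; stacking lidar and radar grids as
# channels yields a simple early-fused input for a 2-D backbone.
import numpy as np

def bev_occupancy(points, x_range=(-50.0, 50.0),
                  y_range=(-50.0, 50.0), res=0.5):
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    grid = np.zeros((nx, ny), dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / res).astype(int)
    iy = ((points[:, 1] - y_range[0]) / res).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[ix[ok], iy[ok]] = 1.0
    return grid

# Hypothetical usage with lidar_pts / radar_pts arrays of shape (N, 2+):
# fused = np.stack([bev_occupancy(lidar_pts), bev_occupancy(radar_pts)])
```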
arXiv Detail & Related papers (2020-07-28T17:15:02Z)