RSS-Net: Weakly-Supervised Multi-Class Semantic Segmentation with FMCW
Radar
- URL: http://arxiv.org/abs/2004.03451v1
- Date: Thu, 2 Apr 2020 11:40:26 GMT
- Title: RSS-Net: Weakly-Supervised Multi-Class Semantic Segmentation with FMCW
Radar
- Authors: Prannay Kaul, Daniele De Martini, Matthew Gadd, Paul Newman
- Abstract summary: We advocate radar over the traditional sensors used for this task as it operates at longer ranges and is substantially more robust to adverse weather and illumination conditions.
We exploit the largest radar-focused urban autonomy dataset collected to date, correlating radar scans with RGB cameras and LiDAR sensors.
We present the network with multi-channel radar scan inputs in order to deal with ephemeral and dynamic scene objects.
- Score: 26.56755178602111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an efficient annotation procedure and an application
thereof to end-to-end, rich semantic segmentation of the sensed environment
using FMCW scanning radar. We advocate radar over the traditional sensors used
for this task as it operates at longer ranges and is substantially more robust
to adverse weather and illumination conditions. We avoid laborious manual
labelling by exploiting the largest radar-focused urban autonomy dataset
collected to date, correlating radar scans with RGB cameras and LiDAR sensors,
for which semantic segmentation is an already consolidated procedure. The
training procedure leverages a state-of-the-art natural image segmentation
system which is publicly available and as such, in contrast to previous
approaches, allows for the production of copious labels for the radar stream by
incorporating four camera and two LiDAR streams. Additionally, the losses are
computed taking into account labels out to the radar sensor horizon by
accumulating LiDAR returns along a pose-chain ahead of and behind the current
vehicle
position. Finally, we present the network with multi-channel radar scan inputs
in order to deal with ephemeral and dynamic scene objects.
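The label-accumulation step lends itself to a short sketch. The following is a
minimal illustration of the idea, not the authors' code: labelled LiDAR returns
from scans ahead of and behind the current position are transformed into the
current radar frame via the pose-chain, keeping points within the radar
horizon. All function and variable names are hypothetical.

```python
import numpy as np

def accumulate_labelled_lidar(scans, labels, poses, radar_pose, horizon=100.0):
    """Accumulate semantically labelled LiDAR returns along a pose-chain
    and express them in the current radar frame (illustrative sketch).

    scans:      list of (N_i, 3) xyz arrays, one per pose-chain node
    labels:     list of (N_i,) class-id arrays from image segmentation
    poses:      list of 4x4 world-from-LiDAR transforms, one per scan
    radar_pose: 4x4 world-from-radar transform of the current scan
    """
    radar_from_world = np.linalg.inv(radar_pose)
    pts_out, lab_out = [], []
    for xyz, lab, pose in zip(scans, labels, poses):
        homo = np.hstack([xyz, np.ones((len(xyz), 1))])           # (N, 4)
        in_radar = (radar_from_world @ pose @ homo.T).T[:, :3]    # radar frame
        keep = np.linalg.norm(in_radar[:, :2], axis=1) < horizon  # radar horizon
        pts_out.append(in_radar[keep])
        lab_out.append(lab[keep])
    return np.vstack(pts_out), np.concatenate(lab_out)
```

The accumulated, labelled points can then be rasterised onto the radar scan
grid to provide the dense supervision against which the losses are computed.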
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
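At a high level, such a renderer samples an implicit field along each radar ray
and composites the returns into a raw measurement. A heavily simplified,
hypothetical sketch of that pattern follows (not the Radar Fields
implementation; `field` and its outputs are assumptions):

```python
import torch

def render_range_profile(field, origin, direction, n_bins=256, bin_size=0.5):
    """Toy renderer: query an implicit field (an MLP returning occupancy
    alpha and reflectance rho per 3D point) along one radar ray, then
    composite a per-range-bin return power. Illustrative only.
    """
    r = (torch.arange(n_bins, dtype=torch.float32) + 0.5) * bin_size
    pts = origin + r[:, None] * direction             # (n_bins, 3) sample points
    alpha, rho = field(pts)                           # assumed field outputs
    trans = torch.cumprod(1.0 - alpha + 1e-6, dim=0)  # attenuation along the ray
    return trans * alpha * rho / (r ** 4 + 1e-6)      # radar-equation-style falloff
```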
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation [21.72892413572166]
We propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data.
Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA and RADIal datasets.
arXiv Detail & Related papers (2023-10-03T17:59:05Z)
- RadarLCD: Learnable Radar-based Loop Closure Detection Pipeline [4.09225917049674]
This research introduces RadarLCD, a novel supervised deep learning pipeline specifically designed for Loop Closure Detection.
RadarLCD makes a significant contribution by leveraging the pre-trained HERO (Hybrid Estimation Radar Odometry) model.
The methodology undergoes evaluation across a variety of FMCW Radar dataset scenes.
arXiv Detail & Related papers (2023-09-13T17:10:23Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
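The query mechanism can be pictured as polar sampling. A minimal sketch under
assumed shapes, a (C, range, azimuth) spectrum feature map and metric BEV query
positions; names are illustrative, not the EchoFusion API:

```python
import torch
import torch.nn.functional as F

def sample_spectrum_features(spectrum, query_xy, max_range):
    """Fetch radar spectrum features for BEV queries (illustrative sketch).
    spectrum: (C, R, A) range-azimuth feature map
    query_xy: (N, 2) BEV positions in metres
    """
    rng = torch.linalg.norm(query_xy, dim=1)            # range of each query
    az = torch.atan2(query_xy[:, 1], query_xy[:, 0])    # azimuth in [-pi, pi]
    # Normalise to [-1, 1] for grid_sample: x -> azimuth axis, y -> range axis.
    u = az / torch.pi
    v = 2.0 * rng / max_range - 1.0
    grid = torch.stack([u, v], dim=1).view(1, -1, 1, 2)  # (1, N, 1, 2)
    feats = F.grid_sample(spectrum.unsqueeze(0), grid, align_corners=False)
    return feats.squeeze(0).squeeze(-1).T                # (N, C) per-query features
```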
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
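One plausible reading of distance-dependent clustering, sketched with an
off-the-shelf DBSCAN as a stand-in (the radius parameters and range banding are
assumptions, not the paper's exact procedure):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def distance_dependent_clusters(points, eps0=0.5, eps_per_m=0.02, band=10.0):
    """Cluster radar detections with a neighbourhood radius that grows with
    range (illustrative pre-processing sketch). Detections far from the
    sensor are sparser, so eps scales with distance.

    points: (N, 2 or 3) array of detection positions in the sensor frame.
    """
    rng = np.linalg.norm(points[:, :2], axis=1)
    labels = np.full(len(points), -1)
    next_id = 0
    for lo in np.arange(0.0, rng.max() + band, band):  # process range bands
        mask = (rng >= lo) & (rng < lo + band)
        if mask.sum() < 2:
            continue
        eps = eps0 + eps_per_m * (lo + band / 2)       # radius grows with range
        ids = DBSCAN(eps=eps, min_samples=2).fit(points[mask]).labels_.copy()
        ids[ids >= 0] += next_id                       # keep cluster ids unique
        labels[mask] = ids
        next_id = labels.max() + 1
    return labels
```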
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- Deep Radar Inverse Sensor Models for Dynamic Occupancy Grid Maps [0.0]
We propose a deep learning-based Inverse Sensor Model (ISM) to learn the mapping from sparse radar detections to polar measurement grids.
Our approach is the first one to learn a single-frame measurement grid in the polar scheme from radars with a limited Field Of View.
This enables us to flexibly use one or more radar sensors without network retraining and without requiring 360° sensor coverage.
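The polar measurement-grid representation itself is easy to illustrate. A
hypothetical rasteriser for sparse detections from a limited-FOV radar follows;
the paper learns the grid with a network, whereas this sketch merely counts
hits per cell:

```python
import numpy as np

def detections_to_polar_grid(points, max_range=50.0, fov_deg=120.0,
                             n_range=128, n_azimuth=128):
    """Rasterise sparse radar detections into a polar evidence grid
    (illustrative stand-in for the measurement-grid representation).
    """
    grid = np.zeros((n_range, n_azimuth), dtype=np.float32)
    rng = np.linalg.norm(points[:, :2], axis=1)
    az = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    keep = (rng < max_range) & (np.abs(az) < fov_deg / 2)  # limited FOV sensor
    r_idx = (rng[keep] / max_range * n_range).astype(int)
    a_idx = ((az[keep] + fov_deg / 2) / fov_deg * n_azimuth).astype(int)
    np.add.at(grid, (r_idx, a_idx), 1.0)                   # accumulate hits per cell
    return grid
```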
arXiv Detail & Related papers (2023-05-21T09:09:23Z)
- Rethinking of Radar's Role: A Camera-Radar Dataset and Systematic Annotator via Coordinate Alignment [38.24705460170415]
We propose a new dataset, named CRUW, with a systematic annotator and performance evaluation system.
CRUW aims to classify and localize the objects in 3D purely from radar's radio frequency (RF) images.
To the best of our knowledge, CRUW is the first public large-scale dataset with a systematic annotation and evaluation system.
arXiv Detail & Related papers (2021-05-11T17:13:45Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
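The core building block is standard enough to sketch: a complex convolution
composed from real convolutions via (W_r + iW_i)(x_r + ix_i) =
(W_r x_r - W_i x_i) + i(W_r x_i + W_i x_r). A minimal PyTorch version, not the
paper's code:

```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Complex-valued convolution built from two real convolutions
    (illustrative sketch of the CVCNN building block)."""

    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)  # real weights
        self.conv_i = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)  # imag weights

    def forward(self, x_r, x_i):
        real = self.conv_r(x_r) - self.conv_i(x_i)
        imag = self.conv_r(x_i) + self.conv_i(x_r)
        return real, imag
```

Carrying the real and imaginary parts explicitly through every layer is what
lets phase information survive the denoising.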
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
- Radar Artifact Labeling Framework (RALF): Method for Plausible Radar Detections in Datasets [2.5899040911480187]
We propose a cross-sensor Radar Artifact Labeling Framework (RALF) for labeling sparse radar point clouds.
RALF provides plausibility labels for radar raw detections, distinguishing between artifacts and targets.
We validate the results by evaluating error metrics on a semi-manually labeled ground-truth dataset of $3.28\cdot10^6$ points.
arXiv Detail & Related papers (2020-12-03T15:11:31Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
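Early fusion of this kind reduces to rasterising both sensors onto a shared BEV
grid and concatenating channels before a shared backbone. A hypothetical sketch
(the attention-based late fusion of radar velocity evidence is omitted):

```python
import torch

def early_fuse_bev(lidar_bev, radar_bev, backbone):
    """Voxel/BEV early fusion sketch (illustrative, not the RadarNet code):
    both sensors are rasterised onto the same BEV grid, concatenated along
    the channel axis, and passed through a shared backbone.

    lidar_bev: (B, C_l, H, W) voxelised LiDAR features
    radar_bev: (B, C_r, H, W) rasterised radar features on the same grid
    """
    fused = torch.cat([lidar_bev, radar_bev], dim=1)  # (B, C_l + C_r, H, W)
    return backbone(fused)
```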
arXiv Detail & Related papers (2020-07-28T17:15:02Z)