Gaussian Radar Transformer for Semantic Segmentation in Noisy Radar Data
- URL: http://arxiv.org/abs/2212.03690v1
- Date: Wed, 7 Dec 2022 15:05:03 GMT
- Title: Gaussian Radar Transformer for Semantic Segmentation in Noisy Radar Data
- Authors: Matthias Zeller, Jens Behley, Michael Heidingsfeld, and Cyrill Stachniss
- Abstract summary: Scene understanding is crucial for autonomous robots in dynamic environments for making future state predictions, avoiding collisions, and path planning.
Camera and LiDAR perception has made tremendous progress in recent years but faces limitations under adverse weather conditions.
To leverage the full potential of multi-modal sensor suites, radar sensors are essential for safety-critical tasks and are already installed in most new vehicles today.
- Score: 33.457104508061015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scene understanding is crucial for autonomous robots in dynamic environments
for making future state predictions, avoiding collisions, and path planning.
Camera and LiDAR perception has made tremendous progress in recent years but
faces limitations under adverse weather conditions. To leverage the full
potential of multi-modal sensor suites, radar sensors are essential for
safety-critical tasks and are already installed in most new vehicles today. In
this paper, we address the problem of semantic segmentation of moving objects
in radar point clouds to enhance the perception of the environment with another
sensor modality. Instead of aggregating multiple scans to densify the point
clouds, we propose a novel approach based on the self-attention mechanism to
accurately perform sparse, single-scan segmentation. Our approach, called the
Gaussian Radar Transformer, includes the newly introduced Gaussian transformer
layer, which replaces the softmax normalization with a Gaussian function to
decouple the contributions of individual points. To tackle the challenge of
capturing long-range dependencies with transformers, we propose attentive up-
and downsampling modules that enlarge the receptive field and capture strong
spatial relations. We compare our approach to other state-of-the-art methods on
the RadarScenes dataset and show superior segmentation quality in diverse
environments, even without exploiting temporal information.
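To make the key mechanism concrete, here is a minimal sketch of self-attention with the softmax replaced by an unnormalized Gaussian weighting, in the spirit of the Gaussian transformer layer. The distance-based kernel, the bandwidth sigma, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def gaussian_attention(q, k, v, sigma=1.0):
    """Self-attention sketch with softmax replaced by a Gaussian (assumed form).

    q, k, v: (n_points, dim) tensors for a single sparse radar scan.
    """
    # Pairwise squared Euclidean distances between queries and keys, (n, n).
    d2 = torch.cdist(q, k) ** 2
    # Gaussian weights: each entry depends only on its own query-key distance,
    # so no normalizing denominator couples the points' contributions.
    w = torch.exp(-d2 / (2.0 * sigma ** 2))
    return w @ v  # weighted aggregation of values, (n, dim)

# Usage on random single-scan features.
feats = torch.randn(128, 64)
out = gaussian_attention(feats, feats, feats, sigma=2.0)
```

Under softmax, suppressing one point's score redistributes attention mass to all other points; the Gaussian weights avoid this coupling, which is the decoupling property the abstract refers to.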
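The attentive downsampling can likewise be sketched generically: subsample the point set and pool each kept point's local neighborhood with attention weights, enlarging the receptive field of later layers. The neighborhood size k, the naive subsampling, and the softmax-scored dot-product pooling below are assumptions for illustration and do not reproduce the paper's module.

```python
import torch

def attentive_downsample(xyz, feats, n_keep, k=8):
    """Generic attention-based pooling onto a subsampled point set (sketch).

    xyz: (n, 3) point coordinates; feats: (n, d) per-point features.
    Returns the kept coordinates and attention-pooled features.
    """
    centers = xyz[:n_keep]                    # naive subset; FPS would be typical
    d2 = torch.cdist(centers, xyz) ** 2       # (m, n) squared distances
    knn = d2.topk(k, largest=False).indices   # k nearest neighbors per center, (m, k)
    neigh = feats[knn]                        # neighbor features, (m, k, d)
    q = feats[:n_keep].unsqueeze(1)           # query feature per center, (m, 1, d)
    # Scaled dot-product scores over each local neighborhood.
    scores = (q * neigh).sum(-1) / feats.shape[-1] ** 0.5
    attn = torch.softmax(scores, dim=-1)      # (m, k)
    pooled = (attn.unsqueeze(-1) * neigh).sum(dim=1)  # (m, d)
    return centers, pooled

# Usage: halve a 128-point scan.
centers, pooled = attentive_downsample(torch.randn(128, 3), torch.randn(128, 64), n_keep=64)
```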
Related papers
- SparseRadNet: Sparse Perception Neural Network on Subsampled Radar Data [5.344444942640663]
Radar raw data often contains excessive noise, whereas radar point clouds retain only limited information.
We introduce an adaptive subsampling method together with a tailored network architecture that exploits the sparsity patterns.
Experiments on the RADIal dataset show that our SparseRadNet exceeds state-of-the-art (SOTA) performance in object detection and achieves close to SOTA accuracy in freespace segmentation.
arXiv Detail & Related papers (2024-06-15T11:26:10Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation [21.72892413572166]
We propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data.
Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA and RADIal datasets.
arXiv Detail & Related papers (2023-10-03T17:59:05Z)
- Radar Instance Transformer: Reliable Moving Instance Segmentation in Sparse Radar Point Clouds [24.78323023852578]
LiDARs and cameras enhance scene interpretation but do not provide direct motion information and face limitations under adverse weather.
Radar sensors overcome these limitations and provide Doppler velocities, delivering direct information on dynamic objects.
Our Radar Instance Transformer enriches the current radar scan with temporal information without passing aggregated scans through a neural network.
arXiv Detail & Related papers (2023-09-28T13:37:30Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning-based method that applies convolutions to point clouds of radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
- Multi-View Radar Semantic Segmentation [3.2093811507874768]
Automotive radars are low-cost active sensors that measure properties of surrounding objects.
They are seldom used for scene understanding due to the size and complexity of radar raw data.
We propose several novel architectures, and their associated losses, which analyse multiple "views" of the range-angle-Doppler radar tensor to segment it semantically.
arXiv Detail & Related papers (2021-03-30T09:56:41Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
- RSS-Net: Weakly-Supervised Multi-Class Semantic Segmentation with FMCW Radar [26.56755178602111]
We advocate radar over the traditional sensors used for this task as it operates at longer ranges and is substantially more robust to adverse weather and illumination conditions.
We exploit the largest radar-focused urban autonomy dataset collected to date, correlating radar scans with RGB cameras and LiDAR sensors.
We present the network with multi-channel radar scan inputs in order to deal with ephemeral and dynamic scene objects.
arXiv Detail & Related papers (2020-04-02T11:40:26Z)