Multi-View Radar Semantic Segmentation
- URL: http://arxiv.org/abs/2103.16214v1
- Date: Tue, 30 Mar 2021 09:56:41 GMT
- Title: Multi-View Radar Semantic Segmentation
- Authors: Arthur Ouaknine, Alasdair Newson, Patrick Pérez, Florence Tupin,
Julien Rebut
- Abstract summary: Automotive radars are low-cost active sensors that measure properties of surrounding objects.
They are seldom used for scene understanding due to the size and complexity of radar raw data.
We propose several novel architectures, and their associated losses, which analyse multiple "views" of the range-angle-Doppler radar tensor to segment it semantically.
- Score: 3.2093811507874768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the scene around the ego-vehicle is key to assisted and
autonomous driving. Nowadays, this is mostly conducted using cameras and laser
scanners, despite their reduced performance in adverse weather conditions.
Automotive radars are low-cost active sensors that measure properties of
surrounding objects, including their relative speed, and have the key advantage
of not being impacted by rain, snow or fog. However, they are seldom used for
scene understanding due to the size and complexity of radar raw data and the
lack of annotated datasets. Fortunately, recent open-sourced datasets have
opened up research on classification, object detection and semantic
segmentation with raw radar signals using end-to-end trainable models. In this
work, we propose several novel architectures, and their associated losses,
which analyse multiple "views" of the range-angle-Doppler radar tensor to
segment it semantically. Experiments conducted on the recent CARRADA dataset
demonstrate that our best model outperforms alternative models, derived either
from the semantic segmentation of natural images or from radar scene
understanding, while requiring significantly fewer parameters. Both our code
and trained models will be released.
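As a rough illustration of what "multiple views" of the range-angle-Doppler (RAD) tensor means, the 3D tensor can be summarized into three 2D views by aggregating along each remaining axis; multi-view architectures then feed each view to its own encoder branch before fusing features. This is only a sketch, not the paper's architecture: the tensor shape and the max-pooling aggregation are assumptions.

```python
import numpy as np

# Hypothetical RAD tensor with (range, angle, Doppler) bins.
rng = np.random.default_rng(0)
rad = rng.random((256, 64, 32))

# Three 2D views, obtained by collapsing one axis with a max reduction
# (an illustrative choice of aggregation, not the paper's exact pipeline).
ra_view = rad.max(axis=2)  # range-angle view   -> shape (256, 64)
rd_view = rad.max(axis=1)  # range-Doppler view -> shape (256, 32)
ad_view = rad.max(axis=0)  # angle-Doppler view -> shape (64, 32)

print(ra_view.shape, rd_view.shape, ad_view.shape)
```

Each view would then be consumed by a separate encoder, with a shared decoder producing the semantic masks.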
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation [21.72892413572166]
We propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data.
Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA and RADIal datasets.
arXiv Detail & Related papers (2023-10-03T17:59:05Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- CarPatch: A Synthetic Benchmark for Radiance Field Evaluation on Vehicle Components [77.33782775860028]
We introduce CarPatch, a novel synthetic benchmark of vehicles.
In addition to a set of images annotated with their intrinsic and extrinsic camera parameters, the corresponding depth maps and semantic segmentation masks have been generated for each view.
Global and part-based metrics have been defined and used to evaluate, compare, and better characterize some state-of-the-art techniques.
arXiv Detail & Related papers (2023-07-24T11:59:07Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method that applies convolutions to radar detection point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model [13.214257841152033]
Radar-centric datasets receive little attention in the development of deep learning techniques for radar perception.
We propose a transformer-based model, named RadarFormer, that leverages state-of-the-art developments in vision deep learning.
Our model also introduces a channel-chirp-time merging module that reduces the size and complexity of our models by more than 10 times without compromising accuracy.
arXiv Detail & Related papers (2023-04-17T17:07:35Z)
- A recurrent CNN for online object detection on raw radar frames [7.074916574419171]
This work presents a new recurrent CNN architecture for online radar object detection.
We propose an end-to-end trainable architecture mixing convolutions and ConvLSTMs to learn dependencies between successive frames.
Our model is causal and requires only the past information encoded in the memory of the ConvLSTMs to detect objects.
arXiv Detail & Related papers (2022-12-21T16:36:36Z)
- Deep Instance Segmentation with High-Resolution Automotive Radar [2.167586397005864]
We propose two efficient methods for instance segmentation with radar detection points.
One is implemented in an end-to-end deep learning driven fashion using the PointNet++ framework.
The other is based on clustering of the radar detection points with semantic information.
arXiv Detail & Related papers (2021-10-05T01:18:27Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
- CARRADA Dataset: Camera and Automotive Radar with Range-Angle-Doppler Annotations [0.0]
We introduce CARRADA, a dataset of synchronized camera and radar recordings with range-angle-Doppler annotations.
We also present a semi-automatic annotation approach, which was used to annotate the dataset, and a radar semantic segmentation baseline.
arXiv Detail & Related papers (2020-05-04T13:14:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.