ERASE-Net: Efficient Segmentation Networks for Automotive Radar Signals
- URL: http://arxiv.org/abs/2209.12940v1
- Date: Mon, 26 Sep 2022 18:23:22 GMT
- Title: ERASE-Net: Efficient Segmentation Networks for Automotive Radar Signals
- Authors: Shihong Fang, Haoran Zhu, Devansh Bisla, Anna Choromanska, Satish Ravindran, Dongyin Ren, Ryan Wu
- Abstract summary: We introduce ERASE-Net, an Efficient RAdar SEgmentation Network to segment raw radar signals semantically.
We show that our method achieves superior performance on the radar semantic segmentation task compared to the state-of-the-art (SOTA) technique.
- Score: 13.035425992944543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Among various sensors for assisted and autonomous driving systems, automotive
radar has been considered as a robust and low-cost solution even in adverse
weather or lighting conditions. With the recent development of radar
technologies and open-sourced annotated data sets, semantic segmentation with
radar signals has become very promising. However, existing methods are either
computationally expensive or discard significant amounts of valuable
information from raw 3D radar signals by reducing them to 2D planes via
averaging. In this work, we introduce ERASE-Net, an Efficient RAdar
SEgmentation Network to segment the raw radar signals semantically. The core of
our approach is the novel detect-then-segment method for raw radar signals. It
first detects the center point of each object, then extracts a compact radar
signal representation, and finally performs semantic segmentation. We show that
our method achieves superior performance on the radar semantic segmentation task
compared to the state-of-the-art (SOTA) technique. Furthermore, our approach
requires up to 20x less computational resources. Finally, we show that the
proposed ERASE-Net can be compressed by 40% without significant loss in
performance, significantly more than the SOTA network, which makes it a more
promising candidate for practical automotive applications.
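The abstract's detect-then-segment pipeline (detect each object's center point, extract a compact signal representation around it, then segment) can be illustrated with a minimal toy sketch. This is not the paper's implementation: the threshold-based local-maximum detector, the patch size, and the threshold "segmentation head" are all simplified stand-ins for the learned components, operating on a synthetic 2D power map instead of raw 3D radar tensors.

```python
import numpy as np

def detect_centers(power_map, threshold=0.5):
    """Toy center detection: local maxima above a power threshold."""
    centers = []
    H, W = power_map.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = power_map[i - 1:i + 2, j - 1:j + 2]
            if power_map[i, j] >= threshold and power_map[i, j] == patch.max():
                centers.append((i, j))
    return centers

def extract_patch(power_map, center, size=3):
    """Crop a compact window around a detected center."""
    i, j = center
    r = size // 2
    return power_map[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]

def segment_patch(patch, level=0.3):
    """Stand-in for the learned per-object segmentation head: a threshold mask."""
    return (patch > level).astype(np.uint8)

# Synthetic range-azimuth power map with one bright reflector at (2, 2).
rng = np.zeros((5, 5))
rng[2, 2] = 1.0
rng[2, 1] = rng[1, 2] = 0.4

centers = detect_centers(rng)
masks = [segment_patch(extract_patch(rng, c)) for c in centers]
print(centers)          # [(2, 2)]
print(masks[0].sum())   # 3 cells above the segmentation level
```

Because segmentation runs only on the small per-object patches rather than the full tensor, this structure hints at why the detect-then-segment design can save computation relative to dense whole-scene segmentation.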
Related papers
- Multi-stage Learning for Radar Pulse Activity Segmentation [51.781832424705094]
Radio signal recognition is a crucial function in electronic warfare.
Precise identification and localisation of radar pulse activities are required by electronic warfare systems.
Deep learning-based radar pulse activity recognition methods have remained largely underexplored.
arXiv Detail & Related papers (2023-12-15T01:56:27Z)
- TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation [21.72892413572166]
We propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data.
Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA and RADIal datasets.
arXiv Detail & Related papers (2023-10-03T17:59:05Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- ADCNet: Learning from Raw Radar Data via Distillation [3.519713957675842]
Radar-based systems are lower cost and more robust to adverse weather conditions than their LiDAR-based counterparts.
Recent research has focused on consuming the raw radar data, instead of the final radar point cloud.
We show that by bringing elements of the signal processing pipeline into our network and then pre-training on the signal processing task, we are able to achieve state of the art detection performance.
arXiv Detail & Related papers (2023-03-21T13:31:15Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- Deep Instance Segmentation with High-Resolution Automotive Radar [2.167586397005864]
We propose two efficient methods for instance segmentation with radar detection points.
One is implemented in an end-to-end, deep-learning-driven fashion using the PointNet++ framework.
The other is based on clustering of the radar detection points with semantic information.
arXiv Detail & Related papers (2021-10-05T01:18:27Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
- RSS-Net: Weakly-Supervised Multi-Class Semantic Segmentation with FMCW Radar [26.56755178602111]
We advocate radar over the traditional sensors used for this task as it operates at longer ranges and is substantially more robust to adverse weather and illumination conditions.
We exploit the largest radar-focused urban autonomy dataset collected to date, correlating radar scans with RGB cameras and LiDAR sensors.
We present the network with multi-channel radar scan inputs in order to deal with ephemeral and dynamic scene objects.
arXiv Detail & Related papers (2020-04-02T11:40:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.