Complex-valued Convolutional Neural Networks for Enhanced Radar Signal
Denoising and Interference Mitigation
- URL: http://arxiv.org/abs/2105.00929v1
- Date: Thu, 29 Apr 2021 10:06:29 GMT
- Title: Complex-valued Convolutional Neural Networks for Enhanced Radar Signal
Denoising and Interference Mitigation
- Authors: Alexander Fuchs, Johanna Rock, Mate Toth, Paul Meissner, Franz
Pernkopf
- Abstract summary: We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
- Score: 73.0103413636673
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Autonomous driving highly depends on capable sensors to perceive the
environment and to deliver reliable information to the vehicles' control
systems. To increase its robustness, a diversified set of sensors is used,
including radar sensors. Radar is a vital contribution of sensory information,
providing high resolution range as well as velocity measurements. The increased
use of radar sensors in road traffic introduces new challenges. As the so far
unregulated frequency band becomes increasingly crowded, radar sensors suffer
from mutual interference between multiple radar sensors. This interference must
be mitigated in order to ensure a high and consistent detection sensitivity. In
this paper, we propose the use of Complex-Valued Convolutional Neural Networks
(CVCNNs) to address the issue of mutual interference between radar sensors. We
extend previously developed methods to the complex domain in order to process
radar data according to its physical characteristics. This not only increases
data efficiency, but also improves the conservation of phase information during
filtering, which is crucial for further processing, such as angle estimation.
Our experiments show that the use of CVCNNs increases data efficiency, speeds
up network training and substantially improves the conservation of phase
information during interference removal.
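The abstract's core point is that processing complex-valued radar data with complex-valued filters, rather than treating real and imaginary parts as independent real channels, preserves phase information. The identity behind a complex convolution layer is that for input z = x + iy and kernel w = a + ib, z * w = (x*a - y*b) + i(x*b + y*a), i.e. four real convolutions combined. The following is a minimal illustrative sketch of this building block in NumPy; it is not the authors' implementation, and the function name and signal are hypothetical.

```python
import numpy as np

def complex_conv1d(z, w):
    """1-D complex-valued convolution built from four real convolutions.

    For z = x + iy and kernel w = a + ib:
        z * w = (x*a - y*b) + i*(x*b + y*a)
    Coupling the real and imaginary parts this way preserves phase
    relationships that two independent real-valued channels would not.
    """
    x, y = z.real, z.imag
    a, b = w.real, w.imag
    real = np.convolve(x, a, mode="valid") - np.convolve(y, b, mode="valid")
    imag = np.convolve(x, b, mode="valid") + np.convolve(y, a, mode="valid")
    return real + 1j * imag

# Sanity check: a single-tap pure phase-rotation kernel e^{i*phi} should
# rotate the signal's phase without altering its magnitude.
signal = np.exp(1j * 2 * np.pi * 0.1 * np.arange(8))  # complex sinusoid
kernel = np.array([np.exp(1j * np.pi / 4)])           # one-tap phase shift
out = complex_conv1d(signal, kernel)
```

In a trained CVCNN the kernels are learned, but the same property holds: magnitude and phase are transformed jointly, which is what makes the filtered output usable for downstream angle estimation.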
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Exploring Radar Data Representations in Autonomous Driving: A Comprehensive Review [9.68427762815025]
This review explores the different radar data representations utilized in autonomous driving systems.
We introduce the capabilities and limitations of the radar sensor.
For each radar representation, we examine the related datasets, methods, advantages and limitations.
arXiv Detail & Related papers (2023-12-08T06:31:19Z)
- TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation [21.72892413572166]
We propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data.
Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA and RADIal datasets.
arXiv Detail & Related papers (2023-10-03T17:59:05Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- Coherent, super resolved radar beamforming using self-supervised learning [0.0]
Radar signal Reconstruction using Self Supervision (R2-S2) significantly improves the angular resolution of a given radar array without increasing the number of physical channels.
R2-S2 is a family of algorithms that use a Deep Neural Network (DNN) with complex range-Doppler radar data as input, trained in a self-supervised manner.
A 4x improvement in angular resolution was demonstrated using a real-world dataset collected in urban and highway environments.
arXiv Detail & Related papers (2021-06-21T16:59:55Z)
- Quantized Neural Networks for Radar Interference Mitigation [14.540226579203207]
CNN-based approaches for denoising and interference mitigation yield promising results for radar processing.
We investigate quantization techniques for CNN-based denoising and interference mitigation of radar signals.
arXiv Detail & Related papers (2020-11-25T13:18:06Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.