Coherent, super resolved radar beamforming using self-supervised learning
- URL: http://arxiv.org/abs/2106.13085v1
- Date: Mon, 21 Jun 2021 16:59:55 GMT
- Title: Coherent, super resolved radar beamforming using self-supervised learning
- Authors: Itai Orr, Moshik Cohen, Harel Damari, Meir Halachmi, Zeev Zalevsky
- Abstract summary: Radar signal Reconstruction using Self Supervision (R2-S2) significantly improves the angular resolution of a given radar array without increasing the number of physical channels.
R2-S2 is a family of algorithms which use a Deep Neural Network (DNN) that takes complex range-Doppler radar data as input and is trained in a self-supervised manner.
A 4x improvement in angular resolution was demonstrated using a real-world dataset collected in urban and highway environments.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: High resolution automotive radar sensors are required to meet the
demanding needs and regulations of autonomous vehicles. However, current radar
systems are limited in their angular resolution, causing a technological gap.
The industry and academic trend of improving angular resolution by increasing
the number of physical channels also increases system complexity, requires
sensitive calibration processes, lowers robustness to hardware malfunctions,
and drives higher costs. We offer an alternative approach, named Radar signal
Reconstruction using Self Supervision (R2-S2), which significantly improves the
angular resolution of a given radar array without increasing the number of
physical channels. R2-S2 is a family of algorithms which use a Deep Neural
Network (DNN) that takes complex range-Doppler radar data as input and is
trained in a self-supervised manner using a loss function which operates in
multiple data representation spaces. A 4x improvement in angular resolution was
demonstrated using a real-world dataset collected in urban and highway
environments during clear and rainy weather conditions.
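The abstract's idea of a loss operating in multiple data representation spaces can be illustrated with a minimal sketch. This is a hypothetical example, not the authors' implementation: it compares a reconstructed array signal with a reference both in the raw complex channel domain and in the angular (beamformed) domain, where beamforming for a uniform linear array reduces to an FFT across channels. The function name `multi_space_loss` is an assumption for illustration.

```python
import numpy as np

def multi_space_loss(pred, ref):
    """pred, ref: complex arrays of shape (channels,) for one range-Doppler bin.

    Combines two loss terms, one per representation space:
    1) mean squared error on the complex channel samples, and
    2) mean squared error on the angular spectra (FFT across the array).
    """
    channel_loss = np.mean(np.abs(pred - ref) ** 2)
    angle_loss = np.mean(np.abs(np.fft.fft(pred) - np.fft.fft(ref)) ** 2)
    return channel_loss + angle_loss

# Toy data: a reference channel vector and a noisy reconstruction of it.
rng = np.random.default_rng(0)
ref = rng.standard_normal(8) + 1j * rng.standard_normal(8)
noisy = ref + 0.1 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))

assert multi_space_loss(ref, ref) == 0.0   # perfect reconstruction
assert multi_space_loss(noisy, ref) > 0.0  # any error is penalized
```

In a real training loop such a loss would be back-propagated through a DNN; the sketch only shows how penalizing errors in more than one representation space is expressed.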
Related papers
- Redefining Automotive Radar Imaging: A Domain-Informed 1D Deep Learning Approach for High-Resolution and Efficient Performance [6.784861785632841]
Our study redefines radar imaging super-resolution as a one-dimensional (1D) signal super-resolution spectra estimation problem.
Our tailored deep learning network for automotive radar imaging exhibits remarkable scalability, parameter efficiency and fast inference speed.
Our SR-SPECNet sets a new benchmark in producing high-resolution radar range-azimuth images.
arXiv Detail & Related papers (2024-06-11T16:07:08Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model [13.214257841152033]
Radar-centric datasets receive comparatively little attention in the development of deep learning techniques for radar perception.
We propose a transformer-based model, named RadarFormer, that utilizes state-of-the-art developments in vision deep learning.
Our model also introduces a channel-chirp-time merging module that reduces the size and complexity of our models by more than 10 times without compromising accuracy.
arXiv Detail & Related papers (2023-04-17T17:07:35Z)
- ADCNet: Learning from Raw Radar Data via Distillation [3.519713957675842]
Radar-based systems are lower cost and more robust to adverse weather conditions than their LiDAR-based counterparts.
Recent research has focused on consuming the raw radar data, instead of the final radar point cloud.
We show that by bringing elements of the signal processing pipeline into our network and then pre-training on the signal processing task, we are able to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2023-03-21T13:31:15Z)
- Automotive RADAR sub-sampling via object detection networks: Leveraging prior signal information [18.462990836437626]
Automotive radar has increasingly attracted attention due to growing interest in autonomous driving technologies.
We present a novel adaptive radar sub-sampling algorithm designed to identify regions that require more detailed/accurate reconstruction based on prior knowledge of environmental conditions.
arXiv Detail & Related papers (2023-02-21T05:32:28Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
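Several of the papers above operate on complex-valued radar data, either as range-Doppler maps or through complex-valued network layers. The building block behind complex-valued CNNs can be sketched with real arithmetic alone, using the identity (a + ib)(c + id) = (ac - bd) + i(ad + bc). The function name `complex_conv1d` is hypothetical and for illustration only; it is not drawn from any of the listed papers.

```python
import numpy as np

def complex_conv1d(x, w):
    """Valid-mode complex 1D convolution built from four real convolutions,
    following (a + ib)(c + id) = (ac - bd) + i(ad + bc)."""
    real = (np.convolve(x.real, w.real, mode="valid")
            - np.convolve(x.imag, w.imag, mode="valid"))
    imag = (np.convolve(x.real, w.imag, mode="valid")
            + np.convolve(x.imag, w.real, mode="valid"))
    return real + 1j * imag

x = np.array([1 + 1j, 2 - 1j, 0 + 3j])
w = np.array([1 - 2j])

# With a length-1 kernel, the convolution is elementwise complex multiplication.
assert np.allclose(complex_conv1d(x, w), x * w)
```

Decomposing the complex product this way is how complex layers are commonly realized on top of real-valued deep learning primitives, which preserves phase information that a magnitude-only pipeline would discard.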
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.