Super-Resolution Radar Imaging with Sparse Arrays Using a Deep Neural Network Trained with Enhanced Virtual Data
- URL: http://arxiv.org/abs/2306.09839v1
- Date: Fri, 16 Jun 2023 13:37:47 GMT
- Title: Super-Resolution Radar Imaging with Sparse Arrays Using a Deep Neural Network Trained with Enhanced Virtual Data
- Authors: Christian Schuessler, Marcel Hoffmann, Martin Vossiek
- Abstract summary: This paper introduces a method based on a deep neural network (DNN) for processing radar data from extremely thinned radar apertures.
The proposed DNN processing can provide both aliasing-free radar imaging and super-resolution.
It simultaneously delivers nearly the same resolution and image quality as would be achieved with a fully occupied array.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a method based on a deep neural network (DNN)
capable of processing radar data from extremely thinned radar apertures. The
proposed DNN processing provides both aliasing-free radar
imaging and super-resolution. The results are validated by measuring the
detection performance on realistic simulation data and by evaluating the
point spread function (PSF) and the target-separation performance on measured
point-like targets. Also, a qualitative evaluation of a typical automotive
scene is conducted. It is shown that this approach can outperform
state-of-the-art subspace algorithms and also other existing machine learning
solutions. The presented results suggest that machine learning approaches
trained with sufficiently sophisticated virtual input data are a very promising
alternative to compressed sensing and subspace approaches in radar signal
processing. The key to this performance is that the DNN is trained using
realistic simulation data that faithfully mimic a given sparse antenna radar
array hardware as its input. As ground truth, ultra-high resolution data from
an enhanced virtual radar are simulated. Contrary to other work, the DNN
utilizes the complete radar cube and not only the antenna channel information
at certain range-Doppler detections. After training, the proposed DNN is
capable of sidelobe- and ambiguity-free imaging. It simultaneously delivers
nearly the same resolution and image quality as would be achieved with a fully
occupied array.
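As a brief illustration of why thinned apertures call for such processing (this is not the paper's code, and the sparse element layout below is a hypothetical example), the following numpy sketch compares the angular point spread function of a fully occupied uniform linear array with a thinned one covering the same aperture. The thinned layout keeps nearly the same mainlobe width but exhibits strongly raised sidelobes, which is precisely the aliasing the DNN is trained to suppress:

```python
import numpy as np

# Illustrative sketch, not the paper's code: compare the angular point spread
# function (PSF) of a fully occupied uniform linear array with a thinned one.
# Element positions are in units of half-wavelength spacing; the sparse layout
# below is a hypothetical example.
full_positions = np.arange(16)                          # fully occupied array
sparse_positions = np.array([0, 1, 4, 7, 9, 12, 15])    # thinned aperture

def psf(positions, n_angles=512):
    """Normalized beampattern magnitude over u = sin(angle)."""
    u = np.linspace(-1.0, 1.0, n_angles)
    pattern = np.exp(1j * np.pi * np.outer(u, positions)).sum(axis=1)
    return np.abs(pattern) / len(positions)             # peak normalized to 1

def peak_sidelobe(pattern, mainlobe_halfwidth=40):
    """Highest PSF value outside the mainlobe region around broadside."""
    n = len(pattern)
    mask = np.ones(n, dtype=bool)
    mask[n // 2 - mainlobe_halfwidth : n // 2 + mainlobe_halfwidth] = False
    return pattern[mask].max()

full_psf = psf(full_positions)
sparse_psf = psf(sparse_positions)
print(f"full-array peak sidelobe:   {peak_sidelobe(full_psf):.2f}")
print(f"sparse-array peak sidelobe: {peak_sidelobe(sparse_psf):.2f}")
```

Because both arrays span the same overall aperture, the resolution (mainlobe width) is comparable; only the sidelobe and ambiguity behavior differs, matching the abstract's claim that the DNN can recover nearly full-array image quality from the sparse input.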
Related papers
- Redefining Automotive Radar Imaging: A Domain-Informed 1D Deep Learning Approach for High-Resolution and Efficient Performance
Our study redefines radar imaging super-resolution as a one-dimensional (1D) signal super-resolution spectra estimation problem.
Our tailored deep learning network for automotive radar imaging exhibits remarkable scalability, parameter efficiency and fast inference speed.
Our SR-SPECNet sets a new benchmark in producing high-resolution radar range-azimuth images.
arXiv Detail & Related papers (2024-06-11T16:07:08Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Diffusion Models for Interferometric Satellite Aperture Radar
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- DeepHybrid: Deep Learning on Automotive Radar Spectra and Reflections for Object Classification
We propose a method that combines classical radar signal processing and Deep Learning algorithms.
The proposed method can be used for example to improve automatic emergency braking or collision avoidance systems.
arXiv Detail & Related papers (2022-02-17T08:45:11Z)
- Toward Data-Driven STAP Radar
We characterize our data-driven approach to space-time adaptive processing (STAP) radar.
We generate a rich example dataset of received radar signals by randomly placing targets of variable strengths in a predetermined region.
For each data sample within this region, we generate heatmap tensors in range, azimuth, and elevation of the output power of a beamformer.
In an airborne scenario, the moving radar creates a sequence of these time-indexed image stacks, resembling a video.
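The heatmap tensors described above are built from a beamformer's output power over angle bins. A minimal sketch of one such azimuth cut (a conventional delay-and-sum beamformer on a hypothetical half-wavelength uniform linear array with two simulated point targets; this is not the authors' STAP pipeline) could look like:

```python
import numpy as np

# Minimal sketch (hypothetical array and targets, not the paper's STAP code):
# one azimuth cut of a beamformer output-power heatmap for two point targets.
rng = np.random.default_rng(0)
n_elem, n_snap = 8, 200
target_deg = np.array([-20.0, 25.0])                 # hypothetical target azimuths

def steering(theta_rad):
    """Steering vector for a half-wavelength-spaced uniform linear array."""
    return np.exp(1j * np.pi * np.arange(n_elem) * np.sin(theta_rad))

# Simulated snapshots: independent complex target signals plus receiver noise.
A = np.stack([steering(t) for t in np.deg2rad(target_deg)], axis=1)
signals = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((n_elem, n_snap))
               + 1j * rng.standard_normal((n_elem, n_snap)))
x = A @ signals + noise

# Sample covariance and conventional (delay-and-sum) beamformer output power.
R = x @ x.conj().T / n_snap
grid = np.deg2rad(np.linspace(-60.0, 60.0, 241))     # 0.5-degree azimuth bins
power = np.array([np.real(steering(t).conj() @ R @ steering(t)) for t in grid])

# Local maxima well above the sidelobe floor give coarse angle estimates.
is_peak = (power[1:-1] > power[:-2]) & (power[1:-1] > power[2:])
strong = power[1:-1] > 0.3 * power.max()
peak_angles = np.sort(np.rad2deg(grid[1:-1][is_peak & strong]))
print(peak_angles)
```

Stacking such cuts over range and elevation bins yields exactly the kind of per-sample heatmap tensor the entry describes, and a moving platform then produces the video-like sequence of image stacks.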
arXiv Detail & Related papers (2022-01-26T02:28:13Z)
- There and Back Again: Learning to Simulate Radar Data for Real-World Applications
We learn a radar sensor model capable of synthesising faithful radar observations based on simulated elevation maps.
We adopt an adversarial approach to learning a forward sensor model from unaligned radar examples.
We demonstrate the efficacy of our approach by evaluating a down-stream segmentation model trained purely on simulated data in a real-world deployment.
arXiv Detail & Related papers (2020-11-29T15:49:23Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.