Radar Image Reconstruction from Raw ADC Data using Parametric
Variational Autoencoder with Domain Adaptation
- URL: http://arxiv.org/abs/2207.06379v1
- Date: Mon, 30 May 2022 16:17:36 GMT
- Title: Radar Image Reconstruction from Raw ADC Data using Parametric
Variational Autoencoder with Domain Adaptation
- Authors: Michael Stephan (1 and 2), Thomas Stadelmayer (1 and 2), Avik Santra
(2), Georg Fischer (1), Robert Weigel (1), Fabian Lurz (1) ((1)
Friedrich-Alexander-University Erlangen-Nuremberg, (2) Infineon Technologies
AG)
- Abstract summary: We propose a parametrically constrained variational autoencoder, capable of generating the clustered and localized target detections on the range-angle image.
To circumvent the problem of training the proposed neural network on all possible scenarios using real radar data, we propose domain adaptation strategies.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a parametric variational autoencoder-based human target
detection and localization framework working directly with the raw
analog-to-digital converter data from the frequency modulated continuous wave
radar. We propose a parametrically constrained variational autoencoder, with
residual and skip connections, capable of generating the clustered and
localized target detections on the range-angle image. Furthermore, to
circumvent the problem of training the proposed neural network on all possible
scenarios using real radar data, we propose domain adaptation strategies
whereby we first train the neural network using ray tracing based model data
and then adapt the network to work on real sensor data. This strategy ensures
better generalization and scalability of the proposed neural network even
though it is trained with limited radar data. We demonstrate the superior
detection and localization performance of our proposed solution compared to the
conventional signal processing pipeline and an earlier state-of-the-art deep
U-Net architecture with range-Doppler images as inputs.
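The two-stage domain adaptation strategy (pretrain on ray-tracing model data, then adapt to real sensor data) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: a tiny tied-weight linear autoencoder trained with NumPy stands in for the parametric VAE, and the toy `make_data` generator stands in for the synthetic and real radar domains.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared 2-D "target signal subspace"; the synthetic (ray-traced) and
# real domains differ only in noise level -- a rough analogue of the
# model-to-sensor domain gap described in the abstract.
d, k = 16, 2
basis = rng.standard_normal((d, k))
basis /= np.linalg.norm(basis, axis=0, keepdims=True)

def make_data(n, noise):
    """Toy stand-in for radar frames: low-rank signal plus noise."""
    codes = rng.standard_normal((n, k))
    return codes @ basis.T + noise * rng.standard_normal((n, d))

def train_autoencoder(X, W=None, lr=0.05, steps=300):
    """Tied-weight linear autoencoder x -> x W W^T, trained by gradient
    descent on mean squared reconstruction error. A deliberately tiny
    stand-in for the parametric VAE in the paper."""
    n = X.shape[0]
    if W is None:
        W = 0.1 * rng.standard_normal((d, k))
    for _ in range(steps):
        R = X @ W @ W.T - X                        # reconstruction residual
        G = 2.0 * (X.T @ R @ W + R.T @ X @ W) / n  # gradient of MSE w.r.t. W
        W = W - lr * G
    loss = float(np.mean((X @ W @ W.T - X) ** 2))
    return W, loss

# Phase 1: pretrain on plentiful synthetic (ray-tracing-style) data.
X_syn = make_data(500, noise=0.05)
W, syn_loss = train_autoencoder(X_syn)

# Phase 2: adapt to scarce "real sensor" data, starting from the
# pretrained weights with a smaller step size rather than from scratch.
X_real = make_data(40, noise=0.2)
W, adapted_loss = train_autoencoder(X_real, W=W, lr=0.01, steps=100)
```

Because the adaptation phase starts from the pretrained weights, it needs only a small real dataset and few steps, which mirrors the scalability argument the abstract makes for the domain adaptation strategy.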
Related papers
- Physical-Layer Semantic-Aware Network for Zero-Shot Wireless Sensing [74.12670841657038]
Device-free wireless sensing has recently attracted significant interest due to its potential to support a wide range of immersive human-machine interactive applications.
Data heterogeneity in wireless signals and data privacy regulation of distributed sensing are considered the major challenges that hinder wide application of wireless sensing in large-area networking systems.
We propose a novel zero-shot wireless sensing solution that allows models constructed in one or a limited number of locations to be directly transferred to other locations without any labeled data.
arXiv Detail & Related papers (2023-12-08T13:50:30Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- RadarGNN: Transformation Invariant Graph Neural Network for Radar-based Perception [0.0]
A novel graph neural network is proposed that does not just use the information of the points themselves but also the relationships between them.
The model is designed to consider both point features and point-pair features, embedded in the edges of the graph.
The RadarGNN model outperforms all previous methods on the RadarScenes dataset.
arXiv Detail & Related papers (2023-04-13T13:57:21Z)
- Subspace Perturbation Analysis for Data-Driven Radar Target Localization [20.34399283905663]
We use subspace analysis to benchmark radar target localization accuracy across mismatched scenarios.
We generate comprehensive datasets by randomly placing targets of variable strengths in mismatched constrained areas.
We estimate target locations from these heatmap tensors using a convolutional neural network.
arXiv Detail & Related papers (2023-03-14T21:22:26Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Data-Driven Target Localization Using Adaptive Radar Processing and Convolutional Neural Networks [18.50309014013637]
This paper presents a data-driven approach to improve radar target localization after adaptive radar detection.
We produce heatmap tensors from the radar returns, in range, azimuth [and Doppler], of the normalized adaptive matched filter (NAMF) test statistic.
We then train a regression convolutional neural network (CNN) to estimate target locations from these heatmap tensors.
arXiv Detail & Related papers (2022-09-07T02:23:40Z)
- Toward Data-Driven STAP Radar [23.333816677794115]
We characterize our data-driven approach to space-time adaptive processing (STAP) radar.
We generate a rich example dataset of received radar signals by randomly placing targets of variable strengths in a predetermined region.
For each data sample within this region, we generate heatmap tensors in range, azimuth, and elevation of the output power of a beamformer.
In an airborne scenario, the moving radar creates a sequence of these time-indexed image stacks, resembling a video.
arXiv Detail & Related papers (2022-01-26T02:28:13Z)
- Self-supervised Audiovisual Representation Learning for Remote Sensing Data [96.23611272637943]
We propose a self-supervised approach for pre-training deep neural networks in remote sensing.
This is done in a completely label-free manner by exploiting the correspondence between geo-tagged audio recordings and remote sensing imagery.
We show that our approach outperforms existing pre-training strategies for remote sensing imagery.
arXiv Detail & Related papers (2021-08-02T07:50:50Z)
- Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
arXiv Detail & Related papers (2020-11-01T19:24:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.