Radon Implicit Field Transform (RIFT): Learning Scenes from Radar Signals
- URL: http://arxiv.org/abs/2410.19801v2
- Date: Sun, 08 Dec 2024 20:46:17 GMT
- Title: Radon Implicit Field Transform (RIFT): Learning Scenes from Radar Signals
- Authors: Daqian Bao, Alex Saad-Falcon, Justin Romberg
- Abstract summary: Implicit Neural Representations (INRs) offer compact and continuous representations with minimal radar data.
RIFT consists of two components: a classical forward model for radar and an INR-based scene representation.
With only a 10% data footprint, our RIFT model achieves up to a 188% improvement in scene reconstruction.
- Score: 9.170594803531866
- License:
- Abstract: Data acquisition in array signal processing (ASP) is costly because achieving high angular and range resolutions necessitates large antenna apertures and wide frequency bandwidths, respectively. The data requirements for ASP problems grow multiplicatively with the number of viewpoints and frequencies, significantly increasing the burden of data collection, even for simulation. Implicit Neural Representations (INRs) -- neural network-based models of 3D objects and scenes -- offer compact and continuous representations with minimal radar data. They can interpolate to unseen viewpoints and potentially address the sampling cost in ASP problems. In this work, we select Synthetic Aperture Radar (SAR) as a case from ASP and propose the Radon Implicit Field Transform (RIFT). RIFT consists of two components: a classical forward model for radar (the Generalized Radon Transform, GRT) and an INR-based scene representation learned from radar signals. This method can be extended to other ASP problems by replacing the GRT with algorithms appropriate to other data modalities. In our experiments, we first synthesize radar data using the GRT. We then train the INR model on this synthetic data by minimizing the reconstruction error of the radar signal. After training, we render the scene using the trained INR and evaluate our scene representation against the ground truth scene. Due to the lack of existing benchmarks, we introduce two new error metrics: phase Root Mean Square Error (p-RMSE) for radar signal interpolation, and magnitude Structural Similarity Index Measure (m-SSIM) for scene reconstruction. These metrics adapt traditional error measures to account for the complex-valued nature of radar signals. Compared to traditional scene models in radar signal processing, our RIFT model achieves up to a 188% improvement in scene reconstruction with only a 10% data footprint.
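The two metrics are only named in the abstract, so the snippet below is a minimal sketch of one plausible construction, assuming p-RMSE is an RMSE over the wrapped phase difference of two complex signals and m-SSIM is SSIM computed on magnitude images; the function names and the use of scikit-image are our own choices, not the authors'.

```python
# Sketch of the two metrics named in the abstract. Exact definitions are not
# given on this page: here p-RMSE is assumed to be an RMSE over wrapped phase
# differences, and m-SSIM an SSIM over magnitude images. Names are hypothetical.
import numpy as np
from skimage.metrics import structural_similarity


def p_rmse(pred: np.ndarray, target: np.ndarray) -> float:
    """RMSE over the wrapped phase difference of two complex radar signals."""
    # angle(pred * conj(target)) wraps the phase difference into (-pi, pi],
    # avoiding the 2*pi jumps a naive angle(pred) - angle(target) would hit.
    dphi = np.angle(pred * np.conj(target))
    return float(np.sqrt(np.mean(dphi ** 2)))


def m_ssim(pred: np.ndarray, target: np.ndarray) -> float:
    """SSIM between the magnitudes of two complex-valued scene images."""
    mag_p, mag_t = np.abs(pred), np.abs(target)
    data_range = float(mag_t.max() - mag_t.min())
    return float(structural_similarity(mag_p, mag_t, data_range=data_range))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
    noisy = truth + 0.1 * (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))
    print(f"p-RMSE: {p_rmse(noisy, truth):.4f}")
    print(f"m-SSIM: {m_ssim(noisy, truth):.4f}")
```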
Related papers
- Radar Signal Recognition through Self-Supervised Learning and Domain Adaptation [48.265859815346985]
We introduce a self-supervised learning (SSL) method to enhance radar signal recognition in environments with limited RF samples and labels.
Specifically, we investigate pre-training masked autoencoders (MAE) on baseband in-phase and quadrature (I/Q) signals from various RF domains.
Results show that our lightweight self-supervised ResNet model with domain adaptation achieves up to a 17.5% improvement in 1-shot classification accuracy.
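The summary does not show the preprocessing, so the sketch below only illustrates the common pattern it alludes to: stacking baseband I/Q samples into two real channels and randomly masking patches for an MAE-style objective. The function names, patch length, and mask ratio are illustrative, not taken from the paper.

```python
# Illustrative only: stacking complex I/Q samples into 2-channel patches and
# masking most of them, as a masked-autoencoder pre-training input would need.
import numpy as np


def iq_to_patches(iq: np.ndarray, patch_len: int = 64) -> np.ndarray:
    """Stack complex I/Q samples into (num_patches, 2, patch_len) real patches."""
    n = (len(iq) // patch_len) * patch_len
    chunks = iq[:n].reshape(-1, patch_len)
    return np.stack([chunks.real, chunks.imag], axis=1)


def random_mask(num_patches: int, mask_ratio: float = 0.75,
                rng: np.random.Generator | None = None) -> np.ndarray:
    """Boolean mask marking which patches the encoder never sees."""
    rng = rng or np.random.default_rng()
    mask = np.zeros(num_patches, dtype=bool)
    mask[rng.choice(num_patches, int(mask_ratio * num_patches), replace=False)] = True
    return mask


iq = np.exp(2j * np.pi * 0.05 * np.arange(4096))  # toy baseband tone
patches = iq_to_patches(iq)
mask = random_mask(len(patches))
visible = patches[~mask]  # only these would feed the MAE encoder
print(patches.shape, visible.shape)
```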
arXiv Detail & Related papers (2025-01-07T01:35:56Z)
- SparseRadNet: Sparse Perception Neural Network on Subsampled Radar Data [5.344444942640663]
Radar raw data often contains excessive noise, whereas radar point clouds retain only limited information.
We introduce an adaptive subsampling method together with a tailored network architecture that exploits the sparsity patterns.
Experiments on the RADIal dataset show that our SparseRadNet exceeds state-of-the-art (SOTA) performance in object detection and achieves close to SOTA accuracy in freespace segmentation.
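The subsampling scheme itself is not described in this summary; as a purely illustrative stand-in, the sketch below keeps only the highest-energy cells of a range-Doppler map, one simple way to exploit sparsity in raw radar data.

```python
# Hypothetical illustration of adaptive subsampling: keep the k largest-
# magnitude cells of a range-Doppler map. Not the paper's actual scheme.
import numpy as np


def topk_subsample(rd_map: np.ndarray, k: int) -> np.ndarray:
    """Return a boolean mask keeping the k largest-magnitude cells."""
    flat = np.abs(rd_map).ravel()
    idx = np.argpartition(flat, -k)[-k:]
    mask = np.zeros(flat.shape, dtype=bool)
    mask[idx] = True
    return mask.reshape(rd_map.shape)


rd = np.abs(np.random.default_rng(1).normal(size=(128, 64)))
mask = topk_subsample(rd, k=rd.size // 10)  # retain a 10% data footprint
print(mask.mean())  # ~0.1
```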
arXiv Detail & Related papers (2024-06-15T11:26:10Z)
- RASPNet: A Benchmark Dataset for Radar Adaptive Signal Processing Applications [20.589332431911842]
The RASPNet dataset exceeds 16 TB in size and comprises 100 realistic scenarios compiled over a variety of topographies and land types from across the contiguous United States.
RASPNet consists of 10,000 clutter realizations from an airborne radar setting, which can be used to benchmark radar and complex-valued learning algorithms.
We outline its construction, organization, and several applications, including a transfer learning example to demonstrate how RASPNet can be used for realistic adaptive radar processing scenarios.
arXiv Detail & Related papers (2024-06-14T00:07:52Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z)
- Generation of Realistic Synthetic Raw Radar Data for Automated Driving Applications using Generative Adversarial Networks [0.0]
This work proposes a faster method for FMCW radar simulation, capable of generating synthetic raw radar data using generative adversarial networks (GANs).
The code and pre-trained weights are open-source and available on GitHub.
Results show that the generated data is realistic in terms of coherent radar reflections of the motorcycle and background noise, based on comparisons of the chirps, the range-azimuth (RA) maps, and the object detection results.
arXiv Detail & Related papers (2023-08-04T17:44:27Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning-based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- Is Attention All NeRF Needs? [103.51023982774599]
Generalizable NeRF Transformer (GNT) is a pure, unified transformer-based architecture that efficiently reconstructs Neural Radiance Fields (NeRFs) on the fly from source views.
GNT achieves generalizable neural scene representation and rendering, by encapsulating two transformer-based stages.
arXiv Detail & Related papers (2022-07-27T05:09:54Z)
- Radar Image Reconstruction from Raw ADC Data using Parametric Variational Autoencoder with Domain Adaptation [0.0]
We propose a parametrically constrained variational autoencoder capable of generating clustered and localized target detections on the range-angle image.
To circumvent the problem of training the proposed neural network on all possible scenarios using real radar data, we propose domain adaptation strategies.
arXiv Detail & Related papers (2022-05-30T16:17:36Z)
- Toward Data-Driven STAP Radar [23.333816677794115]
We characterize our data-driven approach to space-time adaptive processing (STAP) radar.
We generate a rich example dataset of received radar signals by randomly placing targets of variable strengths in a predetermined region.
For each data sample within this region, we generate heatmap tensors in range, azimuth, and elevation of the output power of a beamformer.
In an airborne scenario, the moving radar creates a sequence of these time-indexed image stacks, resembling a video.
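The dataset's beamformer is not specified in this summary; the sketch below uses a conventional (Bartlett) beamformer on a simulated uniform linear array, just to show how such an output-power heatmap over angle can be formed. The array geometry, sizes, and target placement are assumptions.

```python
# Minimal conventional (Bartlett) beamformer sketch: one way to form the kind
# of output-power map the dataset describes. A half-wavelength-spaced uniform
# linear array is assumed; the actual STAP beamformer is not specified here.
import numpy as np

num_elements, num_snapshots = 16, 256
rng = np.random.default_rng(0)


def steering(theta_rad: float) -> np.ndarray:
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    n = np.arange(num_elements)
    return np.exp(1j * np.pi * n * np.sin(theta_rad))


# Simulate snapshots: one target at 20 degrees plus complex white noise.
target = np.outer(steering(np.deg2rad(20.0)),
                  rng.normal(size=num_snapshots) + 1j * rng.normal(size=num_snapshots))
noise = 0.5 * (rng.normal(size=(num_elements, num_snapshots))
               + 1j * rng.normal(size=(num_elements, num_snapshots)))
x = target + noise

# Sample covariance and Bartlett spectrum P(theta) = a(theta)^H R a(theta).
R = x @ x.conj().T / num_snapshots
angles = np.deg2rad(np.linspace(-90, 90, 181))
power = np.array([np.real(steering(t).conj() @ R @ steering(t)) for t in angles])
print(f"peak at {np.degrees(angles[power.argmax()]):.0f} deg")  # expect ~20
```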
arXiv Detail & Related papers (2022-01-26T02:28:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.