Radon Implicit Field Transform (RIFT): Learning Scenes from Radar Signals
- URL: http://arxiv.org/abs/2410.19801v2
- Date: Sun, 08 Dec 2024 20:46:17 GMT
- Title: Radon Implicit Field Transform (RIFT): Learning Scenes from Radar Signals
- Authors: Daqian Bao, Alex Saad-Falcon, Justin Romberg
- Abstract summary: Implicit Neural Representations (INRs) offer compact and continuous representations with minimal radar data. RIFT consists of two components: a classical forward model for radar and an INR-based scene representation. With only a 10% data footprint, our RIFT model achieves up to a 188% improvement in scene reconstruction.
- Score: 9.170594803531866
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Data acquisition in array signal processing (ASP) is costly because achieving high angular and range resolutions necessitates large antenna apertures and wide frequency bandwidths, respectively. The data requirements of ASP problems grow multiplicatively with the number of viewpoints and frequencies, significantly increasing the burden of data collection, even for simulation. Implicit Neural Representations (INRs) -- neural network-based models of 3D objects and scenes -- offer compact and continuous representations with minimal radar data. They can interpolate to unseen viewpoints and can potentially address the sampling cost in ASP problems. In this work, we select Synthetic Aperture Radar (SAR) as a case study from ASP and propose the Radon Implicit Field Transform (RIFT). RIFT consists of two components: a classical forward model for radar (the Generalized Radon Transform, GRT) and an INR-based scene representation learned from radar signals. The method can be extended to other ASP problems by replacing the GRT with an algorithm appropriate to the data modality. In our experiments, we first synthesize radar data using the GRT. We then train the INR model on this synthetic data by minimizing the reconstruction error of the radar signal. After training, we render the scene using the trained INR and evaluate the resulting scene representation against the ground-truth scene. Due to the lack of existing benchmarks, we introduce two new error metrics: phase-Root Mean Square Error (p-RMSE) for radar signal interpolation, and magnitude-Structural Similarity Index Measure (m-SSIM) for scene reconstruction. These metrics adapt traditional error measures to account for the complex-valued nature of radar signals. Compared to traditional scene models in radar signal processing, our RIFT model achieves up to a 188% improvement in scene reconstruction with only a 10% data footprint.
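The pipeline above lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch illustration, not the authors' released code: a coordinate MLP plays the role of the INR for complex scene reflectivity, a toy GRT-style forward model (a sum of point-scatterer phase kernels) maps it to radar signals, and the INR is fitted by minimizing signal reconstruction error. All names (`SceneINR`, `grt_forward`, `p_rmse`), the exact phase kernel, and the placeholder data are illustrative assumptions; an m-SSIM evaluation would additionally compare magnitude images rendered from the trained INR against the ground-truth scene.

```python
# Hypothetical sketch of the RIFT idea; names and the forward model are
# illustrative assumptions, not the paper's implementation.
import math
import torch
import torch.nn as nn

class SceneINR(nn.Module):
    """Coordinate MLP mapping a 2D scene point (x, y) to complex reflectivity."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # real and imaginary parts
        )

    def forward(self, xy):
        out = self.net(xy)
        return torch.complex(out[..., 0], out[..., 1])

def grt_forward(inr, viewpoints, freqs, xy_grid):
    """Toy GRT-style forward model: sum scene reflectivity against
    point-scatterer phase kernels exp(-j * 2*pi * f * 2R/c)."""
    c = 3e8
    refl = inr(xy_grid)                        # (P,) complex reflectivity
    ranges = torch.cdist(viewpoints, xy_grid)  # (V, P) sensor-to-point range
    delay = 2.0 * ranges / c                   # two-way propagation delay
    phase = -2j * math.pi * (freqs[:, None, None] * delay)  # (F, V, P)
    return (refl * torch.exp(phase)).sum(-1)   # (F, V) simulated signal

def p_rmse(pred, target):
    """Phase-aware RMSE on complex signals (an illustrative definition)."""
    return torch.sqrt(torch.mean(torch.abs(pred - target) ** 2))

# Fit the INR so the simulated signal matches the (placeholder) measurements.
inr = SceneINR()
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
viewpoints = torch.rand(8, 2) * 100.0                 # (V, 2) sensor positions
freqs = torch.linspace(9.0e9, 10.0e9, 16)             # (F,) sweep frequencies
xy_grid = torch.rand(1024, 2) * 10.0                  # (P, 2) scene sample points
measured = torch.randn(16, 8, dtype=torch.complex64)  # stand-in radar data
for step in range(500):
    opt.zero_grad()
    loss = p_rmse(grt_forward(inr, viewpoints, freqs, xy_grid), measured)
    loss.backward()
    opt.step()
```

Because the loss is taken on the complex signal itself, gradients flow through both the magnitude and phase of the simulated measurements, which is the property the paper's p-RMSE metric is designed to capture.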
Related papers
- DRO: Doppler-Aware Direct Radar Odometry [11.042292216861762]
A renaissance in radar-based sensing for mobile robotic applications is underway.
We propose a novel SE(2) odometry approach for spinning frequency-modulated continuous-wave radars.
Our method has been validated on over 250 km of on-road data sourced from public datasets.
arXiv Detail & Related papers (2025-04-29T01:20:30Z) - LaRI: Layered Ray Intersections for Single-view 3D Geometric Reasoning [75.9814389360821]
Layered Ray Intersections (LaRI) is a new method for reasoning about unseen geometry from a single image.
Benefiting from the compact and layered representation, LaRI enables complete, efficient, and view-aligned geometric reasoning.
We build a complete training data generation pipeline for synthetic and real-world data, including 3D objects and scenes.
arXiv Detail & Related papers (2025-04-25T15:31:29Z) - Radar Signal Recognition through Self-Supervised Learning and Domain Adaptation [48.265859815346985]
We introduce a self-supervised learning (SSL) method to enhance radar signal recognition in environments with limited RF samples and labels.
Specifically, we investigate pre-training masked autoencoders (MAE) on baseband in-phase and quadrature (I/Q) signals from various RF domains.
Results show that our lightweight self-supervised ResNet model with domain adaptation achieves up to a 17.5% improvement in 1-shot classification accuracy.
arXiv Detail & Related papers (2025-01-07T01:35:56Z) - SparseRadNet: Sparse Perception Neural Network on Subsampled Radar Data [5.344444942640663]
Radar raw data often contains excessive noise, whereas radar point clouds retain only limited information.
We introduce an adaptive subsampling method together with a tailored network architecture that exploits the sparsity patterns.
Experiments on the RADIal dataset show that our SparseRadNet exceeds state-of-the-art (SOTA) performance in object detection and achieves close to SOTA accuracy in freespace segmentation.
arXiv Detail & Related papers (2024-06-15T11:26:10Z) - Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - Thermal-Infrared Remote Target Detection System for Maritime Rescue based on Data Augmentation with 3D Synthetic Data [4.66313002591741]
This paper proposes a thermal-infrared (TIR) remote target detection system for maritime rescue using deep learning and data augmentation.
To address dataset scarcity and improve model robustness, a synthetic dataset is collected from the 3D game ARMA3 to augment the training data.
The proposed segmentation model surpasses the performance of state-of-the-art segmentation methods.
arXiv Detail & Related papers (2023-10-31T12:37:49Z) - Leveraging Neural Radiance Fields for Uncertainty-Aware Visual Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's rendering efficiency, much of the rendered data is polluted by artifacts or contains only minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z) - Generation of Realistic Synthetic Raw Radar Data for Automated Driving Applications using Generative Adversarial Networks [0.0]
This work proposes a faster method for FMCW radar simulation capable of generating synthetic raw radar data using generative adversarial networks (GANs).
The code and pre-trained weights are open-source and available on GitHub.
Comparisons of chirps, range-azimuth (RA) maps, and object detection results show that the generated data is realistic in terms of coherent radar reflections of the motorcycle and background noise.
arXiv Detail & Related papers (2023-08-04T17:44:27Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning-based method that applies convolutions to point clouds of radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - A Deep Learning Approach for SAR Tomographic Imaging of Forested Areas [10.477070348391079]
We show that light-weight neural networks can be trained to perform the tomographic inversion with a single feed-forward pass.
We train our encoder-decoder network using simulated data and validate our technique on real L-band and P-band data.
arXiv Detail & Related papers (2023-01-20T14:34:03Z) - VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction [64.09702079593372]
VolRecon is a novel generalizable implicit reconstruction method based on the Signed Ray Distance Function (SRDF).
On the DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse-view reconstruction and achieves accuracy comparable to MVSNet in full-view reconstruction.
arXiv Detail & Related papers (2022-12-15T18:59:54Z) - AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly free, introducing no obvious training or testing costs.
arXiv Detail & Related papers (2022-11-17T17:22:28Z) - Is Attention All NeRF Needs? [103.51023982774599]
Generalizable NeRF Transformer (GNT) is a pure, unified transformer-based architecture that efficiently reconstructs Neural Radiance Fields (NeRFs) on the fly from source views.
GNT achieves generalizable neural scene representation and rendering, by encapsulating two transformer-based stages.
arXiv Detail & Related papers (2022-07-27T05:09:54Z) - Radar Image Reconstruction from Raw ADC Data using Parametric Variational Autoencoder with Domain Adaptation [0.0]
We propose a parametrically constrained variational autoencoder capable of generating clustered and localized target detections on the range-angle image.
To circumvent the problem of training the proposed neural network on all possible scenarios using real radar data, we propose domain adaptation strategies.
arXiv Detail & Related papers (2022-05-30T16:17:36Z) - Toward Data-Driven STAP Radar [23.333816677794115]
We characterize our data-driven approach to space-time adaptive processing (STAP) radar.
We generate a rich example dataset of received radar signals by randomly placing targets of variable strengths in a predetermined region.
For each data sample within this region, we generate heatmap tensors of beamformer output power over range, azimuth, and elevation.
In an airborne scenario, the moving radar creates a sequence of these time-indexed image stacks, resembling a video.
arXiv Detail & Related papers (2022-01-26T02:28:13Z) - MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.