RASPNet: A Benchmark Dataset for Radar Adaptive Signal Processing Applications
- URL: http://arxiv.org/abs/2406.09638v2
- Date: Fri, 14 Feb 2025 17:49:54 GMT
- Authors: Shyam Venkatasubramanian, Bosung Kang, Ali Pezeshki, Muralidhar Rangaswamy, Vahid Tarokh
- Abstract: We present a large-scale dataset for radar adaptive signal processing (RASP) applications to support the development of data-driven models within the adaptive radar community. The dataset, RASPNet, exceeds 16 TB in size and comprises 100 realistic scenarios compiled over a variety of topographies and land types from across the contiguous United States. For each scenario, RASPNet consists of 10,000 clutter realizations from an airborne radar setting, which can be used to benchmark radar and complex-valued learning algorithms. RASPNet intends to fill a prominent gap in the availability of a large-scale, realistic dataset that standardizes the evaluation of adaptive radar processing techniques and complex-valued neural networks. We outline its construction, organization, and several applications, including a transfer learning example to demonstrate how RASPNet can be used for realistic adaptive radar processing scenarios.
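The clutter realizations described above are the kind of training data used by classical adaptive radar filters. As a minimal, self-contained illustration (using synthetic snapshots, not RASPNet's actual file format or dimensions, which are assumptions here), the following sketch estimates a clutter covariance from a batch of realizations and forms sample-matrix-inversion (SMI) adaptive weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a batch of clutter realizations: K complex
# snapshots from an N-channel array. RASPNet's real format may differ.
N = 16    # number of spatial channels (assumed)
K = 100   # number of training snapshots (assumed)
clutter = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

# Sample covariance matrix with light diagonal loading for stability.
R = clutter.conj().T @ clutter / K + 1e-3 * np.eye(N)

# Steering vector toward a hypothetical look angle theta (half-wavelength ULA).
theta = np.deg2rad(10.0)
s = np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

# SMI adaptive weights: w = R^{-1} s / (s^H R^{-1} s), which minimizes
# clutter output power subject to a distortionless response toward s.
Rinv_s = np.linalg.solve(R, s)
w = Rinv_s / (s.conj() @ Rinv_s)

# Adaptive filter output for a new snapshot x is w^H x.
y = np.vdot(w, clutter[0])
```

Benchmarking a learned (e.g. complex-valued neural network) filter against this SMI baseline on held-out realizations is one natural use of such a dataset.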
Related papers
- Radon Implicit Field Transform (RIFT): Learning Scenes from Radar Signals [9.170594803531866]
Implicit Neural Representations (INRs) offer compact and continuous representations with minimal radar data.
RIFT consists of two components: a classical forward model for radar and an INR based scene representation.
With only 10% of the data footprint, our RIFT model achieves up to 188% improvement in scene reconstruction.
arXiv Detail & Related papers (2024-10-16T16:59:37Z)
- Radio Map Estimation -- An Open Dataset with Directive Transmitter Antennas and Initial Experiments [49.61405888107356]
We release a dataset of simulated path loss radio maps together with realistic city maps from real-world locations and aerial images from open data sources.
Initial experiments regarding model architectures, input feature design and estimation of radio maps from aerial images are presented.
arXiv Detail & Related papers (2024-01-12T14:56:45Z)
- Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z)
- Scaling Data Generation in Vision-and-Language Navigation [116.95534559103788]
We propose an effective paradigm for generating large-scale data for learning.
We use 1200+ photo-realistic environments from the HM3D and Gibson datasets and synthesize 4.9 million instruction-trajectory pairs.
Thanks to our large-scale dataset, the performance of an existing agent can be pushed up (+11% absolute over the previous SoTA) to a new best of 80% single-run success rate on the R2R test split by simple imitation learning.
arXiv Detail & Related papers (2023-07-28T16:03:28Z)
- Super-Resolution Radar Imaging with Sparse Arrays Using a Deep Neural Network Trained with Enhanced Virtual Data [0.4640835690336652]
This paper introduces a method based on a deep neural network (DNN) that is capable of processing radar data from extremely thinned radar apertures.
The proposed DNN processing can provide both aliasing-free radar imaging and super-resolution.
It simultaneously delivers nearly the same resolution and image quality as would be achieved with a fully occupied array.
arXiv Detail & Related papers (2023-06-16T13:37:47Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method that processes radar detections as point clouds using convolutions.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- Toward Data-Driven STAP Radar [23.333816677794115]
We characterize our data-driven approach to space-time adaptive processing (STAP) radar.
We generate a rich example dataset of received radar signals by randomly placing targets of variable strengths in a predetermined region.
For each data sample within this region, we generate heatmap tensors in range, azimuth, and elevation of the output power of a beamformer.
In an airborne scenario, the moving radar creates a sequence of these time-indexed image stacks, resembling a video.
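The heatmap generation described in this summary can be pictured with a small synthetic sketch: place a few targets of random strength, form one array snapshot, and scan a conventional beamformer over azimuth, recording output power at each angle. All names and dimensions below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 8                                             # array elements (assumed)
angles = np.deg2rad(np.linspace(-60, 60, 121))    # azimuth scan grid

def steering(theta):
    # Half-wavelength ULA steering vector toward angle theta.
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

# Randomly placed targets of variable strength, plus a little noise.
target_angles = rng.uniform(-np.pi / 3, np.pi / 3, size=3)
amplitudes = rng.uniform(0.5, 2.0, size=3)
snapshot = sum(a * steering(t) for a, t in zip(amplitudes, target_angles))
snapshot = snapshot + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Beamformer output power at each scan angle: P(theta) = |s(theta)^H x|^2.
# Stacking such maps over range and elevation (and over time, for a moving
# platform) yields the video-like image tensors the summary describes.
heatmap = np.array([np.abs(np.vdot(steering(th), snapshot)) ** 2 for th in angles])
```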
arXiv Detail & Related papers (2022-01-26T02:28:13Z)
- Large-Scale Topological Radar Localization Using Learned Descriptors [15.662820454886202]
We present a simple yet efficient deep network architecture to compute a rotationally invariant discriminative global descriptor from a radar scan image.
The performance and generalization ability of the proposed method are experimentally evaluated on two large-scale driving datasets.
arXiv Detail & Related papers (2021-10-06T21:57:23Z)
- Real-time Outdoor Localization Using Radio Maps: A Deep Learning Approach [59.17191114000146]
We present LocUNet, a convolutional, end-to-end trained neural network (NN) for the localization task.
We show that LocUNet can localize users with state-of-the-art accuracy and enjoys high robustness to inaccuracies in the estimations of radio maps.
arXiv Detail & Related papers (2021-06-23T17:27:04Z)
- There and Back Again: Learning to Simulate Radar Data for Real-World Applications [21.995474023869388]
We learn a radar sensor model capable of synthesising faithful radar observations based on simulated elevation maps.
We adopt an adversarial approach to learning a forward sensor model from unaligned radar examples.
We demonstrate the efficacy of our approach by evaluating a downstream segmentation model trained purely on simulated data in a real-world deployment.
arXiv Detail & Related papers (2020-11-29T15:49:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.