Data-driven RF Tomography via Cross-modal Sensing and Continual Learning
- URL: http://arxiv.org/abs/2508.11654v1
- Date: Mon, 04 Aug 2025 09:48:50 GMT
- Title: Data-driven RF Tomography via Cross-modal Sensing and Continual Learning
- Authors: Yang Zhao, Tao Wang, Said Elhadi
- Abstract summary: We propose a data-driven radio frequency tomography (DRIFT) framework to reconstruct cross-section images of underground root tubers. First, we design a cross-modal sensing system with RF and visual sensors, and propose to train an RF tomography deep neural network (DNN) model. Experimental results show that our approach achieves an average equivalent diameter error of 2.29 cm, a 23.2% improvement over the state-of-the-art approach.
- Score: 11.021948836550829
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven radio frequency (RF) tomography has demonstrated significant potential for underground target detection, due to the penetrative nature of RF signals through soil. However, it is still challenging to achieve accurate and robust performance in dynamic environments. In this work, we propose a data-driven radio frequency tomography (DRIFT) framework with the following key components to reconstruct cross-section images of underground root tubers, even under significant changes in RF signals. First, we design a cross-modal sensing system with RF and visual sensors, and propose to train an RF tomography deep neural network (DNN) model following the cross-modal learning approach. Then we propose to apply continual learning to automatically update the DNN model once environment changes are detected in a dynamic environment. Experimental results show that our approach achieves an average equivalent diameter error of 2.29 cm, a 23.2% improvement over the state-of-the-art approach. Our DRIFT code and dataset are publicly available at https://github.com/Data-driven-RTI/DRIFT.
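The abstract's update mechanism (detect an environment change, then update the DNN) can be sketched as a minimal loop. This is an illustrative assumption, not the paper's implementation: the z-score drift test and the `detect_drift` / `continual_update` names are hypothetical, and re-anchoring the reference window stands in for an actual fine-tuning pass.

```python
import numpy as np

def detect_drift(reference, window, z_thresh=3.0):
    """Flag an environment change when the windowed mean of the RF
    measurements deviates from the reference by more than z_thresh
    standard errors (a simple stand-in for the paper's detector)."""
    mu, sigma = reference.mean(), reference.std(ddof=1)
    se = sigma / np.sqrt(len(window))
    return abs(window.mean() - mu) / se > z_thresh

def continual_update(rf_stream, reference, window_size=64):
    """Walk the RF measurement stream; on detected drift, re-anchor the
    reference statistics and count one model update (the counter stands
    in for a fine-tuning pass on the DNN)."""
    updates = 0
    for start in range(0, len(rf_stream) - window_size + 1, window_size):
        window = rf_stream[start:start + window_size]
        if detect_drift(reference, window):
            reference = window   # re-anchor to the new environment
            updates += 1         # stand-in for retraining the model
    return updates

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 256)                 # RSSI-like baseline
stream = np.concatenate([rng.normal(0.0, 1.0, 128),   # unchanged environment
                         rng.normal(2.0, 1.0, 128)])  # environment shift
print(continual_update(stream, reference))
```

The mean-shift test is only a placeholder; any change detector that compares incoming RF statistics to the training distribution could trigger the update step the same way.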
Related papers
- RadioGen3D: 3D Radio Map Generation via Adversarial Learning on Large-Scale Synthetic Data [62.63849426834315]
Radio maps are essential for efficient radio resource management in future 6G and low-altitude networks.
Deep learning (DL) techniques have emerged as an efficient alternative to conventional ray-tracing for radio map estimation.
We present the RadioGen3D framework to capture essential 3D signal propagation characteristics and antenna polarization effects.
arXiv Detail & Related papers (2026-02-21T07:50:05Z)
- IrisNet: Infrared Image Status Awareness Meta Decoder for Infrared Small Targets Detection [92.56025546608699]
IrisNet is a novel meta-learned framework that adapts detection strategies to the input infrared image status.
Our approach establishes a dynamic mapping between infrared image features and entire decoder parameters.
Experiments on NUDT-SIRST, NUAA-SIRST, and IRSTD-1K datasets demonstrate the superiority of our IrisNet.
arXiv Detail & Related papers (2025-11-25T13:53:54Z)
- Ivan-ISTD: Rethinking Cross-domain Heteroscedastic Noise Perturbations in Infrared Small Target Detection [53.689841037081834]
Ivan-ISTD is designed to address the dual challenges of cross-domain shift and heteroscedastic noise perturbations in ISTD.
Ivan-ISTD demonstrates excellent robustness in cross-domain scenarios.
arXiv Detail & Related papers (2025-10-14T07:48:31Z)
- RINN: One Sample Radio Frequency Imaging based on Physics Informed Neural Network [9.812746486699323]
Radio frequency (RF) imaging technology is expected to bring new possibilities for embodied intelligence and multimodal sensing.
In this paper, we combine the ideas of physics-informed neural networks (PINNs) to design the RINN network, using physical constraints instead of ground-truth comparison constraints.
Our numerical evaluation shows that RINN produces good imaging results from phaseless data, with metrics such as RRMSE (0.11) performing comparably well.
arXiv Detail & Related papers (2025-04-19T15:19:12Z)
- Graph-CNNs for RF Imaging: Learning the Electric Field Integral Equations [20.07924835384647]
We propose a Deep Neural Network (DNN) architecture to learn the corresponding inverse model.
A graph-attention backbone allows for the system geometry to be passed to the DNN, where residual convolutional layers extract features about the objects.
Our evaluations on two synthetic data sets of different characteristics showcase the performance gains of the proposed advanced architecture.
arXiv Detail & Related papers (2025-03-18T17:16:40Z)
- Few-Shot Radar Signal Recognition through Self-Supervised Learning and Radio Frequency Domain Adaptation [48.265859815346985]
Radar signal recognition plays a pivotal role in electronic warfare (EW).
Recent advances in deep learning have shown significant potential in improving radar signal recognition.
These methods fall short in EW scenarios where annotated radio frequency (RF) data are scarce or impractical to obtain.
arXiv Detail & Related papers (2025-01-07T01:35:56Z)
- RF Challenge: The Data-Driven Radio Frequency Signal Separation Challenge [66.33067693672696]
We address the critical problem of interference rejection in radio-frequency (RF) signals using a data-driven approach that leverages deep-learning methods.
A primary contribution of this paper is the introduction of the RF Challenge, which is a publicly available, diverse RF signal dataset.
arXiv Detail & Related papers (2024-09-13T13:53:41Z)
- RF-ULM: Ultrasound Localization Microscopy Learned from Radio-Frequency Wavefronts [7.652037892439504]
Delay-and-sum beamforming leads to irreversible reduction of Radio-Frequency (RF) channel data.
The rich contextual information embedded within RF wavefronts offers great promise for guiding Deep Neural Networks (DNNs) in challenging localization scenarios.
We propose to directly localize scatterers in RF channel data using learned feature channel shuffling, non-maximum suppression, and a semi-global convolutional block.
arXiv Detail & Related papers (2023-10-02T18:41:23Z)
- Radar Image Reconstruction from Raw ADC Data using Parametric Variational Autoencoder with Domain Adaptation [0.0]
We propose a parametrically constrained variational autoencoder, capable of generating the clustered and localized target detections on the range-angle image.
To circumvent the problem of training the proposed neural network on all possible scenarios using real radar data, we propose domain adaptation strategies.
arXiv Detail & Related papers (2022-05-30T16:17:36Z)
- Toward Data-Driven STAP Radar [23.333816677794115]
We characterize our data-driven approach to space-time adaptive processing (STAP) radar.
We generate a rich example dataset of received radar signals by randomly placing targets of variable strengths in a predetermined region.
For each data sample within this region, we generate heatmap tensors in range, azimuth, and elevation of the output power of a beamformer.
In an airborne scenario, the moving radar creates a sequence of these time-indexed image stacks, resembling a video.
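The heatmap-tensor generation described above can be sketched under assumed details: the grid sizes, the Gaussian point-spread stand-in for beamformer output power, and the `heatmap_tensor` name are all illustrative, not taken from the paper.

```python
import numpy as np

# Build a range-azimuth-elevation "heatmap tensor" of beamformer output
# power for randomly placed point targets of variable strength.
rng = np.random.default_rng(42)
n_range, n_az, n_el = 32, 16, 8
grid = np.stack(np.meshgrid(np.linspace(0, 1, n_range),    # normalized range
                            np.linspace(-1, 1, n_az),      # azimuth
                            np.linspace(-1, 1, n_el),      # elevation
                            indexing="ij"), axis=-1)

def heatmap_tensor(n_targets=3, spread=0.15):
    """Sum Gaussian blobs around each target's (range, azimuth,
    elevation) location as a stand-in for beamformer output power."""
    power = np.zeros((n_range, n_az, n_el))
    for _ in range(n_targets):
        loc = rng.uniform([0, -1, -1], [1, 1, 1])   # random target position
        strength = rng.uniform(0.5, 1.0)            # variable target strength
        d2 = ((grid - loc) ** 2).sum(axis=-1)       # squared distance to target
        power += strength * np.exp(-d2 / (2 * spread ** 2))
    return power

h = heatmap_tensor()
print(h.shape)  # prints (32, 16, 8)
```

Stacking such tensors over time would give the video-like sequence of image stacks the airborne scenario describes.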
arXiv Detail & Related papers (2022-01-26T02:28:13Z)
- Three-Way Deep Neural Network for Radio Frequency Map Generation and Source Localization [67.93423427193055]
Monitoring wireless spectrum over spatial, temporal, and frequency domains will become a critical feature in beyond-5G and 6G communication technologies.
In this paper, we present a Generative Adversarial Network (GAN) machine learning model to interpolate irregularly distributed measurements across the spatial domain.
arXiv Detail & Related papers (2021-11-23T22:25:10Z)
- RF-Net: a Unified Meta-learning Framework for RF-enabled One-shot Human Activity Recognition [9.135311655929366]
Device-free (or contactless) sensing is more sensitive to environment changes than device-based (or wearable) sensing.
Existing solutions to RF-HAR entail a laborious data collection process for adapting to new environments.
We propose RF-Net as a meta-learning based approach to one-shot RF-HAR; it reduces the labeling effort for environment adaptation to a minimum.
arXiv Detail & Related papers (2021-10-29T01:58:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.