RadioGen3D: 3D Radio Map Generation via Adversarial Learning on Large-Scale Synthetic Data
- URL: http://arxiv.org/abs/2602.18744v1
- Date: Sat, 21 Feb 2026 07:50:05 GMT
- Title: RadioGen3D: 3D Radio Map Generation via Adversarial Learning on Large-Scale Synthetic Data
- Authors: Junshen Chen, Angzi Xu, Zezhong Zhang, Shiyao Zhang, Junting Chen, Shuguang Cui
- Abstract summary: Radio maps are essential for efficient radio resource management in future 6G and low-altitude networks. Deep learning (DL) techniques have emerged as an efficient alternative to conventional ray-tracing for radio map estimation. We present the RadioGen3D framework to capture essential 3D signal propagation characteristics and antenna polarization effects.
- Score: 62.63849426834315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radio maps are essential for efficient radio resource management in future 6G and low-altitude networks. While deep learning (DL) techniques have emerged as an efficient alternative to conventional ray-tracing for radio map estimation (RME), most existing DL approaches are confined to 2D near-ground scenarios. They often fail to capture essential 3D signal propagation characteristics and antenna polarization effects, primarily due to the scarcity of 3D data and training challenges. To address these limitations, we present the RadioGen3D framework. First, we propose an efficient data synthesis method to generate high-quality 3D radio map data. By establishing a parametric target model that captures 2D ray-tracing and 3D channel fading characteristics, we derive realistic coefficient combinations from minimal real measurements, enabling the construction of a large-scale synthetic dataset, Radio3DMix. Utilizing this dataset, we propose a 3D model training scheme based on a conditional generative adversarial network (cGAN), yielding a 3D U-Net capable of accurate RME under diverse input feature combinations. Experimental results demonstrate that RadioGen3D surpasses all baselines in both estimation accuracy and speed. Furthermore, fine-tuning experiments verify its strong generalization capability via successful knowledge transfer.
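The abstract's data synthesis step combines a 2D ray-tracing-style path loss with 3D channel fading coefficients fitted from sparse measurements. As a rough illustration only, a minimal sketch of that idea using a standard log-distance path-loss model plus an additive shadowing term (all coefficient values here are hypothetical placeholders, not the paper's fitted parameters):

```python
import math

def path_gain_db(d, pl0=30.0, n=3.0, d0=1.0):
    """Log-distance path-loss model (illustrative coefficients):
    received power drops by 10*n dB per decade of distance."""
    return -(pl0 + 10.0 * n * math.log10(max(d, d0) / d0))

def synthesize_voxel(tx, voxel, shadow_db=0.0):
    """Combine path loss with an additive fading/shadowing term.
    In the paper's pipeline the fading coefficients would be derived
    from minimal real measurements; here it is a fixed placeholder."""
    d = math.dist(tx, voxel)
    return path_gain_db(d) + shadow_db

# Populate a tiny 4x4x4 voxel "radio map" around one transmitter.
tx = (0.0, 0.0, 10.0)
radio_map = [[[synthesize_voxel(tx, (x * 5.0, y * 5.0, z * 5.0))
               for z in range(4)] for y in range(4)] for x in range(4)]
```

A dataset like Radio3DMix would repeat this over many scenes and coefficient combinations; the cGAN-trained 3D U-Net then learns to predict such maps directly from environment features.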
Related papers
- Bridging Visual and Wireless Sensing: A Unified Radiation Field for 3D Radio Map Construction [14.26926951448715]
Next-generation wireless networks require high-fidelity environmental intelligence. 3D radio maps have emerged as a critical tool for this purpose. We propose URF-GS, a unified radio-optical radiation field representation framework.
arXiv Detail & Related papers (2026-01-27T05:35:50Z) - TriCLIP-3D: A Unified Parameter-Efficient Framework for Tri-Modal 3D Visual Grounding based on CLIP [52.79100775328595]
3D visual grounding allows an embodied agent to understand visual information in real-world 3D environments based on human instructions. Existing 3D visual grounding methods rely on separate encoders for different modalities. We propose a unified 2D pre-trained multi-modal network to process all three modalities.
arXiv Detail & Related papers (2025-07-20T10:28:06Z) - RadioDiff-3D: A 3D$\times$3D Radio Map Dataset and Generative Diffusion Based Benchmark for 6G Environment-Aware Communication [76.6171399066216]
UrbanRadio3D is a large-scale, high-resolution 3D RM dataset constructed via ray tracing in realistic urban environments. RadioDiff-3D is a diffusion-model-based generative framework utilizing the 3D convolutional architecture. This work provides a foundational dataset and benchmark for future research in 3D environment-aware communication.
arXiv Detail & Related papers (2025-07-16T11:54:08Z) - Bridging Simulation and Reality: A 3D Clustering-Based Deep Learning Model for UAV-Based RF Source Localization [0.0]
Unmanned aerial vehicles (UAVs) offer significant advantages for RF source localization over terrestrial methods. Recent advancements in deep learning (DL) have further enhanced localization accuracy, particularly for outdoor scenarios. We propose the 3D Cluster-Based RealAdaptRNet, a DL-based method leveraging 3D clustering-based feature extraction for robust localization.
arXiv Detail & Related papers (2025-02-02T05:48:44Z) - A Lesson in Splats: Teacher-Guided Diffusion for 3D Gaussian Splats Generation with 2D Supervision [65.33043028101471]
We present a novel framework for training 3D image-conditioned diffusion models using only 2D supervision. Most existing 3D generative models rely on full 3D supervision, which is impractical due to the scarcity of large-scale 3D datasets.
arXiv Detail & Related papers (2024-12-01T00:29:57Z) - Deep Learning-based Cross-modal Reconstruction of Vehicle Target from Sparse 3D SAR Image [6.499547636078961]
We introduce cross-modal learning and propose a Cross-Modal 3D-SAR Reconstruction Network (CMAR-Net) for enhancing sparse 3D SAR images of vehicle targets by fusing optical information. CMAR-Net achieves efficient training and reconstructs sparse 3D SAR images, which are derived from highly sparse-aspect observations, into visually structured 3D vehicle images.
arXiv Detail & Related papers (2024-06-06T15:18:59Z) - 3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features [70.50665869806188]
3DiffTection is a state-of-the-art method for 3D object detection from single images.
We fine-tune a diffusion model to perform novel view synthesis conditioned on a single image.
We further train the model on target data with detection supervision.
arXiv Detail & Related papers (2023-11-07T23:46:41Z) - UniG3D: A Unified 3D Object Generation Dataset [75.49544172927749]
UniG3D is a unified 3D object generation dataset constructed by employing a universal data transformation pipeline on ShapeNet datasets.
This pipeline converts each raw 3D model into comprehensive multi-modal data representation.
The selection of data sources for our dataset is based on their scale and quality.
arXiv Detail & Related papers (2023-06-19T07:03:45Z) - NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z) - RADU: Ray-Aligned Depth Update Convolutions for ToF Data Denoising [8.142947808507369]
Time-of-Flight (ToF) cameras are subject to high levels of noise and distortions due to Multi-Path-Interference (MPI).
We propose an iterative denoising approach operating in 3D space, that is designed to learn on 2.5D data by enabling 3D point convolutions to correct the points' positions along the view direction.
We demonstrate that our method is able to outperform SOTA methods on several datasets, including two real world datasets and a new large-scale synthetic data set introduced in this paper.
arXiv Detail & Related papers (2021-11-30T15:53:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.