Rasterizing Wireless Radiance Field via Deformable 2D Gaussian Splatting
- URL: http://arxiv.org/abs/2506.12787v2
- Date: Wed, 18 Jun 2025 12:41:12 GMT
- Title: Rasterizing Wireless Radiance Field via Deformable 2D Gaussian Splatting
- Authors: Mufan Liu, Cixiao Zhang, Qi Yang, Yujie Cao, Yiling Xu, Yin Xu, Shu Sun, Mingzeng Dai, Yunfeng Guan,
- Abstract summary: Modeling the wireless radiance field (WRF) is fundamental to modern communication systems. We propose SwiftWRF, a deformable 2D Gaussian splatting framework that synthesizes WRF spectra at arbitrary positions. Experiments on both real-world and synthetic indoor scenes demonstrate that SwiftWRF can reconstruct WRF spectra up to 500x faster than existing state-of-the-art methods.
- Score: 10.200300617390013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling the wireless radiance field (WRF) is fundamental to modern communication systems, enabling key tasks such as localization, sensing, and channel estimation. Traditional approaches, which rely on empirical formulas or physical simulations, often suffer from limited accuracy or require strong scene priors. Recent neural radiance field (NeRF)-based methods improve reconstruction fidelity through differentiable volumetric rendering, but their reliance on computationally expensive multilayer perceptron (MLP) queries hinders real-time deployment. To overcome these challenges, we introduce Gaussian splatting (GS) to the wireless domain, leveraging its efficiency in modeling optical radiance fields to enable compact and accurate WRF reconstruction. Specifically, we propose SwiftWRF, a deformable 2D Gaussian splatting framework that synthesizes WRF spectra at arbitrary positions under single-sided transceiver mobility. SwiftWRF employs CUDA-accelerated rasterization to render spectra at over 100000 fps and uses a lightweight MLP to model the deformation of 2D Gaussians, effectively capturing mobility-induced WRF variations. Beyond novel spectrum synthesis, the efficacy of SwiftWRF is further underscored by its applications in angle-of-arrival (AoA) and received signal strength indicator (RSSI) prediction. Experiments conducted on both real-world and synthetic indoor scenes demonstrate that SwiftWRF can reconstruct WRF spectra up to 500x faster than existing state-of-the-art methods, while significantly enhancing signal quality. The project page is https://evan-sudo.github.io/swiftwrf/.
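The core idea in the abstract (canonical 2D Gaussians whose parameters are deformed by a lightweight MLP conditioned on transceiver position, then splatted onto a spectrum grid) can be illustrated with a minimal NumPy sketch. This is not the paper's CUDA implementation: the network sizes, the additive isotropic splatting, and the position-to-offset conditioning are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16          # number of 2D Gaussians (toy scale; real models use far more)
H, W = 32, 32   # spectrum resolution

# Canonical (static) Gaussian parameters: 2D means, isotropic scales, amplitudes.
means = rng.uniform(0, [W, H], size=(N, 2))
scales = rng.uniform(1.0, 3.0, size=N)
amps = rng.uniform(0.1, 1.0, size=N)

# Tiny deformation MLP: maps a TX position (x, y) to per-Gaussian mean offsets.
# Layer sizes and random weights are placeholders for a trained network.
w1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
w2 = rng.normal(0, 0.5, (16, N * 2)); b2 = np.zeros(N * 2)

def deform(tx_pos):
    """Predict per-Gaussian 2D mean offsets from the transmitter position."""
    h = np.tanh(np.asarray(tx_pos, dtype=float) @ w1 + b1)
    return (h @ w2 + b2).reshape(N, 2)

def render_spectrum(tx_pos):
    """Deform the canonical means, then additively splat isotropic
    2D Gaussians onto the H x W spectrum grid."""
    m = means + deform(tx_pos)
    ys, xs = np.mgrid[0:H, 0:W]
    img = np.zeros((H, W))
    for (mx, my), s, a in zip(m, scales, amps):
        d2 = (xs - mx) ** 2 + (ys - my) ** 2
        img += a * np.exp(-0.5 * d2 / s ** 2)
    return img

spec = render_spectrum([3.0, 4.0])
print(spec.shape)  # (32, 32)
```

Because the splatting loop is a closed-form evaluation per Gaussian (no ray marching or per-pixel MLP queries), rendering cost scales with the Gaussian count rather than network depth, which is the efficiency argument behind replacing NeRF-style volumetric rendering with rasterized splatting.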
Related papers
- FADPNet: Frequency-Aware Dual-Path Network for Face Super-Resolution [70.61549422952193]
Face super-resolution (FSR) under limited computational costs remains an open problem. Existing approaches typically treat all facial pixels equally, resulting in suboptimal allocation of computational resources. We propose FADPNet, a Frequency-Aware Dual-Path Network that decomposes facial features into low- and high-frequency components.
arXiv Detail & Related papers (2025-06-17T02:33:42Z) - SpectrumFM: A Foundation Model for Intelligent Spectrum Management [99.08036558911242]
Existing intelligent spectrum management methods, typically based on small-scale models, suffer from notable limitations in recognition accuracy, convergence speed, and generalization. This paper proposes a novel spectrum foundation model, termed SpectrumFM, establishing a new paradigm for spectrum management. Experiments demonstrate that SpectrumFM achieves superior performance in terms of accuracy, robustness, adaptability, few-shot learning efficiency, and convergence speed.
arXiv Detail & Related papers (2025-05-02T04:06:39Z) - SpINR: Neural Volumetric Reconstruction for FMCW Radars [0.15193212081459279]
We introduce SpINR, a novel framework for volumetric reconstruction using Frequency-Modulated Continuous-Wave (FMCW) radar data. We demonstrate that SpINR significantly outperforms classical backprojection methods and existing learning-based approaches.
arXiv Detail & Related papers (2025-03-30T04:44:57Z) - STAF: Sinusoidal Trainable Activation Functions for Implicit Neural Representation [7.2888019138115245]
Implicit Neural Representations (INRs) have emerged as a powerful framework for modeling continuous signals. The spectral bias of ReLU-based networks is a well-established limitation, restricting their ability to capture fine-grained details in target signals. We introduce Sinusoidal Trainable Activation Functions (STAF). STAF inherently modulates its frequency components, allowing for self-adaptive spectral learning.
arXiv Detail & Related papers (2025-02-02T18:29:33Z) - Neural Representation for Wireless Radiation Field Reconstruction: A 3D Gaussian Splatting Approach [8.644949917126755]
We present WRF-GS, a novel framework for channel modeling based on wireless radiation field (WRF) reconstruction. We propose WRF-GS+, an enhanced framework that integrates electromagnetic wave physics into the neural network design.
arXiv Detail & Related papers (2024-12-06T07:56:14Z) - Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Fields (NeRF). Recent works demonstrate that frequency regularization of positional encoding can achieve promising results for few-shot NeRF. We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
arXiv Detail & Related papers (2024-10-23T13:05:26Z) - LiDAR-GS:Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
We present LiDAR-GS, a real-time, high-fidelity re-simulation of LiDAR scans in public urban road scenes. The method achieves state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z) - Spatial Annealing for Efficient Few-shot Neural Rendering [73.49548565633123]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing regularized NeRF (SANeRF). By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot neural rendering methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z) - Re-ReND: Real-time Rendering of NeRFs across Devices [56.081995086924216]
Re-ReND is designed to achieve real-time performance by converting the NeRF into a representation that can be efficiently processed by standard graphics pipelines.
We find that Re-ReND can achieve over a 2.6-fold increase in rendering speed versus the state-of-the-art without perceptible losses in quality.
arXiv Detail & Related papers (2023-03-15T15:59:41Z) - Faster Region-Based CNN Spectrum Sensing and Signal Identification in Cluttered RF Environments [0.7734726150561088]
We optimize a faster region-based convolutional neural network (FRCNN) for 1-dimensional (1D) signal processing and electromagnetic spectrum sensing.
Results show that our method has better localization performance, and is faster than the 2D equivalent.
arXiv Detail & Related papers (2023-02-20T09:35:13Z) - Fourier Space Losses for Efficient Perceptual Image Super-Resolution [131.50099891772598]
We show that it is possible to improve the performance of a recently introduced efficient generator architecture solely with the application of our proposed loss functions.
We show that our losses' direct emphasis on the frequencies in Fourier-space significantly boosts the perceptual image quality.
The trained generator achieves results comparable to the state-of-the-art perceptual SR methods RankSRGAN and SRFlow, while being 2.4x and 48x faster, respectively.
arXiv Detail & Related papers (2021-06-01T20:34:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.