Transform-Invariant Generative Ray Path Sampling for Efficient Radio Propagation Modeling
- URL: http://arxiv.org/abs/2603.01655v1
- Date: Mon, 02 Mar 2026 09:37:34 GMT
- Title: Transform-Invariant Generative Ray Path Sampling for Efficient Radio Propagation Modeling
- Authors: Jérome Eertmans, Enrico M. Vitucci, Vittorio Degli-Esposti, Nicola Di Cicco, Laurent Jacques, Claude Oestges
- Abstract summary: Ray tracing has become a standard for accurate radio propagation modeling, but suffers from computational complexity. We propose a comprehensive machine-learning-assisted framework that replaces exhaustive path searching with intelligent sampling via Generative Flow Networks.
- Score: 5.567300300236187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ray tracing has become a standard for accurate radio propagation modeling, but suffers from exponential computational complexity, as the number of candidate paths scales with the number of objects raised to the power of the interaction order. This bottleneck limits its use in large-scale or real-time applications, forcing traditional tools to rely on heuristics to reduce the number of path candidates at the cost of potentially reduced accuracy. To overcome this limitation, we propose a comprehensive machine-learning-assisted framework that replaces exhaustive path searching with intelligent sampling via Generative Flow Networks. Applying such generative models to this domain presents significant challenges, particularly sparse rewards due to the rarity of valid paths, which can lead to convergence failures and trivial solutions when evaluating high-order interactions in complex environments. To ensure robust learning and efficient exploration, our framework incorporates three key architectural components. First, we implement an \emph{experience replay buffer} to capture and retain rare valid paths. Second, we adopt a uniform exploratory policy to improve generalization and prevent the model from overfitting to simple geometries. Third, we apply a physics-based action masking strategy that filters out physically impossible paths before the model even considers them. As demonstrated in our experimental validation, the proposed model achieves substantial speedups over exhaustive search -- up to $10\times$ faster on GPU and $1000\times$ faster on CPU -- while maintaining high coverage accuracy and successfully uncovering complex propagation paths. The complete source code, tests, and tutorial are available at https://github.com/jeertmans/sampling-paths.
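The abstract's three architectural components (experience replay for rare valid paths, a uniform exploratory policy, and physics-based action masking) can be illustrated with a minimal sketch. This is not the authors' implementation (which uses Generative Flow Networks; see the linked repository): all names here are hypothetical, the environment is a toy one where a path is a sequence of surface indices, and the only physical constraint modeled is that a ray cannot interact with the same surface twice in a row, which also yields the stated exponential candidate count.

```python
import random
from collections import deque

def candidate_path_count(n_objects: int, order: int) -> int:
    # Exhaustive search enumerates every ordered sequence of surfaces,
    # excluding immediate repeats (a ray cannot bounce off the surface
    # it just left): N * (N - 1)^(k - 1) candidates for interaction
    # order k, i.e. exponential in the interaction order.
    if order == 0:
        return 1
    return n_objects * (n_objects - 1) ** (order - 1)

def physics_mask(path: list, n_objects: int) -> list:
    # Physics-based action masking: filter out physically impossible
    # actions (here, only the immediate-repeat rule) before the
    # sampler ever considers them.
    return [a for a in range(n_objects) if not path or a != path[-1]]

class ReplayBuffer:
    # Experience replay: with sparse rewards, valid paths are rare,
    # so they are retained and replayed to stabilize learning.
    def __init__(self, capacity: int = 100):
        self.buffer = deque(maxlen=capacity)

    def add(self, path, reward):
        if reward > 0:  # keep only the rare valid (rewarded) paths
            self.buffer.append((tuple(path), reward))

    def sample(self, k: int):
        k = min(k, len(self.buffer))
        return random.sample(list(self.buffer), k)

def sample_path(n_objects: int, order: int) -> list:
    # Uniform exploratory policy over the masked action set; in the
    # actual framework, a learned GFlowNet policy would bias this
    # choice toward potentially valid paths.
    path = []
    for _ in range(order):
        path.append(random.choice(physics_mask(path, n_objects)))
    return path
```

For example, 10 objects at interaction order 3 already yield 10 * 9^2 = 810 candidate sequences under the repeat-exclusion rule alone, which is why replacing exhaustive enumeration with learned sampling pays off as the order grows.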
Related papers
- Tail-Aware Post-Training Quantization for 3D Geometry Models [58.79500829118265]
Post-Training Quantization (PTQ) enables efficient inference without retraining. PTQ fails to transfer effectively to 3D models due to intricate feature distributions and prohibitive calibration overhead. We propose TAPTQ, a Tail-Aware Post-Training Quantization pipeline for 3D geometric learning.
arXiv Detail & Related papers (2026-02-02T07:21:15Z) - Barlow-Swin: Toward a novel siamese-based segmentation architecture using Swin-Transformers [1.1083289076967895]
We present a novel end-to-end lightweight architecture designed specifically for real-time binary medical image segmentation. Our model combines a Swin Transformer-like encoder with a U-Net-like decoder, connected via skip pathways to preserve spatial detail. Unlike existing designs such as Swin Transformer or U-Net, our architecture is significantly shallower and competitively efficient.
arXiv Detail & Related papers (2025-09-08T17:05:53Z) - Noise Hypernetworks: Amortizing Test-Time Compute in Diffusion Models [57.49136894315871]
The new paradigm of test-time scaling has yielded remarkable breakthroughs in reasoning models and generative vision models. We propose one solution to the problem of integrating test-time scaling knowledge into a model during post-training. We replace reward-guided test-time noise optimization in diffusion models with a Noise Hypernetwork that modulates initial input noise.
arXiv Detail & Related papers (2025-08-13T17:33:37Z) - Enhancing Path Planning Performance through Image Representation Learning of High-Dimensional Configuration Spaces [0.4143603294943439]
We present a novel method for accelerating path-planning tasks in unknown scenes with obstacles. We approximate the distribution of waypoints for a collision-free path using the Rapidly-exploring Random Tree algorithm. Our experiments demonstrate promising results in accelerating path-planning tasks under critical time constraints.
arXiv Detail & Related papers (2025-01-11T21:14:52Z) - Towards Generative Ray Path Sampling for Faster Point-to-Point Ray Tracing [5.567300300236187]
We propose a Machine Learning-aided Ray Tracing approach to efficiently sample potential ray paths. Our model learns to prioritize potentially valid paths among all possible paths and scales linearly with scene complexity.
arXiv Detail & Related papers (2024-10-31T09:42:03Z) - Exploring Dynamic Transformer for Efficient Object Tracking [58.120191254379854]
We propose DyTrack, a dynamic transformer framework for efficient tracking. DyTrack automatically learns to configure proper reasoning routes for various inputs, gaining better utilization of the available computational budget. Experiments on multiple benchmarks demonstrate that DyTrack achieves promising speed-precision trade-offs with only a single model.
arXiv Detail & Related papers (2024-03-26T12:31:58Z) - Parameter-efficient Tuning of Large-scale Multimodal Foundation Model [68.24510810095802]
We propose Aurora, a graceful prompt framework for cross-modal transfer, to overcome these challenges.
Considering the redundancy in existing architectures, we first utilize the mode approximation to generate 0.1M trainable parameters to implement the multimodal prompt tuning.
A thorough evaluation on six cross-modal benchmarks shows that it not only outperforms the state-of-the-art but even outperforms the full fine-tuning approach.
arXiv Detail & Related papers (2023-05-15T06:40:56Z) - A Fast Post-Training Pruning Framework for Transformers [74.59556951906468]
Pruning is an effective way to reduce the huge inference cost of large Transformer models.
Prior work on model pruning requires retraining the model.
We propose a fast post-training pruning framework for Transformers that does not require any retraining.
arXiv Detail & Related papers (2022-03-29T07:41:11Z) - Pyramid Correlation based Deep Hough Voting for Visual Object Tracking [16.080776515556686]
We introduce a voting-based classification-only tracking algorithm named Pyramid Correlation based Deep Hough Voting (PCDHV for short).
Specifically, we innovatively construct a Pyramid Correlation module to equip the embedded feature with fine-grained local structures and global spatial contexts.
The elaborately designed Deep Hough Voting module then takes over, integrating long-range dependencies of pixels to perceive corners.
arXiv Detail & Related papers (2021-10-15T10:37:00Z) - Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search [60.965024145243596]
One-shot weight sharing methods have recently drawn great attention in neural architecture search due to their high efficiency and competitive performance.
To alleviate this problem, we present a simple yet effective architecture distillation method.
We introduce the concept of prioritized path, which refers to the architecture candidates exhibiting superior performance during training.
Since the prioritized paths are changed on the fly depending on their performance and complexity, the final obtained paths are the cream of the crop.
arXiv Detail & Related papers (2020-10-29T17:55:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.