Investigating Map-Based Path Loss Models: A Study of Feature Representations in Convolutional Neural Networks
- URL: http://arxiv.org/abs/2501.07534v1
- Date: Mon, 13 Jan 2025 18:15:01 GMT
- Title: Investigating Map-Based Path Loss Models: A Study of Feature Representations in Convolutional Neural Networks
- Authors: Ryan G. Dempsey, Jonathan Ethier, Halim Yanikomeroglu
- Abstract summary: We investigate different methods of representing scalar features in convolutional neural networks.
We find that representing scalar features as image channels results in the strongest generalization.
- Score: 20.62701088477552
- Abstract: Path loss prediction is a beneficial tool for efficient use of the radio frequency spectrum. Building on prior research on high-resolution map-based path loss models, this paper studies convolutional neural network input representations in more detail. We investigate different methods of representing scalar features in convolutional neural networks. Specifically, we compare using frequency and distance as input channels to convolutional layers or as scalar inputs to regression layers. We assess model performance using three different feature configurations and find that representing scalar features as image channels results in the strongest generalization.
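The comparison the abstract describes, scalar features as image channels versus scalar inputs to the regression layers, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function names, shapes, and example values (a 64x64 two-channel map patch, a 128-dimensional CNN feature vector, frequency 3.5, distance 120.0) are all illustrative assumptions.

```python
import numpy as np

def scalars_as_channels(map_patch, freq, dist):
    """Broadcast each scalar into a constant-valued image channel and
    stack it alongside the map channels, so the scalars enter the
    convolutional layers as extra input channels."""
    h, w, _ = map_patch.shape
    freq_ch = np.full((h, w, 1), freq)
    dist_ch = np.full((h, w, 1), dist)
    return np.concatenate([map_patch, freq_ch, dist_ch], axis=-1)

def scalars_as_regression_inputs(cnn_features, freq, dist):
    """Append the scalars to the flattened CNN feature vector, so they
    enter the model only at the fully connected regression layers."""
    return np.concatenate([cnn_features, [freq, dist]])

# Illustrative shapes: a 64x64 map with 2 channels, 128-dim CNN features.
patch = np.zeros((64, 64, 2))
x_img = scalars_as_channels(patch, freq=3.5, dist=120.0)
x_reg = scalars_as_regression_inputs(np.zeros(128), freq=3.5, dist=120.0)
print(x_img.shape, x_reg.shape)  # (64, 64, 4) (130,)
```

In the channel representation the convolutional filters can interact with frequency and distance spatially at every layer, which is consistent with the paper's finding that this form generalizes best.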
Related papers
- WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Transformer-Based Neural Surrogate for Link-Level Path Loss Prediction from Variable-Sized Maps [11.327456466796681]
Estimating path loss for a transmitter-receiver location is key to many use-cases including network planning and handover.
We present a transformer-based neural network architecture that enables predicting link-level properties from maps of various dimensions and from sparse measurements.
arXiv Detail & Related papers (2023-10-06T20:17:40Z)
- Feature Gradient Flow for Interpreting Deep Neural Networks in Head and Neck Cancer Prediction [2.9477900773805032]
This paper introduces feature gradient flow, a new technique for interpreting deep learning models in terms of features that are understandable to humans.
We measure the agreement of interpretable features with the gradient flow of a model.
We develop a technique for training neural networks to be more interpretable by adding a regularization term to the loss function.
arXiv Detail & Related papers (2023-07-24T18:25:59Z)
- Anisotropic Multi-Scale Graph Convolutional Network for Dense Shape Correspondence [3.45989531033125]
This paper studies 3D dense shape correspondence, a key shape analysis application in computer vision and graphics.
We introduce a novel hybrid geometric deep learning-based model that learns geometrically meaningful and discretization-independent features.
The resulting correspondence maps show state-of-the-art performance on the benchmark datasets.
arXiv Detail & Related papers (2022-10-17T22:40:50Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z)
- Adaptive Exploitation of Pre-trained Deep Convolutional Neural Networks for Robust Visual Tracking [14.627458410954628]
This paper provides a comprehensive analysis of four commonly used CNN models to determine the best feature maps of each model.
With the aid of analysis results as attribute dictionaries, adaptive exploitation of deep features is proposed to improve the accuracy and robustness of visual trackers.
arXiv Detail & Related papers (2020-08-29T17:09:43Z) - Beyond Dropout: Feature Map Distortion to Regularize Deep Neural
Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural network with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.