An empirical study on using CNNs for fast radio signal prediction
- URL: http://arxiv.org/abs/2006.09245v3
- Date: Mon, 20 Sep 2021 17:10:55 GMT
- Title: An empirical study on using CNNs for fast radio signal prediction
- Authors: Ozan Ozyegen and Sanaz Mohammadjafari and Karim El mokhtari and
Mucahit Cevik and Jonathan Ethier and Ayse Basar
- Abstract summary: We consider a dataset that consists of radio frequency power values for five different regions with four different frame dimensions.
We compare deep learning-based prediction models including RadioUNET and four different variations of the UNET model for the power prediction task.
Our detailed numerical analysis shows that the deep learning models are effective for power prediction and generalize well to new regions.
- Score: 0.39146761527401425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate radio frequency power prediction in a geographic region is a
computationally expensive part of finding the optimal transmitter location
using ray tracing software. We empirically analyze the viability of deep
learning models to speed up this process. Specifically, deep learning methods
such as CNNs and the UNET architecture are typically used for segmentation,
but they can also be employed for power prediction tasks. We consider a
dataset that consists of radio frequency power values for five different
regions with four different frame dimensions. We compare deep learning-based
prediction models, including RadioUNET and four different variations of the
UNET model, for the power prediction task. More complex UNET variations
improve performance on higher-resolution frames such as 256x256, whereas on
lower resolutions the same models overfit and simpler models perform better.
Our detailed numerical analysis shows that the deep learning models are
effective for power prediction and generalize well to new regions.
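To make the setup concrete, the sketch below shows a minimal UNET-style power-prediction model in PyTorch. This is an illustration of the general approach rather than the authors' exact architecture: the single-channel region frame as input, the channel widths, the network depth, and the use of a 256x256 frame are assumptions made here for the example.

```python
# Minimal UNET-style sketch for radio power prediction (assumed setup, not the
# paper's exact model): input is a single-channel map of the region and the
# output is a same-sized map of predicted power values.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the standard UNET building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, kernel_size=1)  # per-pixel power regression

    def forward(self, x):
        e1 = self.enc1(x)                                     # skip connection 1
        e2 = self.enc2(self.pool(e1))                         # skip connection 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example: predict a 256x256 power map from a 256x256 region frame and train
# against a placeholder target (in practice, the ray-tracing ground truth).
model = SmallUNet()
frame = torch.randn(1, 1, 256, 256)
power_map = model(frame)                                      # shape: (1, 1, 256, 256)
loss = nn.MSELoss()(power_map, torch.zeros_like(power_map))
loss.backward()
```

Once trained, such a model can stand in for the expensive ray-tracing step when evaluating candidate transmitter locations, which is the speed-up the paper targets; the RadioUNET and deeper UNET variants compared in the paper differ from this minimal version mainly in depth and complexity.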
Related papers
- A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
arXiv Detail & Related papers (2024-02-02T01:41:38Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Data-Driven Target Localization Using Adaptive Radar Processing and Convolutional Neural Networks [18.50309014013637]
This paper presents a data-driven approach to improve radar target localization post adaptive radar detection.
From the radar returns, we produce heatmap tensors, in range, azimuth [and Doppler], of the normalized adaptive matched filter (NAMF) test statistic.
We then train a regression convolutional neural network (CNN) to estimate target locations from these heatmap tensors.
arXiv Detail & Related papers (2022-09-07T02:23:40Z)
- Exploration of Various Deep Learning Models for Increased Accuracy in Automatic Polyp Detection [62.997667081978825]
This paper explores deep learning models and algorithms that result in the highest accuracy in detecting polyps in colonoscopy images.
Previous studies implemented deep learning using convolutional neural networks (CNNs).
arXiv Detail & Related papers (2022-03-04T04:03:41Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- Efficient deep learning models for land cover image classification [0.29748898344267777]
This work experiments with the BigEarthNet dataset for land use land cover (LULC) image classification.
We benchmark different state-of-the-art models, including Convolutional Neural Networks, Multi-Layer Perceptrons, Visual Transformers, EfficientNets and Wide Residual Networks (WRN).
Our proposed lightweight model has an order of magnitude fewer trainable parameters, achieves a 4.5% higher averaged F-score across all 19 LULC classes, and trains two times faster than the state-of-the-art ResNet50 model that we use as a baseline.
arXiv Detail & Related papers (2021-11-18T00:03:14Z)
- Cellular Network Radio Propagation Modeling with Deep Convolutional Neural Networks [7.890819981813062]
We present a novel method to model radio propagation using deep convolutional neural networks.
We also lay down the framework for data-driven modeling of radio propagation.
arXiv Detail & Related papers (2021-10-05T07:20:48Z)
- Greedy Network Enlarging [53.319011626986004]
We propose a greedy network enlarging method based on the reallocation of computations.
By modifying the computations at different stages step by step, the enlarged network is equipped with an optimal allocation and utilization of MACs.
With application of our method on GhostNet, we achieve state-of-the-art 80.9% and 84.3% ImageNet top-1 accuracies.
arXiv Detail & Related papers (2021-07-31T08:36:30Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- DEEPF0: End-To-End Fundamental Frequency Estimation for Music and Speech Signals [11.939409227407769]
We propose a novel pitch estimation technique called DeepF0.
It leverages the available annotated data to directly learn from the raw audio in a data-driven manner.
arXiv Detail & Related papers (2021-02-11T23:11:22Z)
- Inferring Convolutional Neural Networks' accuracies from their architectural characterizations [0.0]
We study the relationships between a CNN's architecture and its performance.
We show that the attributes can be predictive of the networks' performance in two specific computer vision-based physics problems.
We use machine learning models to predict whether a network can perform better than a certain threshold accuracy before training.
arXiv Detail & Related papers (2020-01-07T16:41:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.