Efficient Training of Deep Classifiers for Wireless Source
Identification using Test SNR Estimates
- URL: http://arxiv.org/abs/1912.11896v2
- Date: Sun, 19 Apr 2020 01:56:08 GMT
- Title: Efficient Training of Deep Classifiers for Wireless Source
Identification using Test SNR Estimates
- Authors: Xingchen Wang, Shengtai Ju, Xiwen Zhang, Sharan Ramjee, Aly El Gamal
- Abstract summary: We study efficient deep learning training algorithms that process wireless signals if a test Signal to Noise Ratio (SNR) estimate is available.
For benchmarking, we rely on recent literature on testing deep learning algorithms against two well-known datasets.
An erroneous test SNR estimate with a small positive offset is better for training than another having the same error magnitude with a negative offset.
- Score: 4.44483539967295
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study efficient deep learning training algorithms that process received
wireless signals, if a test Signal to Noise Ratio (SNR) estimate is available.
We focus on two tasks that facilitate source identification: (1) identifying the
modulation type, and (2) identifying the wireless technology and channel in the 2.4
GHz ISM band. For benchmarking, we rely on recent literature on testing deep
learning algorithms against two well-known datasets. We first demonstrate that
using training data corresponding only to the test SNR value leads to dramatic
reductions in training time while incurring a small loss in average test
accuracy, as it improves the accuracy for low SNR values. Further, we show that
an erroneous test SNR estimate with a small positive offset is better for
training than another having the same error magnitude with a negative offset.
Secondly, we introduce a greedy training SNR Boosting algorithm that leads to
uniform improvement in accuracy across all tested SNR values, while using a
small subset of training SNR values at each test SNR. Finally, we demonstrate
the potential of bootstrap aggregating (Bagging) based on training SNR values
to improve generalization at low test SNR values with scarcity of training
data.
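The SNR-matched training and the greedy training SNR Boosting procedure described above can be sketched roughly as follows. This is a minimal illustration under assumptions, not the paper's implementation: the training data is assumed to be indexed by training SNR value, and train_and_evaluate is a hypothetical helper that trains a fresh classifier and returns its validation accuracy at the given test SNR.

```python
def snr_matched_training_set(train_by_snr, test_snr_estimate):
    """Keep only examples whose training SNR matches the test SNR estimate.

    `train_by_snr` is assumed to map each training SNR value to a list of
    (signal, label) examples; this indexing is illustrative, not the paper's code.
    """
    return list(train_by_snr.get(test_snr_estimate, []))


def greedy_snr_boosting(train_by_snr, test_snr, train_and_evaluate, max_snrs=3):
    """Greedily grow a small subset of training SNR values for one test SNR.

    `train_and_evaluate(train_set, test_snr)` is a hypothetical helper that
    trains a classifier on `train_set` and returns validation accuracy at `test_snr`.
    """
    selected, best_acc = [], float("-inf")
    candidates = set(train_by_snr)
    while candidates and len(selected) < max_snrs:
        # Evaluate the gain from adding each remaining training SNR value.
        scores = {}
        for snr in candidates:
            trial_set = [ex for s in selected + [snr] for ex in train_by_snr[s]]
            scores[snr] = train_and_evaluate(trial_set, test_snr)
        best_snr = max(scores, key=scores.get)
        if scores[best_snr] <= best_acc:
            break  # no remaining SNR value improves accuracy; stop greedily
        selected.append(best_snr)
        best_acc = scores[best_snr]
        candidates.remove(best_snr)
    return selected, best_acc
```

Consistent with the abstract's offset finding, when the test SNR estimate is uncertain it appears safer to round it up rather than down before selecting the matched training data.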
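The SNR-based bagging idea can be sketched in the same hypothetical setting: each base model is trained on a bootstrap sample drawn over training SNR values, and predictions are averaged at test time. build_and_train and predict_proba are placeholder names, and this is one plausible reading of the abstract rather than the paper's exact procedure.

```python
import random

import numpy as np


def snr_bagging(train_by_snr, build_and_train, n_models=5, seed=0):
    """Bootstrap aggregating over training SNR values (illustrative sketch).

    `build_and_train(train_set)` is a hypothetical helper that returns a
    trained model exposing `predict_proba(signals)`.
    """
    rng = random.Random(seed)
    snr_values = sorted(train_by_snr)
    models = []
    for _ in range(n_models):
        # Resample SNR values with replacement, then pool their examples.
        sample = rng.choices(snr_values, k=len(snr_values))
        train_set = [ex for s in sample for ex in train_by_snr[s]]
        models.append(build_and_train(train_set))
    return models


def bagged_predict(models, signals):
    """Average class probabilities across the bagged models and take the argmax."""
    avg_probs = np.mean([m.predict_proba(signals) for m in models], axis=0)
    return avg_probs.argmax(axis=1)
```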
Related papers
- Neural Priming for Sample-Efficient Adaptation [92.14357804106787]
We propose Neural Priming, a technique for adapting large pretrained models to distribution shifts and downstream tasks.
Neural Priming can be performed at test time, even for pretraining datasets as large as LAION-2B.
arXiv Detail & Related papers (2023-06-16T21:53:16Z)
- A Comparative Study of Deep Learning and Iterative Algorithms for Joint Channel Estimation and Signal Detection in OFDM Systems [11.190815358585137]
Joint channel estimation and signal detection is crucial in orthogonal frequency division multiplexing (OFDM) systems.
Traditional algorithms perform poorly in low signal-to-noise ratio (SNR) scenarios.
Deep learning (DL) methods have been investigated, but concerns regarding computational expense and lack of validation in low-SNR settings remain.
arXiv Detail & Related papers (2023-03-07T06:34:04Z)
- Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic implicit neural representation (INR) framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based subnetworks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images or 3D scenes up to 2 orders of magnitude faster while using up to 98% less input data.
arXiv Detail & Related papers (2022-07-08T05:07:19Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Training Strategies for Deep Learning Gravitational-Wave Searches [43.55994393060723]
We restrict our analysis to signals from non-spinning binary black holes.
We systematically test different strategies by which training data is presented to the networks.
We find that the deep learning algorithms can generalize low signal-to-noise ratio (SNR) signals to high SNR ones but not vice versa.
arXiv Detail & Related papers (2021-06-07T16:04:29Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
- Identifying Training Stop Point with Noisy Labeled Data [0.0]
We develop an algorithm to find a training stop point (TSP) at or close to the maximum obtainable test accuracy (MOTA).
We validated the robustness of our algorithm (AutoTSP) through several experiments on CIFAR-10, CIFAR-100, and a real-world noisy dataset.
arXiv Detail & Related papers (2020-12-24T20:07:30Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- RNN Training along Locally Optimal Trajectories via Frank-Wolfe Algorithm [50.76576946099215]
We propose a novel and efficient training method for RNNs by iteratively seeking a local minimum on the loss surface within a small region.
We develop a novel RNN training method whose overall training cost, surprisingly, is empirically observed to be lower than that of back-propagation, despite the additional cost of the iterative local search.
arXiv Detail & Related papers (2020-10-12T01:59:18Z)
- Training Sparse Neural Networks using Compressed Sensing [13.84396596420605]
We develop and test a novel method based on compressed sensing which combines the pruning and training into a single step.
Specifically, we utilize an adaptively weighted $\ell_1$ penalty on the weights during training, which we combine with a generalization of the regularized dual averaging (RDA) algorithm in order to train sparse neural networks.
arXiv Detail & Related papers (2020-08-21T19:35:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.