Ensemble Wrapper Subsampling for Deep Modulation Classification
- URL: http://arxiv.org/abs/2005.04586v1
- Date: Sun, 10 May 2020 06:11:13 GMT
- Title: Ensemble Wrapper Subsampling for Deep Modulation Classification
- Authors: Sharan Ramjee, Shengtai Ju, Diyu Yang, Xiaoyu Liu, Aly El Gamal,
Yonina C. Eldar
- Abstract summary: Subsampling of received wireless signals is important for relaxing hardware requirements as well as the computational cost of signal processing algorithms.
We propose a subsampling technique to facilitate the use of deep learning for automatic modulation classification in wireless communication systems.
- Score: 70.91089216571035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Subsampling of received wireless signals is important for relaxing hardware
requirements as well as the computational cost of signal processing algorithms
that rely on the output samples. We propose a subsampling technique to
facilitate the use of deep learning for automatic modulation classification in
wireless communication systems. Unlike traditional approaches that rely on
pre-designed strategies that are solely based on expert knowledge, the proposed
data-driven subsampling strategy employs deep neural network architectures to
simulate the effect of removing candidate combinations of samples from each
training input vector, in a manner inspired by how wrapper feature selection
models work. The subsampled data is then processed by another deep learning
classifier that recognizes each of the considered 10 modulation types. We show
that the proposed subsampling strategy not only introduces a drastic reduction
in the classifier training time, but can also improve the classification
accuracy beyond the levels previously reached for the considered dataset. An
important feature herein is exploiting the transferability property of deep
neural networks to avoid retraining the wrapper models, obtaining superior
performance through an ensemble of wrappers beyond what is possible by relying
solely on any one of them.
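A minimal occlusion-style sketch of the wrapper mechanism, assuming a toy dataset and a small scikit-learn model in place of the paper's deep wrapper architectures (all names and sizes are illustrative, and single sample indices are scored rather than candidate combinations, for brevity):

```python
# Hypothetical sketch of wrapper-style data-driven subsampling with an
# ensemble of differently seeded models; not the paper's exact method.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, length, keep = 600, 32, 8                 # toy signal length and sample budget
X = rng.normal(size=(n, length))
y = (X[:, :4].sum(axis=1) > 0).astype(int)   # toy 2-class labels

def wrapper_ranking(X, y, seed):
    """Score each sample index by the accuracy drop when it is zeroed out."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                        random_state=seed).fit(X, y)
    base = clf.score(X, y)
    drops = np.empty(X.shape[1])
    for i in range(X.shape[1]):
        Xi = X.copy()
        Xi[:, i] = 0.0                       # simulate removing sample i
        drops[i] = base - clf.score(Xi, y)
    return drops                             # larger drop = more important sample

# Ensemble of wrappers: average importance over several models, echoing the
# abstract's point that rankings transfer across wrappers without retraining.
importance = np.mean([wrapper_ranking(X, y, s) for s in range(3)], axis=0)
selected = np.sort(np.argsort(importance)[-keep:])
X_sub = X[:, selected]                       # subsampled data for the final classifier
print("kept sample indices:", selected)
```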
Related papers
- Augmenting Radio Signals with Wavelet Transform for Deep Learning-Based Modulation Recognition [6.793444383222236]
Deep learning for radio modulation recognition has become prevalent in recent years.
In real-world scenarios, it may not be feasible to gather sufficient training data in advance.
Data augmentation is a method used to increase the diversity and quantity of the training dataset.
arXiv Detail & Related papers (2023-11-07T06:55:39Z)
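As a loose illustration of the wavelet-augmentation idea above, a sketch using PyWavelets that jitters detail coefficients and reconstructs; the paper's exact transform and perturbation scheme may differ:

```python
# Hedged sketch of wavelet-domain augmentation for a radio signal channel.
import numpy as np
import pywt

rng = np.random.default_rng(1)
signal = rng.normal(size=128)                # stand-in for one I or Q channel

def wavelet_augment(x, wavelet="db4", level=3, scale=0.1):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Perturb only the detail coefficients; keep the approximation intact.
    aug = [coeffs[0]] + [c + scale * rng.normal(size=c.shape) for c in coeffs[1:]]
    return pywt.waverec(aug, wavelet)

augmented = wavelet_augment(signal)
print(augmented.shape)                       # same length as the input signal
```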
- Iterative self-transfer learning: A general methodology for response time-history prediction based on small dataset [0.0]
An iterative self-transfer learning method for training neural networks on small datasets is proposed in this study.
The results show that the proposed method can improve model performance by nearly an order of magnitude on small datasets.
arXiv Detail & Related papers (2023-06-14T18:48:04Z)
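One way to picture the iterative self-transfer loop, as a hedged sketch only (the paper's methodology for response time-history prediction is more elaborate): each training round warm-starts from the previous round's weights.

```python
# Illustrative only: warm-started rounds on a deliberately small dataset.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 10))                # small dataset
y = X @ rng.normal(size=10) + 0.05 * rng.normal(size=80)

model = MLPRegressor(hidden_layer_sizes=(32,), warm_start=True, max_iter=200)
for round_ in range(4):
    model.fit(X, y)                          # weights carry over between rounds
    print(f"round {round_}: R^2 = {model.score(X, y):.3f}")
```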
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining.
Our method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
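A bare-bones sketch of the intra-class adaptive idea: estimate each class's own variation and sample synthetic points at that scale (the paper's neighbor correction is omitted; all names are illustrative):

```python
# Per-class variance drives the scale of the synthetic samples.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 16))
y = rng.integers(0, 4, size=200)

def adaptive_augment(X, y, per_class=10):
    X_new, y_new = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        sigma = Xc.std(axis=0)               # intra-class variation estimate
        base = Xc[rng.integers(0, len(Xc), size=per_class)]
        X_new.append(base + sigma * rng.normal(size=(per_class, X.shape[1])))
        y_new.append(np.full(per_class, c))
    return np.vstack(X_new), np.concatenate(y_new)

X_aug, y_aug = adaptive_augment(X, y)
print(X_aug.shape, y_aug.shape)
```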
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
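The base operation AutoSMOTE builds on is SMOTE-style interpolation between minority samples; a minimal sketch of that operation only (the learned RL decision hierarchy is not shown):

```python
# Interpolate between a minority sample and one of its nearest minority neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
X_min = rng.normal(size=(20, 8))             # minority-class samples

def smote_like(X_min, n_new=30, k=5):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)            # idx[:, 0] is the point itself
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = idx[i, rng.integers(1, k + 1)]   # a random true neighbor
        lam = rng.random()
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

print(smote_like(X_min).shape)               # (30, 8) synthetic minority samples
```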
- Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures from a limited number of samples and that generalizes well to a wider audience.
We appeal to more elementary methods, such as the use of random bounds on a signal, to show the power these methods can carry in an online setting.
arXiv Detail & Related papers (2022-06-29T23:22:18Z)
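A small sketch of the random-variance Gaussian noise augmentation named in the title, where each augmented copy draws its own noise scale from a bounded range (the bounds below are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(5)
emg = rng.normal(size=(50, 200))             # 50 windows, 200 samples each

def noisy_copies(X, n_copies=3, sigma_lo=0.01, sigma_hi=0.1):
    copies = []
    for _ in range(n_copies):
        sigma = rng.uniform(sigma_lo, sigma_hi)   # random noise scale per copy
        copies.append(X + sigma * rng.normal(size=X.shape))
    return np.concatenate(copies, axis=0)

print(noisy_copies(emg).shape)               # (150, 200) augmented windows
```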
- Gradient-based Adversarial Deep Modulation Classification with Data-driven Subsampling [6.447052211404121]
Deep learning techniques have been shown to deliver superior performance to conventional model-based strategies.
Deep learning techniques have also been shown to be vulnerable to gradient-based adversarial attacks.
We consider a data-driven subsampling setting, where several recently introduced deep-learning-based algorithms are employed.
We evaluate the best strategies under various assumptions about the knowledge of the other party's strategy.
arXiv Detail & Related papers (2021-04-03T22:28:04Z)
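The canonical gradient-based attack in this setting is the fast gradient sign method (FGSM); an illustrative NumPy version on a linear softmax classifier, where the cited work instead attacks deep classifiers under subsampling and the weights here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(6)
W, b = rng.normal(size=(10, 128)), rng.normal(size=10)   # 10 modulation classes
x, y = rng.normal(size=128), 3                           # one input, true label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p = softmax(W @ x + b)
onehot = np.eye(10)[y]
grad_x = W.T @ (p - onehot)          # d(cross-entropy)/dx for softmax regression
x_adv = x + 0.1 * np.sign(grad_x)    # FGSM step with epsilon = 0.1

print("true-class prob before/after:",
      softmax(W @ x + b)[y], softmax(W @ x_adv + b)[y])
```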
- An evidential classifier based on Dempster-Shafer theory and deep learning [6.230751621285322]
We propose a new classification system based on Dempster-Shafer (DS) theory and a convolutional neural network (CNN) architecture for set-valued classification.
Experiments on image recognition, signal processing, and semantic-relationship classification tasks demonstrate that the proposed combination of deep CNN, DS layer, and expected utility layer makes it possible to improve classification accuracy.
arXiv Detail & Related papers (2021-03-25T01:29:05Z)
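The operation underlying a DS layer is Dempster's rule of combination; a toy implementation over a small frame of hypotheses (the modulation names and masses are illustrative only):

```python
# Dempster's rule: combine two mass functions, renormalizing away conflict.
from itertools import product

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to masses summing to 1."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b                # mass assigned to disjoint sets
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

m1 = {frozenset({"QPSK"}): 0.6, frozenset({"QPSK", "8PSK"}): 0.4}
m2 = {frozenset({"8PSK"}): 0.3, frozenset({"QPSK", "8PSK"}): 0.7}
print(dempster_combine(m1, m2))
```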
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
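At the core of a mean-teacher setup like the one above is the teacher's exponential-moving-average weight update; a one-line sketch of that update (mask guidance and perturbation-sensitive mining from the paper are omitted, and the parameter dict is a toy stand-in):

```python
import numpy as np

rng = np.random.default_rng(7)
student = {"w": rng.normal(size=(4, 4))}     # toy parameter dict
teacher = {k: v.copy() for k, v in student.items()}

def ema_update(teacher, student, alpha=0.99):
    for k in teacher:
        teacher[k] = alpha * teacher[k] + (1 - alpha) * student[k]

student["w"] += 0.1                          # pretend a gradient step happened
ema_update(teacher, student)
print(np.abs(teacher["w"] - student["w"]).max())
```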
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based optimization combined with nonconvexity renders learning susceptible to novel problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
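Fusing two neighboring linear layers reduces to multiplying their weight matrices; a toy check of that identity (exact only without an intermediate nonlinearity, unlike the deep-network setting the paper addresses):

```python
import numpy as np

rng = np.random.default_rng(8)
W1, b1 = rng.normal(size=(16, 32)), rng.normal(size=16)
W2, b2 = rng.normal(size=(8, 16)), rng.normal(size=8)

W_fused = W2 @ W1                 # single layer replacing the stacked pair
b_fused = W2 @ b1 + b2

x = rng.normal(size=32)
assert np.allclose(W2 @ (W1 @ x + b1) + b2, W_fused @ x + b_fused)
print("fused layer reproduces the two-layer map")
```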