Automatic Modulation Classification with Deep Neural Networks
- URL: http://arxiv.org/abs/2301.11773v1
- Date: Fri, 27 Jan 2023 15:16:06 GMT
- Title: Automatic Modulation Classification with Deep Neural Networks
- Authors: Clayton Harper, Mitchell Thornton and Eric Larson
- Abstract summary: We show that a combination of dilated convolutions, statistics pooling, and squeeze-and-excitation units results in the strongest performing classifier.
We further investigate this best performer according to various other criteria, including short signal bursts, common misclassifications, and performance across differing modulation categories and modes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic modulation classification is a desired feature in many modern
software-defined radios. In recent years, a number of convolutional deep
learning architectures have been proposed for automatically classifying the
modulation used on observed signal bursts. However, a comprehensive analysis of
these differing architectures and the importance of each design element has not
been carried out. Thus, it is unclear what tradeoffs the differing designs of
these convolutional neural networks might have. In this research, we
investigate numerous architectures for automatic modulation classification and
perform a comprehensive ablation study to investigate the impacts of varying
hyperparameters and design elements on automatic modulation classification
performance. We show that a new state of the art in performance can be achieved
using a subset of the studied design elements. In particular, we show that a
combination of dilated convolutions, statistics pooling, and
squeeze-and-excitation units results in the strongest performing classifier. We
further investigate this best performer according to various other criteria,
including short signal bursts, common misclassifications, and performance
across differing modulation categories and modes.
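The three design elements the abstract names can be combined in a few dozen lines. The following PyTorch sketch is illustrative only: the channel widths, the dilation schedule, and the class count (24, as in the common RadioML 2018.01A benchmark) are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: gate each channel by a learned function
    of its global (time-averaged) summary."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))             # squeeze over time, excite per channel
        return x * w.unsqueeze(-1)

class StatsPool(nn.Module):
    """Statistics pooling: concatenate per-channel mean and std over time,
    giving a fixed-size vector for any burst length."""
    def forward(self, x):                      # x: (batch, channels, time)
        return torch.cat([x.mean(dim=2), x.std(dim=2)], dim=1)

class AMCNet(nn.Module):
    def __init__(self, num_classes=24, width=64):   # assumed, not the paper's exact values
        super().__init__()
        layers, in_ch = [], 2                  # two input channels: I and Q
        for dilation in (1, 2, 4, 8):          # illustrative dilation schedule
            layers += [
                nn.Conv1d(in_ch, width, kernel_size=3,
                          dilation=dilation, padding=dilation),
                nn.BatchNorm1d(width), nn.ReLU(), SEBlock(width),
            ]
            in_ch = width
        self.features = nn.Sequential(*layers)
        self.pool = StatsPool()
        self.classifier = nn.Linear(2 * width, num_classes)

    def forward(self, iq):                     # iq: (batch, 2, num_samples)
        return self.classifier(self.pool(self.features(iq)))

logits = AMCNet()(torch.randn(4, 2, 1024))     # any burst length works
```

Statistics pooling is what lets a single trained model also handle the short signal bursts the paper examines: the mean/std summary has a fixed size regardless of input length.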
Related papers
- Toward Multi-class Anomaly Detection: Exploring Class-aware Unified Model against Inter-class Interference (arXiv, 2024-03-21)
We introduce a Multi-class Implicit Neural representation Transformer for unified Anomaly Detection (MINT-AD).
By learning the multi-class distributions, the model generates class-aware query embeddings for the transformer decoder.
MINT-AD can project category and position information into a feature embedding space, further supervised by classification and prior probability loss functions.
- Sparse Modular Activation for Efficient Sequence Modeling (arXiv, 2023-06-19)
Recent models combining Linear State Space Models with self-attention mechanisms have demonstrated impressive results across a range of sequence modeling tasks.
Current approaches apply attention modules statically and uniformly to all elements in the input sequences, leading to sub-optimal quality-efficiency trade-offs.
We introduce Sparse Modular Activation (SMA), a general mechanism enabling neural networks to sparsely activate sub-modules for sequence elements in a differentiable manner.
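The summary above does not spell out SMA's gating mechanism. As a hypothetical analogue, per-element sub-module activation can be made differentiable with a straight-through Gumbel-softmax gate; this is a minimal sketch under that assumption, not the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparselyGatedModule(nn.Module):
    """Illustrative sparse activation: each sequence element decides,
    differentiably, whether to pass through a sub-module."""
    def __init__(self, dim, module):
        super().__init__()
        self.module = module                   # e.g. an attention or FFN block
        self.gate = nn.Linear(dim, 2)          # logits for {skip, activate}

    def forward(self, x):                      # x: (batch, seq, dim)
        # Hard 0/1 gates with gradients via the Gumbel-softmax estimator.
        g = F.gumbel_softmax(self.gate(x), tau=1.0, hard=True)[..., 1:]
        # Note: this naive sketch still evaluates the sub-module everywhere;
        # real compute savings require routing only the gated elements.
        return x + g * self.module(x)

block = SparselyGatedModule(64, nn.Sequential(nn.Linear(64, 64), nn.GELU()))
y = block(torch.randn(2, 16, 64))
```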
- Demystify Transformers & Convolutions in Modern Image Deep Networks (arXiv, 2022-11-10)
This paper aims to identify the real gains of popular convolution and attention operators through a detailed study.
We find that the key difference among these feature transformation modules, such as attention or convolution, lies in their spatial feature aggregation approach.
Our experiments on various tasks and an analysis of inductive bias show a significant performance boost due to advanced network-level and block-level designs.
- Assessing the Impact of Attention and Self-Attention Mechanisms on the Classification of Skin Lesions (arXiv, 2021-12-23)
We focus on two forms of attention mechanisms: attention modules and self-attention.
Attention modules are used to reweight the features of each layer input tensor.
Self-attention, originally proposed in the area of natural language processing, makes it possible to relate all the items in an input sequence.
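A minimal sketch contrasting the two mechanisms in this entry; the channel-gating MLP and the parameter-free self-attention core are illustrative simplifications, not the paper's models.

```python
import torch
import torch.nn as nn

def channel_attention(x, mlp):
    """Attention module: reweight each channel of the layer input tensor
    with a gate computed from its global average (cf. squeeze-and-excitation)."""
    # x: (batch, channels, height, width)
    w = torch.sigmoid(mlp(x.mean(dim=(2, 3))))        # (batch, channels)
    return x * w[:, :, None, None]

def self_attention(x, d):
    """Self-attention core: every item attends to every other item.
    Query/key/value projections are omitted for brevity."""
    # x: (batch, seq, d)
    scores = x @ x.transpose(1, 2) / d ** 0.5         # pairwise similarities
    return torch.softmax(scores, dim=-1) @ x          # relate all items

mlp = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 32))
img = channel_attention(torch.randn(2, 32, 16, 16), mlp)
seq = self_attention(torch.randn(2, 10, 32), 32)
```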
- Redefining Neural Architecture Search of Heterogeneous Multi-Network Models by Characterizing Variation Operators and Model Components (arXiv, 2021-06-16)
We investigate the effect of different variation operators in a complex domain, that of multi-network heterogeneous neural models.
We characterize both the variation operators, according to their effect on the complexity and performance of the model, and the models themselves, using diverse metrics that estimate the quality of their component parts.
- A Novel Automatic Modulation Classification Scheme Based on Multi-Scale Networks (arXiv, 2021-05-31)
This paper proposes a novel automatic modulation classification scheme based on a multi-scale network.
A loss function combining center loss and cross-entropy loss is used to learn features that are both discriminative and separable (see the sketch below).
The proposed scheme achieves higher classification accuracy than the benchmark schemes.
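The combined objective in this entry is standard; a minimal sketch, where the feature dimension, class count, and weighting lam are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """Center loss: pull each feature toward its class's learned center,
    encouraging compact intra-class clusters."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):          # feats: (batch, feat_dim)
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

# Cross-entropy separates the classes; center loss compacts each class.
center_loss = CenterLoss(num_classes=11, feat_dim=128)   # illustrative sizes
feats = torch.randn(8, 128)
labels = torch.randint(0, 11, (8,))
logits = nn.Linear(128, 11)(feats)
lam = 0.01                                    # illustrative weighting
loss = F.cross_entropy(logits, labels) + lam * center_loss(feats, labels)
```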
- Polynomial Networks in Deep Classifiers (arXiv, 2021-04-16)
We cast the study of deep neural networks under a unifying framework.
Our framework provides insights on the inductive biases of each model.
The efficacy of the proposed models is evaluated on standard image and audio classification benchmarks.
- Enhancing efficiency of object recognition in different categorization levels by reinforcement learning in modular spiking neural networks (arXiv, 2021-02-10)
We propose a computational model for object recognition in different categorization levels.
A spiking neural network equipped with the reinforcement learning rule is used as a module at each categorization level.
According to the required information at each categorization level, the relevant band-pass filtered images are utilized.
- High-Capacity Complex Convolutional Neural Networks For I/Q Modulation Classification (arXiv, 2020-10-21)
We claim state-of-the-art performance by enabling high-capacity architectures containing residual and/or dense connections to compute complex-valued convolutions.
We show statistically significant improvements in all networks with complex convolutions for I/Q modulation classification.
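Complex-valued convolutions are typically realized with two real convolutions, following (a + ib)(w_r + i*w_i) = (a*w_r - b*w_i) + i(a*w_i + b*w_r). A minimal sketch of that construction; the layer sizes are illustrative and this is not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Complex convolution built from two real convolutions holding the
    real and imaginary parts of the filter weights."""
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)  # w_r
        self.conv_i = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)  # w_i

    def forward(self, real, imag):             # I and Q components
        return (self.conv_r(real) - self.conv_i(imag),   # real part
                self.conv_r(imag) + self.conv_i(real))   # imaginary part

i, q = torch.randn(4, 1, 1024), torch.randn(4, 1, 1024)
re, im = ComplexConv1d(1, 16, kernel_size=7, padding=3)(i, q)
```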
- Ensemble Wrapper Subsampling for Deep Modulation Classification (arXiv, 2020-05-10)
Subsampling of received wireless signals is important for relaxing hardware requirements as well as reducing the computational cost of signal processing algorithms.
We propose a subsampling technique to facilitate the use of deep learning for automatic modulation classification in wireless communication systems.
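The entry does not detail the wrapper method itself. As a minimal illustration of the idea, uniform decimation of an I/Q burst before classification; keep_ratio is a placeholder, and the paper's learned, ensemble-based sample selection is not reproduced here.

```python
import torch

def subsample_iq(burst, keep_ratio=0.25):
    """Illustrative uniform subsampling: keep an evenly spaced subset of
    I/Q samples to cut sampling and compute cost before classification."""
    # burst: (batch, 2, num_samples)
    n = burst.shape[-1]
    idx = torch.linspace(0, n - 1, int(n * keep_ratio)).long()
    return burst[..., idx]

short = subsample_iq(torch.randn(4, 2, 1024))   # -> (4, 2, 256)
```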
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.