Deep Multi-Scale Representation Learning with Attention for Automatic
Modulation Classification
- URL: http://arxiv.org/abs/2209.03764v1
- Date: Wed, 31 Aug 2022 07:26:09 GMT
- Title: Deep Multi-Scale Representation Learning with Attention for Automatic
Modulation Classification
- Authors: Xiaowei Wu, Shengyun Wei, Yan Zhou
- Abstract summary: We find empirical improvements from using large kernel sizes in deep convolutional neural network based AMC.
We propose a multi-scale feature network with large kernel size and SE mechanism (SE-MSFN) in this paper.
SE-MSFN achieves state-of-the-art classification performance on the public well-known RADIOML 2018.01A dataset.
- Score: 11.32380278232938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently, deep learning methods that stack small convolutional
filters are widely used for automatic modulation classification (AMC). In this
report, we find empirical improvements from using large kernel sizes in deep
convolutional neural network based AMC, which extracts multi-scale features of
the raw signal I/Q sequence data more efficiently. In addition,
Squeeze-and-Excitation (SE) mechanisms can significantly help AMC networks
focus on the more important features of the signal. As a result, we
propose a multi-scale feature network with large kernel size and SE mechanism
(SE-MSFN) in this paper. SE-MSFN achieves state-of-the-art classification
performance on the well-known public RADIOML 2018.01A dataset: an average
classification accuracy of 64.50%, surpassing CLDNN by 1.42%; a maximum
classification accuracy of 98.5%; and an average classification accuracy of
85.53% in the lower SNR range of 0 dB to 10 dB, surpassing CLDNN by 2.85%. In
addition, we verified that ensemble learning can further improve
classification performance. We hope this report provides useful references for
developers and researchers in practical scenarios.
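The abstract above combines two ingredients: parallel convolutional branches with large kernel sizes to extract multi-scale features from the raw I/Q sequence, and an SE mechanism that reweights feature channels by importance. The paper's implementation is not reproduced here; the following NumPy sketch only illustrates those two ideas under assumed shapes, randomly initialized weights, and hypothetical names (`multi_scale_conv`, `se_gate`, the kernel sizes, and the reduction ratio are all illustrative, not taken from SE-MSFN).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multi_scale_conv(x, kernel_sizes, rng):
    """Parallel 1-D convolutions with different (large) kernel sizes.

    Each branch filters the same input sequence; the branch outputs are
    stacked as channels, giving a (num_branches, T) feature map.
    """
    branches = []
    for k in kernel_sizes:
        w = rng.standard_normal(k) / np.sqrt(k)      # random filter of size k
        branches.append(np.convolve(x, w, mode="same"))
    return np.stack(branches)

def se_gate(feats, w_reduce, w_expand):
    """Squeeze-and-Excitation over a (channels, time) feature map.

    Squeeze: global average pooling over time -> per-channel descriptor.
    Excite: two dense layers (ReLU, then sigmoid) -> per-channel gates in (0, 1).
    Scale: channel-wise reweighting of the original feature map.
    """
    squeezed = feats.mean(axis=1)                    # (C,)
    hidden = np.maximum(w_reduce @ squeezed, 0.0)    # (C // r,)
    gates = sigmoid(w_expand @ hidden)               # (C,)
    return feats * gates[:, None]

rng = np.random.default_rng(0)
T = 1024
x = rng.standard_normal(T)                           # one raw signal sequence
feats = multi_scale_conv(x, kernel_sizes=[7, 15, 31, 63], rng=rng)
w_reduce = rng.standard_normal((2, 4)) * 0.1         # reduction ratio r = 2
w_expand = rng.standard_normal((4, 2)) * 0.1
out = se_gate(feats, w_reduce, w_expand)
print(out.shape)                                     # (4, 1024)
```

In a trained network the filters and SE weights would of course be learned, and the larger kernel sizes give the corresponding branches a wider receptive field over the I/Q sequence, which is the multi-scale effect the abstract refers to.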
Related papers
- Systematic Architectural Design of Scale Transformed Attention Condenser
DNNs via Multi-Scale Class Representational Response Similarity Analysis [93.0013343535411]
We propose a novel type of analysis called Multi-Scale Class Representational Response Similarity Analysis (ClassRepSim).
We show that adding STAC modules to ResNet style architectures can result in up to a 1.6% increase in top-1 accuracy.
Results from ClassRepSim analysis can be used to select an effective parameterization of the STAC module resulting in competitive performance.
arXiv Detail & Related papers (2023-06-16T18:29:26Z)
- Nearest Neighbor Zero-Shot Inference [68.56747574377215]
kNN-Prompt is a technique that uses k-nearest neighbor (kNN) retrieval augmentation for zero-shot inference with language models (LMs).
fuzzy verbalizers leverage the sparse kNN distribution for downstream tasks by automatically associating each classification label with a set of natural language tokens.
Experiments show that kNN-Prompt is effective for domain adaptation with no further training, and that the benefits of retrieval increase with the size of the model used for kNN retrieval.
arXiv Detail & Related papers (2022-05-27T07:00:59Z)
- Automatic Machine Learning for Multi-Receiver CNN Technology Classifiers [16.244541005112747]
Convolutional Neural Networks (CNNs) are one of the most studied families of deep learning models for signal classification.
We focus on technology classification based on raw I/Q samples collected from multiple synchronized receivers.
arXiv Detail & Related papers (2022-04-28T23:41:38Z)
- An Efficient End-to-End Deep Neural Network for Interstitial Lung Disease Recognition and Classification [0.5424799109837065]
This paper introduces an end-to-end deep convolutional neural network (CNN) for classifying ILD patterns.
The proposed model comprises four convolutional layers with different kernel sizes and Rectified Linear Unit (ReLU) activation function.
A dataset consisting of 21328 image patches of 128 CT scans with five classes is taken to train and assess the proposed model.
arXiv Detail & Related papers (2022-04-21T06:36:10Z)
- Dynamic Multi-scale Convolution for Dialect Identification [18.132769601922682]
We propose dynamic multi-scale convolution, which consists of dynamic kernel convolution, local multi-scale learning, and global multi-scale pooling.
The proposed architecture significantly outperforms the state-of-the-art system on the AP20-OLR-dialect-task of oriental language recognition.
arXiv Detail & Related papers (2021-08-02T03:37:15Z)
- A SPA-based Manifold Learning Framework for Motor Imagery EEG Data Classification [2.4727719996518487]
This paper proposes a manifold learning framework to classify two types of EEG data from motor imagery (MI) tasks.
For feature extraction, Common Spatial Pattern (CSP) is applied to the preprocessed EEG signals.
In the neighborhoods of the features for classification, the local approximation to the support of the data is obtained, and then the features are assigned to the classes with the closest support.
arXiv Detail & Related papers (2021-07-30T06:18:05Z)
- Convolutional Neural Networks in Multi-Class Classification of Medical Data [0.9137554315375922]
We introduce an ensemble model that consists of both deep learning (CNN) and shallow learning models (Gradient Boosting).
The method achieves an accuracy of 64.93, the highest three-class classification accuracy achieved in this study.
arXiv Detail & Related papers (2020-12-28T02:04:38Z)
- Delving Deep into Label Smoothing [112.24527926373084]
Label smoothing is an effective regularization tool for deep neural networks (DNNs).
We present an Online Label Smoothing (OLS) strategy, which generates soft labels based on the statistics of the model prediction for the target category.
arXiv Detail & Related papers (2020-11-25T08:03:11Z)
- A Two-Stage Approach to Device-Robust Acoustic Scene Classification [63.98724740606457]
A two-stage system based on fully convolutional neural networks (CNNs) is proposed to improve device robustness.
Our results show that the proposed ASC system attains a state-of-the-art accuracy on the development set.
Neural saliency analysis with class activation mapping gives new insights on the patterns learnt by our models.
arXiv Detail & Related papers (2020-11-03T03:27:18Z)
- Device-Robust Acoustic Scene Classification Based on Two-Stage Categorization and Data Augmentation [63.98724740606457]
We present a joint effort of four groups, namely GT, USTC, Tencent, and UKE, to tackle Task 1 - Acoustic Scene Classification (ASC) in the DCASE 2020 Challenge.
Task 1a focuses on ASC of audio signals recorded with multiple (real and simulated) devices into ten different fine-grained classes.
Task 1b concerns the classification of data into three higher-level classes using low-complexity solutions.
arXiv Detail & Related papers (2020-07-16T15:07:14Z)
- Learning Class Regularized Features for Action Recognition [68.90994813947405]
We introduce a novel method named Class Regularization that performs class-based regularization of layer activations.
We show that using Class Regularization blocks in state-of-the-art CNN architectures for action recognition leads to systematic improvement gains of 1.8%, 1.2% and 1.4% on the Kinetics, UCF-101 and HMDB-51 datasets, respectively.
arXiv Detail & Related papers (2020-02-07T07:27:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.