An Efficient Deep Learning Model for Automatic Modulation Recognition
Based on Parameter Estimation and Transformation
- URL: http://arxiv.org/abs/2110.04980v1
- Date: Mon, 11 Oct 2021 03:28:28 GMT
- Authors: Fuxin Zhang, Chunbo Luo, Jialang Xu, and Yang Luo
- Abstract summary: This letter proposes an efficient DL-AMR model based on phase parameter estimation and transformation.
Our model is more competitive in training time and test time than the benchmark models with similar recognition accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic modulation recognition (AMR) is a promising technology for
intelligent communication receivers to detect signal modulation schemes.
Recently, the emerging deep learning (DL) research has facilitated
high-performance DL-AMR approaches. However, most DL-AMR models only focus on
recognition accuracy, leading to huge model sizes and high computational
complexity, while some lightweight and low-complexity models struggle to meet
the accuracy requirements. This letter proposes an efficient DL-AMR model based
on phase parameter estimation and transformation, with convolutional neural
network (CNN) and gated recurrent unit (GRU) as the feature extraction layers,
which achieves recognition accuracy on par with existing state-of-the-art
models while reducing the number of parameters by more than a third.
Meanwhile, our model is more competitive in training time and test time than
benchmark models of similar recognition accuracy. Moreover, we further propose
to compress our model by pruning, which maintains recognition accuracy above
90% while using fewer than 1/8 of the parameters of state-of-the-art models.
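The pruning step described in the abstract can be illustrated with a minimal magnitude-based sketch: zero out the smallest-magnitude fraction of weights. Note this is an assumption for illustration only; the letter does not specify which pruning criterion the authors use.

```python
# Sketch of magnitude-based weight pruning (illustrative, not the
# authors' exact procedure): zero out the smallest-magnitude
# `sparsity` fraction of a model's weights.

def prune_by_magnitude(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude
    `sparsity` fraction set to zero."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02, 0.3, -0.08]
pruned = prune_by_magnitude(weights, sparsity=0.5)
# pruned == [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.3, 0.0]
```

In practice the surviving weights would then be fine-tuned to recover any accuracy lost to pruning, which is how a model can stay above 90% accuracy at 1/8 of the original parameter count.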
Related papers
- Transfer Learning Based Hybrid Quantum Neural Network Model for Surface Anomaly Detection [0.4604003661048266]
This paper presents a quantum transfer learning (QTL) based approach to significantly reduce the number of parameters of the classical models.
We show that the total number of trainable parameters can be reduced by up to 90% relative to the initial model without any drop in performance.
arXiv Detail & Related papers (2024-08-30T19:40:52Z) - LORTSAR: Low-Rank Transformer for Skeleton-based Action Recognition [4.375744277719009]
LORTSAR is applied to two leading Transformer-based models, "Hyperformer" and "STEP-CATFormer".
Our method can reduce the number of model parameters substantially with negligible degradation or even performance increase in recognition accuracy.
This confirms that SVD combined with post-compression fine-tuning can boost model efficiency, paving the way for more sustainable, lightweight, and high-performance technologies in human action recognition.
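The SVD-based compression behind LORTSAR can be sketched in a few lines: replace a weight matrix with its top singular component. The pure-Python power iteration below is an illustrative stand-in for a library SVD, not the paper's implementation.

```python
# Sketch of rank-1 SVD compression: approximate a matrix A by
# sigma * u v^T, with the top singular pair found via power
# iteration on A^T A. (Real systems use a library SVD and keep
# more than one component.)
import math

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def rank1_approx(A, iters=100):
    """Return sigma, u, v such that A is approximated by sigma * u v^T."""
    At = transpose(A)
    v = [1.0] * len(A[0])
    for _ in range(iters):
        v = matvec(At, matvec(A, v))               # power iteration step
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    Av = matvec(A, v)
    sigma = math.sqrt(sum(x * x for x in Av))      # top singular value
    u = [x / sigma for x in Av]
    return sigma, u, v

A = [[2.0, 4.0], [1.0, 2.0]]                       # rank-1: exact recovery
sigma, u, v = rank1_approx(A)
approx = [[sigma * ui * vj for vj in v] for ui in u]
```

Storing `u`, `v`, and `sigma` instead of the full matrix is what reduces the parameter count; post-compression fine-tuning then recovers any accuracy lost to the truncation.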
arXiv Detail & Related papers (2024-07-19T20:19:41Z) - Neural Language Model Pruning for Automatic Speech Recognition [4.10609794373612]
We study model pruning methods applied to Transformer-based neural network language models for automatic speech recognition.
We explore three aspects of the pruning framework, namely the criterion, method, and scheduler, analyzing their contributions in terms of accuracy and inference speed.
arXiv Detail & Related papers (2023-10-05T10:01:32Z) - Heterogeneous Reservoir Computing Models for Persian Speech Recognition [0.0]
Reservoir computing (RC) models have proven inexpensive to train, have vastly fewer parameters, and are compatible with emergent hardware technologies.
We propose heterogeneous single and multi-layer ESNs to create non-linear transformations of the inputs that capture temporal context at different scales.
arXiv Detail & Related papers (2022-05-25T09:15:15Z) - Model-based Deep Learning Receiver Design for Rate-Splitting Multiple
Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL outperforms by a significant margin the SIC receiver with imperfect CSIR.
arXiv Detail & Related papers (2022-05-02T12:23:55Z) - Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction of two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models
for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long-short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
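The uniform-precision baseline this paper improves on can be sketched as symmetric per-tensor quantization: every weight shares one scale and one bit-width. The mixed-precision methods proposed in the paper instead vary the bit-width across parts of the LM; the fixed 8-bit version below is an illustrative assumption.

```python
# Sketch of uniform (single bit-width) symmetric weight quantization:
# map floats to signed n-bit integers with one shared scale, then
# restore them. This is the baseline that mixed-precision methods
# refine by choosing bit-widths per layer.

def quantize_uniform(weights, n_bits=8):
    """Symmetric uniform quantization to signed n_bits integers."""
    qmax = 2 ** (n_bits - 1) - 1                # e.g. 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.5, -1.27, 0.031, 0.9]
q, scale = quantize_uniform(weights)
restored = dequantize(q, scale)
# Reconstruction error is bounded by half a quantization step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Layers whose outputs are highly sensitive to this rounding error are exactly the ones a mixed-precision scheme would assign more bits.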
arXiv Detail & Related papers (2021-11-29T12:24:02Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked
Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z) - Balancing Accuracy and Latency in Multipath Neural Networks [0.09668407688201358]
We use a one-shot neural architecture search model to implicitly evaluate the performance of an intractable number of neural networks.
We show that our method can accurately model the relative performance between models with different latencies and predict the performance of unseen models with good precision across different datasets.
arXiv Detail & Related papers (2021-04-25T00:05:48Z) - Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine learning inspired models and physics-based models.
We are using such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.