Accumulated Polar Feature-based Deep Learning for Efficient and
Lightweight Automatic Modulation Classification with Channel Compensation
Mechanism
- URL: http://arxiv.org/abs/2001.01395v2
- Date: Fri, 7 Feb 2020 16:19:04 GMT
- Title: Accumulated Polar Feature-based Deep Learning for Efficient and
Lightweight Automatic Modulation Classification with Channel Compensation
Mechanism
- Authors: Chieh-Fang Teng, Ching-Yao Chou, Chun-Hsiang Chen, and An-Yeu Wu
- Abstract summary: In next-generation communications, massive machine-type communications (mMTC) induce a severe burden on base stations.
The deep learning (DL) technique stores intelligence in the network, resulting in superior performance over traditional approaches.
In this work, an accumulated polar feature-based DL with a channel compensation mechanism is proposed to cope with the aforementioned issues.
- Score: 6.915743897443897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In next-generation communications, massive machine-type communications (mMTC)
induce a severe burden on base stations. To address this issue, automatic
modulation classification (AMC) can help to reduce signaling overhead by
blindly recognizing the modulation types without handshaking. Thus, it plays an
important role in future intelligent modems. The emerging deep learning (DL)
technique stores intelligence in the network, resulting in superior performance
over traditional approaches. However, conventional DL-based approaches suffer
from heavy training overhead, memory overhead, and computational complexity,
which severely hinder practical applications for resource-limited scenarios,
such as Vehicle-to-Everything (V2X) applications. Furthermore, the overhead of
online retraining under time-varying fading channels has not been studied in
prior works. In this work, an accumulated polar feature-based DL with a
channel compensation mechanism is proposed to cope with the aforementioned
issues. Firstly, the simulation results show that learning features from the
polar domain with historical data can approach near-optimal
performance while reducing the training overhead by a factor of 99.8. Secondly, the
proposed neural network-based channel estimator (NN-CE) can learn the channel
response and compensate for the distorted channel, yielding a 13% improvement.
Moreover, in applying this lightweight NN-CE in a time-varying fading channel,
two efficient mechanisms of online retraining are proposed, which can reduce
transmission overhead and retraining overhead by 90% and 76%, respectively.
Finally, the performance of the proposed approach is evaluated and compared
with prior works on a public dataset to demonstrate its efficiency and
lightweight design.
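A minimal sketch of the polar-domain idea described above, not the authors' exact feature pipeline: it assumes the polar features are simply the amplitude and phase of the received complex baseband samples, accumulated over historical frames into a normalized 2D density grid; the bin counts, amplitude range, and normalization are illustrative assumptions.

```python
import numpy as np

def accumulated_polar_features(iq_frames, n_amp_bins=32, n_phase_bins=32, amp_max=2.0):
    """Map complex I/Q frames into the polar domain and accumulate them
    into a 2D amplitude-phase density grid (illustrative only)."""
    grid = np.zeros((n_amp_bins, n_phase_bins), dtype=np.float32)
    for frame in iq_frames:                        # each frame: 1-D complex ndarray
        amp = np.abs(frame)                        # polar radius
        phase = np.angle(frame)                    # polar angle in [-pi, pi]
        a_idx = np.clip((amp / amp_max * n_amp_bins).astype(int), 0, n_amp_bins - 1)
        p_idx = np.clip(((phase + np.pi) / (2 * np.pi) * n_phase_bins).astype(int),
                        0, n_phase_bins - 1)
        np.add.at(grid, (a_idx, p_idx), 1.0)       # accumulate historical symbols
    return grid / grid.sum()                       # normalized density fed to a classifier

# Example: ten frames of noisy QPSK-like symbols observed over time
rng = np.random.default_rng(0)
frames = [np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 128)))
          + 0.1 * (rng.standard_normal(128) + 1j * rng.standard_normal(128))
          for _ in range(10)]
features = accumulated_polar_features(frames)      # 32x32 input for the DL classifier
```

Accumulating symbols from multiple historical frames into one compact grid lets a small network observe the constellation geometry without ingesting long raw I/Q sequences, which is consistent with the reported reduction in training overhead.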
Related papers
- Liquid Neural Network-based Adaptive Learning vs. Incremental Learning for Link Load Prediction amid Concept Drift due to Network Failures [37.66676003679306]
Adapting to concept drift is a challenging task in machine learning.
In communication networks, such an issue emerges when performing traffic forecasting following a failure event.
We propose an approach that exploits adaptive learning algorithms, namely, liquid neural networks, which are capable of self-adaptation to abrupt changes in data patterns without requiring any retraining.
arXiv Detail & Related papers (2024-04-08T08:47:46Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs)
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
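The MEMTL entry above centers on a shared backbone feeding several prediction heads whose outputs are ensembled. Below is a minimal PyTorch-style sketch of that structure; the layer sizes, head count, and simple averaging ensemble are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MultiHeadEnsemble(nn.Module):
    """Shared backbone with several prediction heads; outputs are averaged."""
    def __init__(self, in_dim=64, hidden=128, out_dim=10, n_heads=4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, out_dim) for _ in range(n_heads))

    def forward(self, x):
        z = self.backbone(x)                            # features shared by all heads
        preds = torch.stack([head(z) for head in self.heads], dim=0)
        return preds.mean(dim=0)                        # simple ensemble of the heads

model = MultiHeadEnsemble()
out = model(torch.randn(8, 64))                         # batch of 8 dummy inputs -> (8, 10)
```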
- Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
arXiv Detail & Related papers (2023-06-14T01:24:42Z)
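The preceding entry's key point, that a CNN trained on small windows can be evaluated on arbitrarily large signals, holds for fully convolutional architectures with no fixed-size dense layer. A brief sketch under that assumption (the layer widths and 1-D signal setting are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

# Fully convolutional 1-D network: with no flatten/dense layer, the same weights
# accept any input length, so training on short windows and evaluating on long
# signals requires no architectural change.
cnn = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=1),            # per-sample prediction
)

small_windows = torch.randn(32, 1, 128)         # training batch of short windows
large_signal = torch.randn(1, 1, 100_000)       # inference on a much longer signal
assert cnn(small_windows).shape == (32, 1, 128)
assert cnn(large_signal).shape == (1, 1, 100_000)
```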
- Efficient Deep Unfolding for SISO-OFDM Channel Estimation [0.0]
It is possible to perform SISO-OFDM channel estimation using sparse recovery techniques.
In this paper, an unfolded neural network is used to lighten this constraint.
Its unsupervised online learning allows it to learn the system's imperfections in order to enhance the estimation performance.
arXiv Detail & Related papers (2022-10-11T11:29:54Z)
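The SISO-OFDM entry above uses deep unfolding to speed up sparse-recovery-based channel estimation. The sketch below unrolls plain ISTA with per-layer learnable step sizes and soft thresholds, a common template for this family of methods; the measurement model, layer count, and real-valued simplification are assumptions rather than the paper's network.

```python
import torch
import torch.nn as nn

class UnfoldedISTA(nn.Module):
    """ISTA unrolled into a fixed number of layers with learnable step sizes and
    thresholds, estimating a sparse channel h from observations y ~ A @ h."""
    def __init__(self, A, n_layers=8):
        super().__init__()
        self.register_buffer("A", A)                      # known measurement matrix
        self.step = nn.Parameter(torch.full((n_layers,), 0.1))
        self.thresh = nn.Parameter(torch.full((n_layers,), 0.01))

    def forward(self, y):
        h = torch.zeros(self.A.shape[1], device=y.device)
        for s, t in zip(self.step, self.thresh):
            r = h - s * (self.A.T @ (self.A @ h - y))     # gradient step on ||y - A h||^2
            h = torch.sign(r) * torch.clamp(r.abs() - t, min=0.0)  # soft-thresholding
        return h

A = torch.randn(64, 256) / 8.0                            # toy pilot/measurement matrix
net = UnfoldedISTA(A)
h_hat = net(torch.randn(64))                              # sparse channel estimate
```

Training only the few step-size and threshold parameters (here 16 scalars) on observed data is what keeps the online learning cost of unfolded estimators low compared with a generic deep network.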
- Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL outperforms by a significant margin the SIC receiver with imperfect CSIR.
arXiv Detail & Related papers (2022-05-02T12:23:55Z)
- CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization [61.71504948770445]
We propose a novel channel pruning method via Class-Aware Trace Ratio Optimization (CATRO) to reduce the computational burden and accelerate the model inference.
We show that CATRO achieves higher accuracy with similar cost or lower cost with similar accuracy than other state-of-the-art channel pruning algorithms.
Because of its class-aware property, CATRO is suitable to prune efficient networks adaptively for various classification subtasks, enhancing handy deployment and usage of deep networks in real-world applications.
arXiv Detail & Related papers (2021-10-21T06:26:31Z)
- Learning to Perform Downlink Channel Estimation in Massive MIMO Systems [72.76968022465469]
We study downlink (DL) channel estimation in a Massive multiple-input multiple-output (MIMO) system.
A common approach is to use the mean value as the estimate, motivated by channel hardening.
We propose two novel estimation methods.
arXiv Detail & Related papers (2021-09-06T13:42:32Z)
- Learning to Estimate RIS-Aided mmWave Channels [50.15279409856091]
We focus on uplink cascaded channel estimation, where known and fixed base station combining and RIS phase control matrices are considered for collecting observations.
To boost the estimation performance and reduce the training overhead, the inherent channel sparsity of mmWave channels is leveraged in the deep unfolding method.
It is verified that the proposed deep unfolding network architecture can outperform the least squares (LS) method with a relatively smaller training overhead and online computational complexity.
arXiv Detail & Related papers (2021-07-27T06:57:56Z)
- Boosting the Convergence of Reinforcement Learning-based Auto-pruning Using Historical Data [35.36703623383735]
Reinforcement learning (RL)-based auto-pruning has been proposed to automate the pruning process to avoid expensive hand-crafted work.
However, the RL-based pruner involves a time-consuming training process and the high expense of each sample further exacerbates this problem.
We propose an efficient auto-pruning framework which solves this problem by taking advantage of the historical data from the previous auto-pruning process.
arXiv Detail & Related papers (2021-07-16T07:17:26Z)
- Residual-Aided End-to-End Learning of Communication System without Known Channel [12.66262880667583]
A generative adversarial network (GAN) based training scheme has been recently proposed to imitate the real channel.
We propose a residual aided GAN (RA-GAN) based training scheme in this paper.
We show that the proposed RA-GAN based training scheme can achieve the near-optimal block error rate (BLER) performance.
arXiv Detail & Related papers (2021-02-22T05:47:49Z)
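The RA-GAN entry above builds on GAN-based channel imitation; a hedged reading of the "residual aided" idea is a generator that outputs the transmitted signal plus a learned perturbation, so only the channel's deviation from identity has to be modeled. A minimal sketch of such a generator (dimensions and layers are illustrative assumptions, not the paper's design):

```python
import torch
import torch.nn as nn

class ResidualChannelGenerator(nn.Module):
    """Generator imitating the channel as input plus a learned residual."""
    def __init__(self, sig_dim=32, noise_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(sig_dim + noise_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, sig_dim))

    def forward(self, x, z):
        # Skip connection: received ~ transmitted + residual(transmitted, noise)
        return x + self.net(torch.cat([x, z], dim=-1))

gen = ResidualChannelGenerator()
x = torch.randn(8, 32)              # batch of transmitted signal vectors
z = torch.randn(8, 16)              # latent noise driving channel randomness
y_fake = gen(x, z)                  # imitated received signals for end-to-end training
```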