Using Early Exits for Fast Inference in Automatic Modulation
Classification
- URL: http://arxiv.org/abs/2308.11100v2
- Date: Thu, 9 Nov 2023 18:10:17 GMT
- Title: Using Early Exits for Fast Inference in Automatic Modulation
Classification
- Authors: Elsayed Mohammed, Omar Mashaal and Hatem Abou-Zeid
- Abstract summary: Automatic modulation classification (AMC) plays a critical role in wireless communications by autonomously classifying signals transmitted over the radio spectrum.
Deep learning (DL) techniques are increasingly being used for AMC due to their ability to extract complex wireless signal features.
This paper proposes the application of early exiting (EE) techniques for DL models used for AMC to accelerate inference.
- Score: 7.531126877550286
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic modulation classification (AMC) plays a critical role in wireless
communications by autonomously classifying signals transmitted over the radio
spectrum. Deep learning (DL) techniques are increasingly being used for AMC due
to their ability to extract complex wireless signal features. However, DL
models are computationally intensive and incur high inference latencies. This
paper proposes the application of early exiting (EE) techniques for DL models
used for AMC to accelerate inference. We present and analyze four early exiting
architectures and a customized multi-branch training algorithm for this
problem. Through extensive experimentation, we show that signals with moderate
to high signal-to-noise ratios (SNRs) are easier to classify, do not require
deep architectures, and can therefore leverage the proposed EE architectures.
Our experimental results demonstrate that EE techniques can significantly
reduce the inference time of deep neural networks without sacrificing
classification accuracy. We also thoroughly study the trade-off between
classification accuracy and inference time when using these architectures. To
the best of our knowledge, this work represents the first attempt to apply
early exiting methods to AMC, providing a foundation for future research in
this area.
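The inference-time behavior described in the abstract can be sketched as a confidence-thresholded multi-exit network: run the backbone block by block, and stop at the first exit head whose softmax confidence clears a threshold. The NumPy toy below uses random placeholder weights and illustrative names (`blocks`, `heads`, the threshold values); it shows only the early-exit control flow, not the paper's trained AMC architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy multi-exit network: each block is a random linear layer followed by
# tanh, and each block has its own classification head. All weights are
# random placeholders purely to demonstrate the control flow.
NUM_BLOCKS, HIDDEN, NUM_CLASSES = 4, 16, 8
blocks = [rng.standard_normal((HIDDEN, HIDDEN)) * 0.1 for _ in range(NUM_BLOCKS)]
heads = [rng.standard_normal((HIDDEN, NUM_CLASSES)) * 0.1 for _ in range(NUM_BLOCKS)]

def early_exit_infer(x, threshold=0.9):
    """Return (predicted class, index of the exit that fired).

    Blocks run sequentially; the first exit whose softmax confidence
    reaches `threshold` terminates inference. The final head always
    fires, so easy inputs (e.g. high-SNR signals) leave early while
    hard ones traverse the full depth.
    """
    h = x
    for i, (w, head) in enumerate(zip(blocks, heads)):
        h = np.tanh(h @ w)
        probs = softmax(h @ head)
        if probs.max() >= threshold or i == NUM_BLOCKS - 1:
            return int(probs.argmax()), i

pred, used_exit = early_exit_infer(rng.standard_normal(HIDDEN), threshold=0.5)
```

Lowering the threshold trades accuracy for speed: more inputs exit at shallow heads, which is exactly the accuracy/latency trade-off the paper studies across SNR levels.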
Related papers
- Large-Scale Model Enabled Semantic Communication Based on Robust Knowledge Distillation [53.16213723669751]
Large-scale models (LSMs) can be an effective framework for semantic representation and understanding. However, their direct deployment is often hindered by high computational complexity and resource requirements. This paper proposes a novel knowledge distillation based semantic communication framework.
arXiv Detail & Related papers (2025-08-04T07:47:18Z)
- A Lightweight Deep Learning Model for Automatic Modulation Classification using Dual Path Deep Residual Shrinkage Network [0.0]
Automatic Modulation Classification (AMC) plays a key role in enhancing spectrum efficiency. There is a pressing need for lightweight AMC models that balance low complexity with high classification accuracy. This paper proposes a low-complexity, lightweight deep learning (DL) AMC model optimized for resource-constrained edge devices.
arXiv Detail & Related papers (2025-07-07T00:37:54Z)
- Dynamic Acoustic Model Architecture Optimization in Training for ASR [51.21112094223223]
DMAO is an architecture optimization framework that employs a grow-and-drop strategy to automatically reallocate parameters during training. We evaluate DMAO through experiments with CTC on the LibriSpeech, TED-LIUM-v2 and Switchboard datasets.
arXiv Detail & Related papers (2025-06-16T07:47:34Z)
- Plug-and-Play AMC: Context Is King in Training-Free, Open-Set Modulation with LLMs [22.990537822143907]
Automatic Modulation Classification (AMC) is critical for efficient spectrum management and robust wireless communications. We propose an innovative framework that integrates traditional signal processing techniques with Large-Language Models. This work lays the foundation for scalable, interpretable, and versatile signal classification systems in next-generation wireless networks.
arXiv Detail & Related papers (2025-05-06T02:07:47Z)
- Towards Explainable Machine Learning: The Effectiveness of Reservoir Computing in Wireless Receive Processing [21.843365090029987]
We investigate the specific task of channel equalization by applying a popular learning-based technique known as Reservoir Computing (RC).
RC has shown superior performance compared to conventional methods and other learning-based approaches.
We also show, through simulations, the improvement in receive processing/symbol detection performance with this optimized approach.
arXiv Detail & Related papers (2023-10-08T00:44:35Z)
- Training dynamic models using early exits for automatic speech recognition on resource-constrained devices [15.879328412777008]
Early-exit architectures enable the development of dynamic models capable of adapting their size and architecture to varying levels of computational resources and ASR performance demands.
We show that early-exit models trained from scratch not only preserve performance when using fewer encoder layers but also exhibit enhanced task accuracy compared to single-exit or pre-trained models.
Results provide insights into the training dynamics of early-exit architectures for ASR models.
arXiv Detail & Related papers (2023-09-18T07:45:16Z)
- Denoising Diffusion Autoencoders are Unified Self-supervised Learners [58.194184241363175]
This paper shows that the networks in diffusion models, namely denoising diffusion autoencoders (DDAE), are unified self-supervised learners.
DDAE has already learned strongly linear-separable representations within its intermediate layers without auxiliary encoders.
Our diffusion-based approach achieves 95.9% and 50.0% linear evaluation accuracies on CIFAR-10 and Tiny-ImageNet.
arXiv Detail & Related papers (2023-03-17T04:20:47Z)
- Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL outperforms by a significant margin the SIC receiver with imperfect CSIR.
arXiv Detail & Related papers (2022-05-02T12:23:55Z)
- Learning OFDM Waveforms with PAPR and ACLR Constraints [15.423422040627331]
We propose a learning-based method to design OFDM-based waveforms that satisfy selected constraints while maximizing an achievable information rate.
We show that the end-to-end system is able to satisfy target PAPR and ACLR constraints and allows significant throughput gains.
arXiv Detail & Related papers (2021-10-21T08:58:59Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up a new possibility for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
- DAIS: Automatic Channel Pruning via Differentiable Annealing Indicator Search [55.164053971213576]
Convolutional neural networks have achieved great success in computer vision tasks despite large computation overhead.
Structured (channel) pruning is usually applied to reduce the model redundancy while preserving the network structure.
Existing structured pruning methods require hand-crafted rules which may lead to tremendous pruning space.
arXiv Detail & Related papers (2020-11-04T07:43:01Z)
- Frequency-based Automated Modulation Classification in the Presence of Adversaries [17.930854969511046]
We present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference.
In this work, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-11-02T17:12:22Z)
- Ensemble Wrapper Subsampling for Deep Modulation Classification [70.91089216571035]
Subsampling of received wireless signals is important for relaxing hardware requirements as well as the computational cost of signal processing algorithms.
We propose a subsampling technique to facilitate the use of deep learning for automatic modulation classification in wireless communication systems.
arXiv Detail & Related papers (2020-05-10T06:11:13Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.