NMformer: A Transformer for Noisy Modulation Classification in Wireless Communication
- URL: http://arxiv.org/abs/2411.02428v1
- Date: Wed, 30 Oct 2024 21:10:12 GMT
- Title: NMformer: A Transformer for Noisy Modulation Classification in Wireless Communication
- Authors: Atik Faysal, Mohammad Rostami, Reihaneh Gh. Roshan, Huaxia Wang, Nikhil Muralidhar,
- Abstract summary: We propose a vision transformer (ViT) based model named NMformer to classify channel modulation images with different noise levels in wireless communication.
Since ViTs are most effective on RGB images, we generated constellation diagrams from the modulated signals.
Our proposed model has two kinds of prediction setups: in-distribution and out-of-distribution.
- Score: 19.225546116534165
- License:
- Abstract: Modulation classification is a challenging task because the signals are intertwined with various ambient noises. Methods are needed that can classify such signals without extra steps like denoising, which add computational complexity. In this study, we propose a vision transformer (ViT) based model named NMformer to classify channel modulation images with different noise levels in wireless communication. Since ViTs are most effective on RGB images, we generated constellation diagrams from the modulated signals. The diagrams present the information in the signals in a 2-D representation. We trained NMformer on 106,800 modulation images to build the base classifier and used only 3,000 images to fine-tune it for specific tasks. Our proposed model has two kinds of prediction setups: in-distribution and out-of-distribution. The model achieves 4.67% higher accuracy than the base classifier when fine-tuned and tested on high signal-to-noise ratio (SNR) in-distribution classes. Moreover, the classifier fine-tuned on the low-SNR task also achieves higher accuracy than the base classifier. The fine-tuned classifier is likewise more effective than the base classifier on unseen data from out-of-distribution classes. Extensive experiments show the effectiveness of NMformer across a wide range of SNRs.
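The abstract's core preprocessing step, rendering noisy modulated signals as RGB constellation images a ViT can consume, can be sketched as follows. This is an illustrative reconstruction only: the paper does not specify its rendering pipeline, so the QPSK mapping, the [-2, 2] plotting window, and the histogram-based rasterization are all assumptions.

```python
import numpy as np

def qpsk_constellation_image(n_symbols=4096, snr_db=10, size=224, seed=0):
    """Render a noisy QPSK constellation as an RGB image array.

    Hypothetical sketch: bins received I/Q samples into a 2-D histogram
    and replicates it across three channels, since ViTs expect RGB input.
    """
    rng = np.random.default_rng(seed)
    # Random QPSK symbols on the unit circle (one of four phase corners).
    bits = rng.integers(0, 4, n_symbols)
    symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
    # Additive white Gaussian noise at the requested SNR (signal power = 1).
    noise_power = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (
        rng.standard_normal(n_symbols) + 1j * rng.standard_normal(n_symbols)
    )
    rx = symbols + noise
    # Bin I/Q samples into a size x size grid over [-2, 2] x [-2, 2].
    hist, _, _ = np.histogram2d(
        rx.real, rx.imag, bins=size, range=[[-2, 2], [-2, 2]]
    )
    hist = hist / hist.max()  # normalize intensities to [0, 1]
    # Stack the same plane three times to obtain an RGB-shaped array.
    return np.stack([hist] * 3, axis=-1)

img = qpsk_constellation_image()
print(img.shape)  # (224, 224, 3)
```

Lower `snr_db` values spread the four symbol clusters into overlapping clouds, which is exactly the variation in noise level the classifier is trained to handle.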
Related papers
- Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
arXiv Detail & Related papers (2024-02-07T14:59:25Z)
- Data Augmentation in Training CNNs: Injecting Noise to Images [0.0]
This study analyzes the effects of adding or applying different noise models of varying magnitudes to CNN architectures.
The basic results conform to most common notions in machine learning.
The new approaches will provide a better understanding of optimal learning procedures for image classification.
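Noise injection of the kind this study analyzes can be sketched minimally as additive Gaussian noise applied to a normalized image; the function name and the `sigma` magnitude parameter are hypothetical, standing in for the varying noise models and magnitudes the study compares.

```python
import numpy as np

def inject_gaussian_noise(image, sigma=0.1, rng=None):
    """Additive Gaussian noise augmentation (illustrative sketch).

    `image` is assumed normalized to [0, 1]; `sigma` controls the
    noise magnitude, one axis of variation in such augmentation studies.
    """
    rng = rng or np.random.default_rng()
    noisy = image + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixel values in the valid range

# Usage: augment a batch on the fly during training.
clean = np.zeros((8, 8))
augmented = inject_gaussian_noise(clean, sigma=0.1)
```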
arXiv Detail & Related papers (2023-07-12T17:29:42Z)
- Modulation Classification Through Deep Learning Using Resolution Transformed Spectrograms [3.9511559419116224]
We propose a scheme for Automatic Modulation Classification (AMC) using modern architectures of Convolutional Neural Networks (CNN).
We perform a resolution transformation of spectrograms that yields up to a 99.61% reduction in computational load and an 8x faster conversion from the received I/Q data.
The performance is evaluated on existing CNN models including SqueezeNet, Resnet-50, InceptionResnet-V2, Inception-V3, VGG-16 and Densenet-201.
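A resolution transformation that trades spectrogram detail for computational load can be sketched as block averaging; the summary does not state the paper's exact method, so the function and the `factor` parameter below are assumptions for illustration.

```python
import numpy as np

def downsample_spectrogram(spec, factor=4):
    """Reduce a spectrogram's resolution by averaging factor x factor tiles.

    Illustrative stand-in for a resolution transformation: shrinking the
    time-frequency grid cuts the pixels a downstream CNN must process
    by roughly factor**2.
    """
    h, w = spec.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = spec[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

spec = np.arange(64, dtype=float).reshape(8, 8)
print(downsample_spectrogram(spec, factor=4).shape)  # (2, 2)
```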
arXiv Detail & Related papers (2023-06-06T16:14:15Z)
- Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experiment results show the high generalization performance of our method on testing data that are composed of unseen contexts.
arXiv Detail & Related papers (2022-10-26T15:21:39Z)
- Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures with a limited number of samples that generalizes well to a wider audience.
We rely on a set of more elementary methods, such as random bounds on a signal, and aim to show the power these methods can carry in an online setting.
arXiv Detail & Related papers (2022-06-29T23:22:18Z)
- BatchFormerV2: Exploring Sample Relationships for Dense Representation Learning [88.82371069668147]
BatchFormerV2 is a more general batch Transformer module, which enables exploring sample relationships for dense representation learning.
BatchFormerV2 consistently improves current DETR-based detection methods by over 1.3%.
arXiv Detail & Related papers (2022-04-04T05:53:42Z)
- Treatment Learning Causal Transformer for Noisy Image Classification [62.639851972495094]
In this work, we incorporate this binary information of "existence of noise" as treatment into image classification tasks to improve prediction accuracy.
Motivated by causal variational inference, we propose a transformer-based architecture that uses a latent generative model to estimate robust feature representations for noisy image classification.
We also create new noisy image datasets incorporating a wide range of noise factors for performance benchmarking.
arXiv Detail & Related papers (2022-03-29T13:07:53Z)
- Ensemble Augmentation for Deep Neural Networks Using 1-D Time Series Vibration Data [0.0]
Time-series data are one of the fundamental types of raw data representation used in data-driven techniques.
Deep Neural Networks (DNNs) require huge labeled training samples to reach their optimum performance.
In this study, a data augmentation technique named ensemble augmentation is proposed to overcome this limitation.
arXiv Detail & Related papers (2021-08-06T20:04:29Z)
- Adaptive Denoising via GainTuning [17.72738152112575]
Deep convolutional neural networks (CNNs) for image denoising are usually trained on large datasets.
We propose "GainTuning", in which CNN models pre-trained on large datasets are adaptively and selectively adjusted for individual test images.
We show that GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, boosting their denoising performance on nearly every image in a held-out test set.
arXiv Detail & Related papers (2021-07-27T13:35:48Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained on multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.