Large Scale Radio Frequency Signal Classification
- URL: http://arxiv.org/abs/2207.09918v1
- Date: Wed, 20 Jul 2022 14:03:57 GMT
- Title: Large Scale Radio Frequency Signal Classification
- Authors: Luke Boegner, Manbir Gulati, Garrett Vanhoy, Phillip Vallance, Bradley
Comar, Silvija Kokalj-Filipovic, Craig Lennon, Robert D. Miller
- Abstract summary: We introduce the Sig53 dataset consisting of 5 million synthetically-generated samples from 53 different signal classes.
We also introduce TorchSig, a signals processing machine learning toolkit that can be used to generate this dataset.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing datasets used to train deep learning models for narrowband radio
frequency (RF) signal classification lack enough diversity in signal types and
channel impairments to sufficiently assess model performance in the real world.
We introduce the Sig53 dataset consisting of 5 million synthetically-generated
samples from 53 different signal classes and expertly chosen impairments. We
also introduce TorchSig, a signals processing machine learning toolkit that can
be used to generate this dataset. TorchSig incorporates data handling
principles that are common to the vision domain, and it is meant to serve as an
open-source foundation for future signals machine learning research. Initial
experiments using the Sig53 dataset are conducted using state-of-the-art (SoTA)
convolutional neural networks (ConvNets) and Transformers. These experiments
reveal Transformers outperform ConvNets without the need for additional
regularization or a ConvNet teacher, which is contrary to results from the
vision domain. Additional experiments demonstrate that TorchSig's
domain-specific data augmentations facilitate model training, which ultimately
benefits model performance. Finally, TorchSig supports on-the-fly synthetic
data creation at training time, thus enabling massive scale training sessions
with virtually unlimited datasets.
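The on-the-fly synthetic data creation described above can be illustrated with a minimal NumPy sketch. Note this is not TorchSig's actual API; the function `synthesize_qpsk` and its parameters are hypothetical, showing only the general idea of generating an impaired narrowband IQ example at training time.

```python
# Illustrative sketch (not TorchSig's actual API): generate a synthetic
# narrowband IQ example "on the fly", as a training pipeline might.
import numpy as np

def synthesize_qpsk(num_symbols: int, sps: int, snr_db: float,
                    rng: np.random.Generator) -> np.ndarray:
    """Return a complex64 IQ vector: QPSK symbols with rectangular pulse
    shaping, a random carrier phase offset, and AWGN at the target SNR."""
    # Random QPSK constellation points (unit energy).
    bits = rng.integers(0, 4, size=num_symbols)
    symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))
    # Rectangular pulse shaping: hold each symbol for sps samples.
    iq = np.repeat(symbols, sps)
    # Channel impairments: random carrier phase, then additive noise.
    iq = iq * np.exp(1j * rng.uniform(0, 2 * np.pi))
    noise_power = 10 ** (-snr_db / 10)
    noise = rng.normal(size=iq.size) + 1j * rng.normal(size=iq.size)
    iq = iq + np.sqrt(noise_power / 2) * noise
    return iq.astype(np.complex64)

rng = np.random.default_rng(0)
x = synthesize_qpsk(num_symbols=512, sps=8, snr_db=10.0, rng=rng)
print(x.shape)  # (4096,)
```

Because every example is drawn fresh from a parameterized generator rather than read from disk, the effective dataset size is bounded only by training time, which is the property the abstract highlights.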
Related papers
- Multi-Scale Convolutional LSTM with Transfer Learning for Anomaly Detection in Cellular Networks [1.1432909951914676]
This study introduces a novel approach Multi-Scale Convolutional LSTM with Transfer Learning (TL) to detect anomalies in cellular networks.
The model is initially trained from scratch using a publicly available dataset to learn typical network behavior.
We compare the performance of the model trained from scratch with that of the fine-tuned model using TL.
arXiv Detail & Related papers (2024-09-30T17:51:54Z)
- Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amount of unlabelled data.
In this paper, we re-visit transformers pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- EMGTFNet: Fuzzy Vision Transformer to decode Upperlimb sEMG signals for Hand Gestures Recognition [0.1611401281366893]
We propose a Vision Transformer (ViT) based architecture with a Fuzzy Neural Block (FNB) called EMGTFNet to perform Hand Gesture Recognition.
The accuracy of the proposed model is tested using the publicly available NinaPro database consisting of 49 different hand gestures.
arXiv Detail & Related papers (2023-09-23T18:55:26Z)
- Convolutional Monge Mapping Normalization for learning on sleep data [63.22081662149488]
We propose a new method called Convolutional Monge Mapping Normalization (CMMN).
CMMN filters the signals to adapt their power spectral density (PSD) to a Wasserstein barycenter estimated on training data.
Numerical experiments on sleep EEG data show that CMMN leads to significant and consistent performance gains independent from the neural network architecture.
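The core of the PSD-adaptation step described above can be sketched in plain NumPy: filter a signal with the frequency-domain gain sqrt(target_psd / source_psd) so its spectrum moves toward a common target. This is an illustrative simplification, not the authors' implementation; `cmmn_map` and the target PSD below are hypothetical.

```python
# Illustrative sketch of PSD-adaptation filtering in the spirit of CMMN:
# reshape a signal's power spectral density toward a target (barycenter)
# PSD by applying a frequency-domain gain. Not the authors' implementation.
import numpy as np

def cmmn_map(x: np.ndarray, target_psd: np.ndarray) -> np.ndarray:
    """Filter real signal x so its per-bin PSD matches target_psd.

    target_psd must be a symmetric spectrum of the same length as x.
    """
    X = np.fft.fft(x)
    source_psd = np.abs(X) ** 2 / x.size
    eps = 1e-12  # avoid division by zero in empty bins
    gain = np.sqrt(target_psd / (source_psd + eps))
    # Symmetric real gain preserves conjugate symmetry, so ifft is real.
    return np.fft.ifft(X * gain).real

rng = np.random.default_rng(1)
x = rng.normal(size=1024)                # white-noise "source" signal
f = np.fft.fftfreq(1024)
target = 1.0 / (1.0 + (f / 0.1) ** 2)    # low-pass-shaped target PSD
y = cmmn_map(x, target)
```

In the actual method the target is a Wasserstein barycenter of the training subjects' PSDs and the mapping is realized as a convolutional filter, but the normalization intent is the same: every input is spectrally aligned before it reaches the network.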
arXiv Detail & Related papers (2023-05-30T08:24:01Z)
- Convolutional Neural Networks for the classification of glitches in gravitational-wave data streams [52.77024349608834]
We classify transient noise signals (i.e., glitches) and gravitational waves in data from the Advanced LIGO detectors.
We use models with a supervised learning approach, trained from scratch using the Gravity Spy dataset.
We also explore a self-supervised approach, pre-training models with automatically generated pseudo-labels.
arXiv Detail & Related papers (2023-03-24T11:12:37Z)
- Large Scale Radio Frequency Wideband Signal Detection & Recognition [0.0]
We introduce the WidebandSig53 dataset which consists of 550 thousand synthetically-generated samples from 53 different signal classes.
We extend the TorchSig signal processing machine learning toolkit for open-source and customizable generation, augmentation, and processing of the WBSig53 dataset.
arXiv Detail & Related papers (2022-11-04T13:24:53Z)
- Self-Supervised RF Signal Representation Learning for NextG Signal Classification with Deep Learning [5.624291722263331]
Self-supervised learning enables the learning of useful representations from Radio Frequency (RF) signals themselves.
We show that the sample efficiency (the number of labeled samples required to achieve a certain accuracy performance) of AMR can be significantly increased by learning signal representations with self-supervised learning.
arXiv Detail & Related papers (2022-07-07T02:07:03Z)
- Self-Supervised Pre-Training for Transformer-Based Person Re-Identification [54.55281692768765]
Transformer-based supervised pre-training achieves great performance in person re-identification (ReID).
Due to the domain gap between ImageNet and ReID datasets, it usually needs a larger pre-training dataset to boost the performance.
This work aims to mitigate the gap between the pre-training and ReID datasets from the perspective of data and model structure.
arXiv Detail & Related papers (2021-11-23T18:59:08Z)
- Vision Transformers are Robust Learners [65.91359312429147]
We study the robustness of the Vision Transformer (ViT) against common corruptions and perturbations, distribution shifts, and natural adversarial examples.
We present analyses that provide both quantitative and qualitative indications to explain why ViTs are indeed more robust learners.
arXiv Detail & Related papers (2021-05-17T02:39:22Z)
- Surgical Mask Detection with Convolutional Neural Networks and Data Augmentations on Spectrograms [8.747840760772268]
We show the impact of data augmentation on the binary classification task of surgical mask detection in samples of human voice.
Results show that most of the baselines given by ComParE are outperformed.
arXiv Detail & Related papers (2020-08-11T09:02:47Z)
- A Big Data Enabled Channel Model for 5G Wireless Communication Systems [71.93009775340234]
This paper investigates various applications of big data analytics, especially machine learning algorithms in wireless communications and channel modeling.
We propose a big data and machine learning enabled wireless channel model framework.
The proposed channel model is based on artificial neural networks (ANNs), including a feed-forward neural network (FNN) and a radial basis function neural network (RBF-NN).
arXiv Detail & Related papers (2020-02-28T05:56:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.