Keep It Simple: CNN Model Complexity Studies for Interference
Classification Tasks
- URL: http://arxiv.org/abs/2303.03326v1
- Date: Mon, 6 Mar 2023 17:53:42 GMT
- Title: Keep It Simple: CNN Model Complexity Studies for Interference
Classification Tasks
- Authors: Taiwo Oyedare, Vijay K. Shah, Daniel J. Jakubisin, Jeffrey H. Reed
- Abstract summary: We study the trade-off amongst dataset size, CNN model complexity, and classification accuracy under various levels of classification difficulty.
Our study, based on three wireless datasets, shows that a simpler CNN model with fewer parameters can perform just as well as a more complex model.
- Score: 7.358050500046429
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growing number of devices using the wireless spectrum makes it important
to find ways to minimize interference and optimize the use of the spectrum.
Deep learning models, such as convolutional neural networks (CNNs), have been
widely utilized to identify, classify, or mitigate interference due to their
ability to learn from the data directly. However, there has been limited
research on the complexity of such deep learning models. The major focus of
deep learning-based wireless classification literature has been on improving
classification accuracy, often at the expense of model complexity. This may not
be practical for many wireless devices, such as internet of things (IoT)
devices, which usually have very limited computational resources and cannot
handle very complex models. Thus, it becomes important to account for model
complexity when designing deep learning-based models for interference
classification. To address this, we conduct an analysis of CNN-based wireless
classification that explores the trade-off amongst dataset size, CNN model
complexity, and classification accuracy under various levels of classification
difficulty: namely, interference classification, heterogeneous transmitter
classification, and homogeneous transmitter classification. Our study, based on
three wireless datasets, shows that a simpler CNN model with fewer parameters
can perform just as well as a more complex model, providing important insights
into the use of CNNs in computationally constrained applications.
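As a rough illustration of the complexity trade-off the abstract describes, the sketch below counts trainable parameters for two hypothetical CNNs using the standard formulas (k·k·c_in + 1)·c_out for a convolution and (n_in + 1)·n_out for a dense layer. The layer sizes are invented for illustration and are not the architectures evaluated in the paper.

```python
# Parameter-count comparison for two hypothetical CNNs
# (illustrative only; not the models studied in the paper).

def conv2d_params(c_in, c_out, k):
    """Weights plus biases for a k x k convolutional layer."""
    return (k * k * c_in + 1) * c_out

def dense_params(n_in, n_out):
    """Weights plus biases for a fully connected layer."""
    return (n_in + 1) * n_out

# "Simple" CNN: two small conv layers and a 10-class classifier head,
# assuming the features are pooled down to an 8 x 8 spatial map.
simple = (conv2d_params(1, 16, 3)
          + conv2d_params(16, 32, 3)
          + dense_params(32 * 8 * 8, 10))

# "Complex" CNN: deeper and wider, same input and head.
complex_ = (conv2d_params(1, 64, 3)
            + conv2d_params(64, 128, 3)
            + conv2d_params(128, 256, 3)
            + dense_params(256 * 8 * 8, 10))

print(f"simple:  {simple:,} parameters")   # 25,290
print(f"complex: {complex_:,} parameters") # 533,514
```

Even this toy example shows an order-of-magnitude gap in parameter count; the paper's finding is that, for the easier classification tasks, the smaller model can match the accuracy of the larger one.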
Related papers
- Multi-Scale Convolutional LSTM with Transfer Learning for Anomaly Detection in Cellular Networks [1.1432909951914676]
This study introduces a novel approach Multi-Scale Convolutional LSTM with Transfer Learning (TL) to detect anomalies in cellular networks.
The model is initially trained from scratch using a publicly available dataset to learn typical network behavior.
We compare the performance of the model trained from scratch with that of the fine-tuned model using TL.
arXiv Detail & Related papers (2024-09-30T17:51:54Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Go Beyond Multiple Instance Neural Networks: Deep-learning Models based on Local Pattern Aggregation [0.0]
Convolutional neural networks (CNNs) have brought breakthroughs in processing clinical electrocardiograms (ECGs) and speaker-independent speech.
In this paper, we propose local pattern aggregation-based deep-learning models to effectively deal with both problems.
The novel network structure, called LPANet, has cropping and aggregation operations embedded into it.
arXiv Detail & Related papers (2022-05-28T13:18:18Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
But sometimes it can be very difficult to debug if the deep learning model doesn't work.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Animal Behavior Classification via Accelerometry Data and Recurrent Neural Networks [11.099308746733028]
We study the classification of animal behavior using accelerometry data through various recurrent neural network (RNN) models.
We evaluate the classification performance and complexity of the considered models.
We also include two state-of-the-art convolutional neural network (CNN)-based time-series classification models in the evaluations.
arXiv Detail & Related papers (2021-11-24T23:28:25Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Frequency-based Automated Modulation Classification in the Presence of Adversaries [17.930854969511046]
We present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference.
In this work, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-11-02T17:12:22Z)
- The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z)
- Machine Learning Based Mobile Network Throughput Classification [5.256160002566292]
This paper proposes a data-driven model for identifying 4G cells that have fundamental network throughput problems.
Model parameters are learnt using a small number of expert-labeled data.
Experiments show that the proposed model outperforms a simple classifier in identifying cells with network throughput problems.
arXiv Detail & Related papers (2020-04-27T20:08:06Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.