Deep Feature Learning for Wireless Spectrum Data
- URL: http://arxiv.org/abs/2308.03530v1
- Date: Mon, 7 Aug 2023 12:27:19 GMT
- Title: Deep Feature Learning for Wireless Spectrum Data
- Authors: Ljupcho Milosheski, Gregor Cerar, Blaž Bertalanič, Carolina
Fortuna and Mihael Mohorčič
- Abstract summary: We propose an approach to learning feature representations for wireless transmission clustering in a completely unsupervised manner.
We show that the automatic representation learning is able to extract fine-grained clusters containing the shapes of the wireless transmission bursts.
- Score: 0.5809784853115825
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, the traditional feature engineering process for training
machine learning models has increasingly been automated by the feature extraction
layers integrated into deep learning architectures. In wireless networks, many
studies have been conducted on the automatic learning of feature representations
for domain-related challenges. However, most of the existing works assume some
supervision along the learning process by using labels to optimize the model.
In this paper, we investigate an approach to learning feature representations
for wireless transmission clustering in a completely unsupervised manner, i.e.
requiring no labels in the process. We propose a model based on convolutional
neural networks that automatically learns a reduced-dimensionality
representation of the input data with 99.3% fewer components compared to a
baseline principal component analysis (PCA). We show that the automatic
representation learning is able to extract fine-grained clusters containing the
shapes of the wireless transmission bursts, while the baseline enables only
general separability of the data based on the background noise.
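The PCA baseline against which the learned representation is compared can be sketched in a few lines; the paper's CNN encoder replaces this linear projection with learned convolutional features. A minimal NumPy-only illustration on synthetic data (all dimensions and the k-means cluster count are hypothetical, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for spectrogram segments: 200 samples, 1024 features each
# (dimensions are illustrative, not those of the paper's dataset).
X = rng.normal(size=(200, 1024))

def pca_reduce(X, k):
    # PCA via SVD: project centered data onto the top-k principal components.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # (n_samples, k) reduced representation

Z = pca_reduce(X, k=8)

def kmeans(Z, n_clusters, n_iter=50, seed=0):
    # Plain Lloyd's algorithm on the reduced representation.
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), n_clusters, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = Z[labels == c].mean(axis=0)
    return labels

labels = kmeans(Z, n_clusters=4)
print(Z.shape, np.unique(labels))
```

The paper's comparison is essentially about how many such components are needed: the learned CNN representation achieves separable clusters with far fewer components than this linear baseline.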
Related papers
- Iterative self-transfer learning: A general methodology for response
time-history prediction based on small dataset [0.0]
An iterative self-transfer learning method for training neural networks on small datasets is proposed in this study.
The results show that the proposed method can improve model performance by nearly an order of magnitude on small datasets.
arXiv Detail & Related papers (2023-06-14T18:48:04Z) - Keep It Simple: CNN Model Complexity Studies for Interference
Classification Tasks [7.358050500046429]
We study the trade-off amongst dataset size, CNN model complexity, and classification accuracy under various levels of classification difficulty.
Our study, based on three wireless datasets, shows that a simpler CNN model with fewer parameters can perform just as well as a more complex model.
arXiv Detail & Related papers (2023-03-06T17:53:42Z) - Machine Learning for QoS Prediction in Vehicular Communication:
Challenges and Solution Approaches [46.52224306624461]
We consider maximum throughput prediction to enhance, for example, streaming or high-definition mapping applications.
We highlight how confidence can be built on machine learning technologies by better understanding the underlying characteristics of the collected data.
We use explainable AI to show that machine learning can learn underlying principles of wireless networks without being explicitly programmed.
arXiv Detail & Related papers (2023-02-23T12:29:20Z) - Representation Learning for Appliance Recognition: A Comparison to
Classical Machine Learning [13.063093054280946]
Non-intrusive load monitoring (NILM) aims to retrieve energy consumption and appliance state information from aggregated consumption measurements.
We show how the NILM processing-chain can be improved, reduced in complexity and alternatively designed with recent deep learning algorithms.
We evaluate all approaches on two large-scale energy consumption datasets with more than 50,000 events of 44 appliances.
arXiv Detail & Related papers (2022-08-26T15:09:20Z) - CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z) - Towards Open-World Feature Extrapolation: An Inductive Graph Learning
Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
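The two-module framework described above can be illustrated with a toy message-passing step over a feature-data bipartite graph. Everything below (the dimensions, the binary features, the mean aggregator) is a simplified assumption for illustration, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed data: 5 samples over 3 known features (binary for simplicity).
X = rng.integers(0, 2, size=(5, 3)).astype(float)
d = 4  # embedding dimension (hypothetical)

feat_emb = rng.normal(size=(3, d))  # embeddings of the known features

# One message-passing round on the feature-data bipartite graph:
# each data node aggregates (here: averages) the embeddings of the
# features it exhibits.
deg = X.sum(axis=1, keepdims=True).clip(min=1)
data_emb = (X @ feat_emb) / deg

# A new, unseen feature observed on some samples: extrapolate its
# embedding as the mean of the data-node embeddings it connects to.
new_col = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
new_feat_emb = (new_col @ data_emb) / max(new_col.sum(), 1)
```

The lower (backbone) model would then consume such embeddings to produce predictions, which is what lets the framework handle features never seen during training.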
arXiv Detail & Related papers (2021-10-09T09:02:45Z) - Self-supervised Audiovisual Representation Learning for Remote Sensing Data [96.23611272637943]
We propose a self-supervised approach for pre-training deep neural networks in remote sensing.
By exploiting the correspondence between geo-tagged audio recordings and remote sensing imagery, this is done in a completely label-free manner.
We show that our approach outperforms existing pre-training strategies for remote sensing imagery.
arXiv Detail & Related papers (2021-08-02T07:50:50Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z) - Fast accuracy estimation of deep learning based multi-class musical
source separation [79.10962538141445]
We propose a method to evaluate the separability of instruments in any dataset without training and tuning a neural network.
Based on the oracle principle with an ideal ratio mask, our approach is an excellent proxy to estimate the separation performances of state-of-the-art deep learning approaches.
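The oracle principle with an ideal ratio mask can be sketched directly: given ground-truth source magnitudes, the mask is the per-bin ratio of the target to the mixture, and applying it to the mixture yields an upper-bound estimate. A NumPy sketch with synthetic magnitude spectrograms (the shapes and the SNR-style proxy are illustrative, not the paper's exact metric):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic magnitude spectrograms for two sources (freq x time bins;
# shapes and values are illustrative).
S1 = rng.random((64, 100))
S2 = rng.random((64, 100))
mix = S1 + S2

# Ideal ratio mask for source 1: the oracle mask built from ground truth.
eps = 1e-8
irm = S1 / (S1 + S2 + eps)

# Oracle estimate of source 1: apply the mask to the mixture magnitude.
est = irm * mix

# A simple SNR-style proxy for separability.
snr = 10 * np.log10((S1**2).sum() / (((S1 - est) ** 2).sum() + eps))
```

Because the mask is computed from ground truth, this requires no trained network, which is what makes it a cheap proxy for the achievable separation quality on a dataset.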
arXiv Detail & Related papers (2020-10-19T13:05:08Z) - Federated Self-Supervised Learning of Multi-Sensor Representations for
Embedded Intelligence [8.110949636804772]
Smartphones, wearables, and Internet of Things (IoT) devices produce a wealth of data that cannot be accumulated in a centralized repository for learning supervised models.
We propose a self-supervised approach termed scalogram-signal correspondence learning, based on the wavelet transform, to learn useful representations from unlabeled sensor inputs.
We extensively assess the quality of learned features with our multi-view strategy on diverse public datasets, achieving strong performance in all domains.
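The scalogram construction underlying this approach can be sketched with a simple wavelet filter bank: convolve the raw signal with wavelets at several scales and keep the response magnitudes. The Morlet-like wavelet, scales, and signal below are illustrative assumptions, not the paper's exact transform:

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy 1-D sensor signal (e.g. an accelerometer trace; synthetic here).
t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 30 * t) + 0.3 * rng.normal(size=t.size)

def morlet(scale, width=6.0, n=512):
    # Real Morlet-like wavelet sampled at n points for a given scale.
    x = (np.arange(n) - n // 2) / scale
    return np.cos(width * x) * np.exp(-x**2 / 2)

# Scalogram: wavelet response magnitude at several scales.
scales = [4, 8, 16, 32]
scalogram = np.stack([
    np.abs(np.convolve(signal, morlet(s), mode="same")) for s in scales
])
print(scalogram.shape)  # (n_scales, n_samples)
```

The self-supervised objective then asks the model to tell whether a scalogram and a raw signal segment correspond, which requires no labels.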
arXiv Detail & Related papers (2020-07-25T21:59:17Z) - Ensemble Wrapper Subsampling for Deep Modulation Classification [70.91089216571035]
Subsampling of received wireless signals is important for relaxing hardware requirements as well as the computational cost of signal processing algorithms.
We propose a subsampling technique to facilitate the use of deep learning for automatic modulation classification in wireless communication systems.
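As a point of reference for what subsampling means here, the simplest baseline is fixed-stride decimation of the received IQ samples; the paper's ensemble wrapper method instead learns which samples to keep. A toy NumPy sketch (signal length and stride are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic complex baseband signal standing in for received IQ samples.
iq = rng.normal(size=1024) + 1j * rng.normal(size=1024)

# Uniform subsampling: keep every k-th sample to cut the sampling-rate
# and compute requirements before the classifier sees the signal.
k = 4
subsampled = iq[::k]  # (256,) samples
```

Any such reduction trades classification accuracy against hardware and compute cost; the learned subsampling aims to make that trade-off far more favorable than a fixed stride.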
arXiv Detail & Related papers (2020-05-10T06:11:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.