An Analysis of Complex-Valued CNNs for RF Data-Driven Wireless Device
Classification
- URL: http://arxiv.org/abs/2202.09777v1
- Date: Sun, 20 Feb 2022 10:35:20 GMT
- Title: An Analysis of Complex-Valued CNNs for RF Data-Driven Wireless Device
Classification
- Authors: Jun Chen, Weng-Keen Wong, Bechir Hamdaoui, Abdurrahman Elmaghbub,
Kathiravetpillai Sivanesan, Richard Dorrance, Lily L. Yang
- Abstract summary: Recent deep neural network-based device classification studies show that complex-valued neural networks (CVNNs) yield higher classification accuracy than real-valued neural networks (RVNNs).
Our study provides a deeper understanding of this trend using real LoRa and WiFi RF datasets.
- Score: 12.810432378755904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent deep neural network-based device classification studies show that
complex-valued neural networks (CVNNs) yield higher classification accuracy
than real-valued neural networks (RVNNs). Although this improvement is
(intuitively) attributed to the complex nature of the input RF data (i.e., IQ
symbols), no prior work has taken a closer look into analyzing such a trend in
the context of wireless device identification. Our study provides a deeper
understanding of this trend using real LoRa and WiFi RF datasets. We perform a
deep dive into understanding the impact of (i) the input representation/type
and (ii) the architectural layer of the neural network. For the input
representation, we considered the IQ as well as the polar coordinates both
partially and fully. For the architectural layer, we considered a series of
ablation experiments that eliminate parts of the CVNN components. Our results
show that CVNNs consistently outperform their RVNN counterparts in the various
scenarios mentioned above, indicating that CVNNs are able to make better use of
the joint information provided via the in-phase (I) and quadrature (Q)
components of the signal.
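The two input representations compared in the abstract (IQ vs. polar coordinates) and the core operation of a complex-valued layer can be sketched as follows. This is an illustrative sketch, not code from the paper: the helper names (`iq_to_polar`, `complex_conv1d`) are hypothetical, and the complex convolution is realized with four real convolutions via the identity (a+ib)(c+id) = (ac - bd) + i(ad + bc), which is the standard way CVNN layers are built on top of real-valued primitives.

```python
import numpy as np

def iq_to_polar(iq):
    """Convert complex IQ samples to a (magnitude, phase) feature pair.

    Hypothetical helper illustrating the 'polar coordinates' input
    representation discussed in the abstract.
    """
    return np.stack([np.abs(iq), np.angle(iq)], axis=-1)

def complex_conv1d(x, w):
    """1-D complex convolution built from four real convolutions.

    For complex signal x = xr + i*xi and kernel w = wr + i*wi:
        (xr + i*xi) * (wr + i*wi) = (xr*wr - xi*wi) + i*(xr*wi + xi*wr)
    This mirrors how a complex-valued convolutional layer couples the
    in-phase (I) and quadrature (Q) components of the signal.
    """
    xr, xi = x.real, x.imag
    wr, wi = w.real, w.imag
    real = np.convolve(xr, wr, mode="valid") - np.convolve(xi, wi, mode="valid")
    imag = np.convolve(xr, wi, mode="valid") + np.convolve(xi, wr, mode="valid")
    return real + 1j * imag
```

Because the four-real-convolution decomposition is exact, `complex_conv1d` agrees with convolving the complex arrays directly; the point of writing it this way is that each real convolution can be implemented by an ordinary (real-valued) framework layer.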
Related papers
- Steinmetz Neural Networks for Complex-Valued Data [23.80312814400945]
We introduce a new approach to processing complex-valued data using DNNs consisting of parallel real-valued subnetworks with coupled outputs.
Our proposed class of architectures, referred to as Steinmetz Neural Networks, leverage multi-view learning to construct more interpretable representations within the latent space.
Our numerical experiments demonstrate the improved performance and robustness to additive noise afforded by these networks on benchmark datasets and synthetic examples.
arXiv Detail & Related papers (2024-09-16T08:26:06Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Heterogeneous Recurrent Spiking Neural Network for Spatio-Temporal Classification [13.521272923545409]
Spiking Neural Networks are often touted as brain-inspired learning models for the third wave of Artificial Intelligence.
This paper presents a heterogeneous spiking neural network (HRSNN) with unsupervised learning for video recognition tasks.
We show that HRSNN can achieve similar performance to state-of-the-art backpropagation-trained supervised SNNs, but with less computation.
arXiv Detail & Related papers (2022-09-22T16:34:01Z)
- Bayesian Convolutional Neural Networks for Limited Data Hyperspectral Remote Sensing Image Classification [14.464344312441582]
We use a special class of deep neural networks, namely Bayesian neural network, to classify HSRS images.
Bayesian neural networks provide an inherent tool for measuring uncertainty.
We show that a Bayesian network can outperform a similarly-constructed non-Bayesian convolutional neural network (CNN) and an off-the-shelf Random Forest (RF).
arXiv Detail & Related papers (2022-05-19T00:02:16Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- CondenseNeXt: An Ultra-Efficient Deep Neural Network for Embedded Systems [0.0]
A Convolutional Neural Network (CNN) is a class of Deep Neural Network (DNN) widely used in the analysis of visual images captured by an image sensor.
In this paper, we propose a neoteric variant of deep convolutional neural network architecture to ameliorate the performance of existing CNN architectures for real-time inference on embedded systems.
arXiv Detail & Related papers (2021-12-01T18:20:52Z)
- Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z)
- Complex-Valued vs. Real-Valued Neural Networks for Classification Perspectives: An Example on Non-Circular Data [10.06162739966462]
We show the potential interest of Complex-Valued Neural Network (CVNN) on classification tasks for complex-valued datasets.
CVNN accuracy presents a statistically higher mean and median and lower variance than Real-Valued Neural Network (RVNN) accuracy.
arXiv Detail & Related papers (2020-09-17T14:39:35Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.