High-Capacity Complex Convolutional Neural Networks For I/Q Modulation
Classification
- URL: http://arxiv.org/abs/2010.10717v1
- Date: Wed, 21 Oct 2020 02:26:24 GMT
- Title: High-Capacity Complex Convolutional Neural Networks For I/Q Modulation
Classification
- Authors: Jakob Krzyston, Rajib Bhattacharjea, Andrew Stark
- Abstract summary: We claim state-of-the-art performance by enabling high-capacity architectures containing residual and/or dense connections to compute complex-valued convolutions.
We show statistically significant improvements in all networks with complex convolutions for I/Q modulation classification.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: I/Q modulation classification is a unique pattern recognition problem as the
data for each class varies in quality, quantified by signal to noise ratio
(SNR), and has structure in the complex-plane. Previous work shows treating
these samples as complex-valued signals and computing complex-valued
convolutions within deep learning frameworks significantly increases the
performance over comparable shallow CNN architectures. In this work, we claim
state-of-the-art performance by enabling high-capacity architectures containing
residual and/or dense connections to compute complex-valued convolutions, with
peak classification accuracy of 92.4% on a benchmark classification problem,
the RadioML 2016.10a dataset. We show statistically significant improvements in
all networks with complex convolutions for I/Q modulation classification.
Complexity and inference speed analyses show models with complex convolutions
substantially outperform architectures with a comparable number of parameters
and comparable speed by over 10% in each case.
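The core operation behind these results, a complex-valued convolution over I/Q samples, can be assembled from four real-valued convolutions via the identity (I + jQ) * (w_r + j·w_i) = (I*w_r − Q*w_i) + j(I*w_i + Q*w_r). The sketch below is a hypothetical NumPy illustration of that decomposition, not the authors' implementation, and verifies it against NumPy's native complex convolution.

```python
import numpy as np

def complex_conv1d(i, q, w_real, w_imag):
    """Convolve the I/Q signal (i + j*q) with the complex kernel
    (w_real + j*w_imag), using only real-valued convolutions."""
    out_real = np.convolve(i, w_real, mode="valid") - np.convolve(q, w_imag, mode="valid")
    out_imag = np.convolve(i, w_imag, mode="valid") + np.convolve(q, w_real, mode="valid")
    return out_real, out_imag

# Sanity check against NumPy's native complex convolution.
rng = np.random.default_rng(0)
i, q = rng.standard_normal(16), rng.standard_normal(16)
wr, wi = rng.standard_normal(3), rng.standard_normal(3)
re, im = complex_conv1d(i, q, wr, wi)
ref = np.convolve(i + 1j * q, wr + 1j * wi, mode="valid")
assert np.allclose(re + 1j * im, ref)
```

Because convolution is linear, the decomposition is exact; in a deep learning framework the same four real convolutions can be expressed with standard convolution layers acting on the I and Q channels.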
Related papers
- On Characterizing the Evolution of Embedding Space of Neural Networks
using Algebraic Topology [9.537910170141467]
We study how the topology of the feature embedding space changes as it passes through the layers of a well-trained deep neural network (DNN), using Betti numbers.
We demonstrate that as depth increases, a topologically complicated dataset is transformed into a simple one, resulting in Betti numbers attaining their lowest possible value.
arXiv Detail & Related papers (2023-11-08T10:45:12Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Dataset Complexity Assessment Based on Cumulative Maximum Scaled Area Under Laplacian Spectrum [38.65823547986758]
It is useful to predict classification performance by effectively assessing the complexity of datasets before training DCNN models.
This paper proposes a novel method called cumulative maximum scaled Area Under Laplacian Spectrum (cmsAULS).
arXiv Detail & Related papers (2022-09-29T13:02:04Z)
- Adversarial Audio Synthesis with Complex-valued Polynomial Networks [60.231877895663956]
Time-frequency (TF) representations in audio have increasingly been modeled with real-valued networks.
We introduce complex-valued networks, called APOLLO, that integrate such complex-valued representations in a natural way.
APOLLO yields a 17.5% improvement over adversarial methods and an 8.2% improvement over state-of-the-art diffusion models on SC09 audio generation.
arXiv Detail & Related papers (2022-06-14T12:58:59Z)
- Animal Behavior Classification via Accelerometry Data and Recurrent Neural Networks [11.099308746733028]
We study the classification of animal behavior using accelerometry data through various recurrent neural network (RNN) models.
We evaluate the classification performance and complexity of the considered models.
We also include two state-of-the-art convolutional neural network (CNN)-based time-series classification models in the evaluations.
arXiv Detail & Related papers (2021-11-24T23:28:25Z)
- Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization [89.73665256847858]
We show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts.
Specifically, we demonstrate strong correlations between in-distribution and out-of-distribution performance on variants of CIFAR-10 & ImageNet.
We also investigate cases where the correlation is weaker, for instance some synthetic distribution shifts from CIFAR-10-C and the tissue classification dataset Camelyon17-WILDS.
arXiv Detail & Related papers (2021-07-09T19:48:23Z)
- Modulation Pattern Detection Using Complex Convolutions in Deep Learning [0.0]
Classifying modulation patterns is challenging because noise and channel impairments affect the signals.
We study the implementation and use of complex convolutions in a series of convolutional neural network architectures.
arXiv Detail & Related papers (2020-10-14T02:43:11Z)
- Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z)
- FPCR-Net: Feature Pyramidal Correlation and Residual Reconstruction for Optical Flow Estimation [72.41370576242116]
We propose a semi-supervised Feature Pyramidal Correlation and Residual Reconstruction Network (FPCR-Net) for optical flow estimation from frame pairs.
It consists of two main modules: pyramid correlation mapping and residual reconstruction.
Experiment results show that the proposed scheme achieves state-of-the-art performance, improving average end-point error (AEE) by 0.80, 1.15, and 0.10 over competing baseline methods.
arXiv Detail & Related papers (2020-01-17T07:13:51Z)
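The average end-point error (AEE) cited in the FPCR-Net entry above has a simple definition: the mean Euclidean distance between estimated and ground-truth flow vectors at every pixel. A minimal illustrative sketch, assuming flow fields are stored as (H, W, 2) arrays of (u, v) components:

```python
import numpy as np

def average_endpoint_error(flow_est, flow_gt):
    """Mean Euclidean distance between per-pixel (u, v) flow vectors.
    Both inputs have shape (H, W, 2)."""
    diff = flow_est - flow_gt
    return float(np.sqrt((diff ** 2).sum(axis=-1)).mean())

# A perfect estimate has AEE 0; a uniform unit horizontal offset has AEE 1.
gt = np.zeros((4, 4, 2))
off = gt.copy()
off[..., 0] = 1.0
assert average_endpoint_error(gt, gt) == 0.0
assert average_endpoint_error(off, gt) == 1.0
```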
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.