Complex-Valued vs. Real-Valued Neural Networks for Classification
Perspectives: An Example on Non-Circular Data
- URL: http://arxiv.org/abs/2009.08340v2
- Date: Tue, 13 Apr 2021 12:20:21 GMT
- Title: Complex-Valued vs. Real-Valued Neural Networks for Classification
Perspectives: An Example on Non-Circular Data
- Authors: Jose Agustin Barrachina and Chengfang Ren and Christele Morisseau and
Gilles Vieillard and Jean-Philippe Ovarlez
- Abstract summary: We show the potential interest of Complex-Valued Neural Networks (CVNN) for classification tasks on complex-valued datasets.
CVNN accuracy presents a statistically higher mean and median and lower variance than the Real-Valued Neural Network (RVNN).
- Score: 10.06162739966462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The contributions of this paper are twofold. First, we show the
potential interest of Complex-Valued Neural Networks (CVNN) for classification
tasks on complex-valued datasets. To highlight this assertion, we investigate
an example
of complex-valued data in which the real and imaginary parts are statistically
dependent through the property of non-circularity. In this context, the
performance of fully connected feed-forward CVNNs is compared against a
real-valued equivalent model. The results show that CVNN performs better for a
wide variety of architectures and data structures. CVNN accuracy presents a
statistically higher mean and median and lower variance than Real-Valued Neural
Network (RVNN). Furthermore, if no regularization technique is used, CVNN
exhibits lower overfitting. The second contribution is the release of a Python
library (Barrachina 2019) using TensorFlow as back-end that enables the
implementation and training of CVNNs, in the hope of motivating further
research in this area.
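The key property driving the comparison, non-circularity, is easy to reproduce: a complex random variable z is circular when its pseudo-variance E[z^2] vanishes, and correlating the real and imaginary parts breaks that. Below is a minimal NumPy sketch of such a non-circular Gaussian dataset; the correlation value rho and all names are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.8  # rho couples real and imaginary parts (illustrative value)

# Draw (x, y) jointly Gaussian with correlation rho, then form z = x + iy.
cov = np.array([[1.0, rho],
                [rho, 1.0]])
xy = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
z = xy[:, 0] + 1j * xy[:, 1]

print(np.mean(np.abs(z) ** 2))  # variance E[|z|^2]: ~2.0
print(np.mean(z ** 2))          # pseudo-variance E[z^2]: ~1.6j, nonzero => non-circular
```

An RVNN baseline would consume the same samples as stacked real features (x, y), while a CVNN consumes z directly; the statistical dependence between the two parts is exactly what the complex model can exploit.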
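The released library itself is not reproduced here. As a hedged sketch of the kind of fully connected complex layer the paper benchmarks, the snippet below relies only on TensorFlow's standard complex64 support: each complex weight is stored as two real tensors, and a split (component-wise) ReLU serves as the activation. The class name and every design detail are our illustrative assumptions, not the library's API.

```python
import tensorflow as tf

class ComplexDense(tf.keras.layers.Layer):
    """Minimal fully connected complex layer (illustrative sketch only)."""

    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        n_in = int(input_shape[-1])
        init = tf.random_normal_initializer(stddev=n_in ** -0.5)
        # One complex weight = two real trainable parameters (real + imaginary part).
        self.w_re = self.add_weight(shape=(n_in, self.units), initializer=init)
        self.w_im = self.add_weight(shape=(n_in, self.units), initializer=init)

    def call(self, z):  # z: complex64 tensor of shape (batch, n_in)
        out = tf.matmul(z, tf.complex(self.w_re, self.w_im))
        # Split ReLU: apply ReLU to real and imaginary parts separately.
        return tf.complex(tf.nn.relu(tf.math.real(out)),
                          tf.nn.relu(tf.math.imag(out)))
```

As a smoke test, ComplexDense(16) applied to tf.complex(tf.random.normal((4, 8)), tf.random.normal((4, 8))) returns a complex64 tensor of shape (4, 16). Since each complex weight carries two real degrees of freedom, a fair real-valued equivalent model is typically sized to match the real parameter budget rather than the neuron count.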
Related papers
- Statistical Properties of Deep Neural Networks with Dependent Data [0.0]
This paper establishes statistical properties of deep neural network (DNN) estimators under dependent data.
The framework provided also offers potential for research into other DNN architectures and time-series applications.
arXiv Detail & Related papers (2024-10-14T21:46:57Z)
- Complex Network for Complex Problems: A comparative study of CNN and
Complex-valued CNN [0.0]
Complex-valued convolutional neural networks (CV-CNN) can preserve the algebraic structure of complex-valued input data.
Because each complex weight stores a real and an imaginary part, CV-CNNs have double the number of real trainable parameters of an architecture-matched real-valued CNN.
This paper presents a comparative study of CNN, CNNx2 (a CNN with double the number of trainable parameters of the CNN), and CV-CNN.
arXiv Detail & Related papers (2023-02-09T11:51:46Z)
- Impact of PolSAR pre-processing and balancing methods on complex-valued
neural networks segmentation tasks [9.6556424340252]
We investigate the semantic segmentation of Polarimetric Synthetic Aperture Radar (PolSAR) data using Complex-Valued Neural Networks (CVNN).
We exhaustively compare both pre-processing and balancing methods for six model architectures: three complex-valued models and their respective real-valued equivalents.
We propose two methods for reducing the observed performance gap and report the results for all input representations, models, and dataset pre-processing methods.
arXiv Detail & Related papers (2022-10-28T12:49:43Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
Existing Binary Neural Networks (BNNs) neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution
Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset, among others.
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
- Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z)
- Integrating Random Effects in Deep Neural Networks [4.860671253873579]
We propose to use the mixed models framework to handle correlated data in deep neural networks.
By treating the effects underlying the correlation structure as random effects, mixed models are able to avoid overfitted parameter estimates.
Our approach, which we call LMMNN, is demonstrated to improve performance over natural competitors in various correlation scenarios.
arXiv Detail & Related papers (2022-06-07T14:02:24Z)
- coVariance Neural Networks [119.45320143101381]
Graph neural networks (GNNs) are an effective framework that exploits inter-relationships within graph-structured data for learning.
We propose a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs.
We show that VNN performance is indeed more stable than PCA-based statistical approaches.
arXiv Detail & Related papers (2022-05-31T15:04:43Z)
- An Analysis of Complex-Valued CNNs for RF Data-Driven Wireless Device
Classification [12.810432378755904]
Recent deep neural network-based device classification studies show that complex-valued neural networks (CVNNs) yield higher classification accuracy than real-valued neural networks (RVNNs).
Our study provides a deeper understanding of this trend using real LoRa and WiFi RF datasets.
arXiv Detail & Related papers (2022-02-20T10:35:20Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph
Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
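To make the Canonical/Polyadic (CP) idea in this last entry concrete: the CP constraint replaces a dense weight tensor with a sum of R rank-1 terms, so a two-dimensional input can be scored without vectorization. The NumPy sketch below is purely illustrative; the shapes, the rank R, and all names are our assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, R = 8, 8, 3            # illustrative input shape and CP rank
a = rng.normal(size=(R, H))  # mode-1 factor vectors
b = rng.normal(size=(R, W))  # mode-2 factor vectors

def rank_r_response(x):
    """Score an H x W input as sum_r a_r^T x b_r, i.e. against the CP weight
    sum_r a_r (outer) b_r: only R*(H+W) parameters instead of H*W."""
    return sum(a[r] @ x @ b[r] for r in range(R))

x = rng.normal(size=(H, W))
print(rank_r_response(x))    # one scalar activation; x is never flattened
```

The saving grows with dimensionality: for an order-d input, a full weight tensor costs the product of all mode sizes, while the CP form costs R times their sum.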
This list is automatically generated from the titles and abstracts of the papers on this site.