On the Computational Complexities of Complex-valued Neural Networks
- URL: http://arxiv.org/abs/2310.13075v1
- Date: Thu, 19 Oct 2023 18:14:04 GMT
- Title: On the Computational Complexities of Complex-valued Neural Networks
- Authors: Kayol Soares Mayer, Jonathan Aguiar Soares, Ariadne Arrais Cruz,
Dalton Soares Arantes
- Abstract summary: Complex-valued neural networks (CVNNs) are nonlinear filters used in the digital signal processing of complex-domain data.
This paper presents both the quantitative and asymptotic computational complexities of CVNNs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complex-valued neural networks (CVNNs) are nonlinear filters used in the
digital signal processing of complex-domain data. Compared with real-valued
neural networks (RVNNs), CVNNs can directly handle complex-valued input and
output signals due to their complex domain parameters and activation functions.
With the trend toward low-power systems, computational complexity analysis has
become essential for measuring an algorithm's power consumption. Therefore,
this paper presents both the quantitative and asymptotic computational
complexities of CVNNs. Such analysis is a crucial tool for deciding which algorithm to
implement. The mathematical operations are described in terms of the number of
real-valued multiplications, as these are the most demanding operations. To
determine which CVNN can be implemented in a low-power system, quantitative
computational complexities can be used to accurately estimate the number of
floating-point operations. We have also investigated the computational
complexities of the CVNNs discussed in several studies in the literature.
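As a rough illustration of this counting convention, the sketch below (our illustration, not the paper's formulas; the function names are ours) tallies real-valued multiplications for a fully connected layer, using the standard identity that one complex multiplication costs four real multiplications, or three with Gauss's multiplication trick:

```python
def rvnn_dense_real_mults(n_in: int, n_out: int) -> int:
    # A real-valued dense layer uses one real multiplication per weight.
    return n_in * n_out

def cvnn_dense_real_mults(n_in: int, n_out: int, gauss: bool = False) -> int:
    # One complex multiplication (a+bi)(c+di) costs 4 real multiplications,
    # or 3 if Gauss's trick trades one multiplication for extra additions.
    per_mult = 3 if gauss else 4
    return per_mult * n_in * n_out

print(rvnn_dense_real_mults(64, 32))        # 2048
print(cvnn_dense_real_mults(64, 32))        # 8192
print(cvnn_dense_real_mults(64, 32, True))  # 6144
```

The factor of four (or three) per complex multiplication is what makes the quantitative, rather than merely asymptotic, counts matter for low-power implementations.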
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that the resulting Scalable MNN (S-MNN) matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z) - Comprehensive Survey of Complex-Valued Neural Networks: Insights into Backpropagation and Activation Functions [0.0]
Despite the prevailing use of real-number implementations in current ANN frameworks, there is a growing interest in developing ANNs that utilize complex numbers.
This paper presents a survey of recent advancements in complex-valued neural networks (CVNNs).
We delve into the extension of the backpropagation algorithm to the complex domain, which enables the training of neural networks with complex-valued inputs, weights, AFs, and outputs.
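The complex-domain extension of backpropagation typically rests on Wirtinger calculus, where the steepest-descent direction for a complex weight is the derivative of the loss with respect to the conjugate weight. A minimal NumPy sketch (our illustration under a single-weight, squared-error assumption, not code from the survey) checks the Wirtinger gradient of L = |wz - t|^2 against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
z = complex(rng.normal(), rng.normal())  # fixed complex input
t = complex(rng.normal(), rng.normal())  # complex target
w = complex(rng.normal(), rng.normal())  # complex weight

def loss(w):
    e = w * z - t
    return (e * e.conjugate()).real      # L = |wz - t|^2

# Wirtinger gradient: grad_w L = 2 * dL/d(conj(w)) = 2 * (wz - t) * conj(z)
grad = 2.0 * (w * z - t) * z.conjugate()

# Finite-difference check on the real and imaginary parts of w
h = 1e-6
num = ((loss(w + h) - loss(w - h)) / (2 * h)
       + 1j * (loss(w + 1j * h) - loss(w - 1j * h)) / (2 * h))
assert abs(grad - num) < 1e-5

w -= 0.1 * grad                          # one complex gradient-descent step
```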
arXiv Detail & Related papers (2024-07-27T13:47:16Z) - Complex-valued Neural Networks -- Theory and Analysis [0.0]
This work addresses different structures and classification of CVNNs.
The theory behind complex activation functions, implications of complex differentiability, and special activations for CVNN output layers are presented.
The objective of this work is to understand the dynamics and most recent developments of CVNNs.
arXiv Detail & Related papers (2023-12-11T03:24:26Z) - An exact mathematical description of computation with transient
spatiotemporal dynamics in a complex-valued neural network [33.7054351451505]
We study a complex-valued neural network (cv-NN) with linear time-delayed interactions.
The cv-NN displays sophisticated dynamics, including partially synchronized, adaptable "chimera" states.
We demonstrate that computations in the cv-NN are decodable by living biological neurons.
arXiv Detail & Related papers (2023-11-28T02:23:30Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
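snnTorch itself is pip-installable; a minimal, hardware-agnostic usage sketch of its leaky integrate-and-fire neuron (independent of the IPU-specific release described in the paper) looks like:

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)   # leaky integrate-and-fire neuron, decay rate beta
mem = lif.init_leaky()      # initial membrane potential
cur = torch.rand(1, 10)     # example input current for one time step
spk, mem = lif(cur, mem)    # one step: output spikes and updated membrane
```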
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z) - Computational Complexity Evaluation of Neural Network Applications in
Signal Processing [3.4656382116457767]
We provide a systematic approach for assessing and comparing the computational complexity of neural network layers in digital signal processing.
One of the four metrics, called the 'number of additions and bit shifts' (NABS), is newly introduced for heterogeneous quantization.
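For rough intuition about shift/add-based metrics (an illustration of shift-and-add multiplication in general, not the paper's exact NABS definition), multiplying by an integer-quantized weight costs one bit shift per nonzero bit above position zero and one addition per extra nonzero term:

```python
def shift_add_cost(q: int) -> tuple[int, int]:
    # Bit shifts and additions needed to compute x*q by shift-and-add,
    # e.g. x*10 = (x << 3) + (x << 1): 2 shifts, 1 addition.
    bits = [i for i in range(abs(q).bit_length()) if (abs(q) >> i) & 1]
    shifts = sum(1 for i in bits if i > 0)
    adds = max(len(bits) - 1, 0)
    return shifts, adds

print(shift_add_cost(10))  # (2, 1)
```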
arXiv Detail & Related papers (2022-06-24T10:02:02Z) - Spectral Complexity-scaled Generalization Bound of Complex-valued Neural
Networks [78.64167379726163]
This paper is the first work to prove a generalization bound for complex-valued neural networks.
We conduct experiments by training complex-valued convolutional neural networks on different datasets.
arXiv Detail & Related papers (2021-12-07T03:25:25Z) - A Survey of Quantization Methods for Efficient Neural Network Inference [75.55159744950859]
Quantization is the problem of distributing continuous real-valued numbers over a fixed discrete set of numbers to minimize the number of bits required.
It has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas.
Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x.
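A minimal sketch of symmetric uniform quantization to signed 4-bit codes (our illustration of the general idea, not code from the survey); note that 32-bit floats stored as 4-bit codes shrink weight storage by 8x, with lower bit-widths and activation savings pushing toward the larger factors cited:

```python
import numpy as np

def quantize_uniform(x: np.ndarray, n_bits: int = 4):
    # Symmetric uniform quantizer: scale floats onto signed n-bit integers.
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=1024).astype(np.float32)
q, scale = quantize_uniform(w, n_bits=4)
print("max abs error:", np.abs(w - dequantize(q, scale)).max())
```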
arXiv Detail & Related papers (2021-03-25T06:57:11Z) - A Survey of Complex-Valued Neural Networks [4.211128681972148]
Artificial neural networks (ANNs) based machine learning models have been widely applied in computer vision, signal processing, wireless communications, and many other domains.
Most of the current implementations of ANNs and machine learning frameworks are using real numbers rather than complex numbers.
There is growing interest in building ANNs using complex numbers and in exploring the potential advantages of so-called complex-valued neural networks (CVNNs) over their real-valued counterparts.
arXiv Detail & Related papers (2021-01-28T19:40:50Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs, with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.