A note on the complex and bicomplex valued neural networks
- URL: http://arxiv.org/abs/2202.02354v1
- Date: Fri, 4 Feb 2022 19:25:01 GMT
- Title: A note on the complex and bicomplex valued neural networks
- Authors: Daniel Alpay and Kamal Diki and Mihaela Vajiac
- Abstract summary: We first write a proof of the perceptron convergence algorithm for the complex multivalued neural networks (CMVNNs). Our primary goal is to formulate and prove the perceptron convergence algorithm for the bicomplex multivalued neural networks (BMVNNs).
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper we first write a proof of the perceptron convergence algorithm
for the complex multivalued neural networks (CMVNNs). Our primary goal is to
formulate and prove the perceptron convergence algorithm for the bicomplex
multivalued neural networks (BMVNNs) and other important results in the theory
of neural networks based on a bicomplex algebra.
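As a hedged illustration of the CMVNN setting, the sketch below implements one complex multivalued neuron with the discrete activation (projection onto a $k$-th root of unity) and the classical error-correction update; the activation, the conjugate-input update, and the $1/(n+1)$ step size follow the general MVN literature and are illustrative assumptions, not this paper's exact notation.

```python
import numpy as np

def mvn_activation(z, k):
    """Discrete multivalued activation: project z onto the k-th root of
    unity whose sector of the unit circle contains arg(z)."""
    sector = np.floor(k * np.angle(z) / (2 * np.pi)) % k
    return np.exp(2j * np.pi * sector / k)

def train_mvn(X, d, k, epochs=100):
    """Error-correction learning for one complex multivalued neuron.
    X: (samples, features) complex inputs; d: desired k-th roots of unity."""
    n = X.shape[1]
    w = np.zeros(n + 1, dtype=complex)                  # bias stored in w[0]
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    for _ in range(epochs):
        mistakes = 0
        for x, target in zip(Xb, d):
            y = mvn_activation(w @ x, k)
            if not np.isclose(y, target):               # wrong output sector
                # classical correction step: pull the weighted sum toward
                # the desired sector via the complex-conjugated input
                w += (target - y) * np.conj(x) / (n + 1)
                mistakes += 1
        if mistakes == 0:                               # perceptron converged
            break
    return w
```

In the bicomplex case the same scheme runs over numbers $Z = z_1 + \mathbf{j} z_2$ with $z_1, z_2 \in \mathbb{C}(\mathbf{i})$, $\mathbf{j}^2 = -1$ and $\mathbf{i}\mathbf{j} = \mathbf{j}\mathbf{i}$, which is where the bicomplex algebra of the abstract enters.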
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
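As a rough, hypothetical reading of "computational graphs of parameters", the sketch below turns an MLP's weight matrices into a directed graph whose nodes are neurons and whose edge attributes are weights; the paper's actual encoding (and the equivariance machinery built on it) is more elaborate.

```python
import numpy as np
import networkx as nx

def mlp_to_graph(weights):
    """Nodes are (layer, unit) pairs; each weight becomes a directed,
    attributed edge, so architectures of any shape map to one graph format."""
    g = nx.DiGraph()
    for layer, W in enumerate(weights):        # W: (fan_in, fan_out)
        for i in range(W.shape[0]):
            for j in range(W.shape[1]):
                g.add_edge((layer, i), (layer + 1, j), weight=float(W[i, j]))
    return g

# a 3-4-2 MLP yields 9 nodes and 3*4 + 4*2 = 20 weighted edges
g = mlp_to_graph([np.random.randn(3, 4), np.random.randn(4, 2)])
print(g.number_of_nodes(), g.number_of_edges())        # 9 20
```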
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Evidence, Definitions and Algorithms regarding the Existence of Cohesive-Convergence Groups in Neural Network Optimization [0.0]
Understanding the training process of neural networks is one of the most complex and crucial issues in the field of machine learning.
This paper focuses on the theoretical convergence of artificial neural networks.
arXiv Detail & Related papers (2024-03-08T13:23:42Z)
- Universal Approximation Theorem for Vector- and Hypercomplex-Valued Neural Networks [0.3686808512438362]
The universal approximation theorem states that a neural network with one hidden layer can approximate continuous functions on compact sets.
It is valid for real-valued neural networks and some hypercomplex-valued neural networks.
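Concretely, the classical statement reads: for every continuous $f$ on a compact set $K \subset \mathbb{R}^n$ and every $\varepsilon > 0$ there are weights such that

```latex
\sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} c_i\, \sigma\bigl(w_i^{\top} x + b_i\bigr) \Bigr| < \varepsilon,
```

with $\sigma$ a fixed non-polynomial activation; the hypercomplex-valued versions replace the real products above by multiplication in the chosen algebra.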
arXiv Detail & Related papers (2024-01-04T13:56:13Z)
- On the Approximation and Complexity of Deep Neural Networks to Invariant Functions [0.0]
We study the approximation and complexity of deep neural networks to invariant functions.
We show that a broad range of invariant functions can be approximated by various types of neural network models.
We provide a feasible application that connects the parameter estimation and forecasting of high-resolution signals with our theoretical conclusions.
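For a concrete instance, the sum-decomposition below is a standard permutation-invariant approximator (one common type of invariant function; whether it matches the paper's model classes is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.standard_normal((1, 8))    # per-element encoder phi
W_rho = rng.standard_normal((8, 1))    # readout rho

def invariant_net(xs):
    """f(x_1,...,x_n) = rho(sum_i phi(x_i)) is invariant under any
    permutation of the inputs because the sum-pooling commutes."""
    pooled = sum(np.tanh(np.array([[x]]) @ W_phi) for x in xs)
    return np.tanh(pooled @ W_rho).item()

xs = [0.2, -1.1, 0.5]
print(np.isclose(invariant_net(xs), invariant_net(xs[::-1])))   # True
```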
arXiv Detail & Related papers (2022-10-27T09:19:19Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
Work on the neural tangent kernel (NTK) has been devoted to typical neural network architectures, but is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
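For reference, the NTK of a model $f(x;\theta)$ is the Gram matrix of its parameter gradients,

```latex
\Theta(x, x') \;=\; \bigl\langle \nabla_{\theta} f(x;\theta),\, \nabla_{\theta} f(x';\theta) \bigr\rangle,
```

and the equivalence mentioned above says that training such a network by gradient descent behaves like kernel regression with $\Theta$; the entry concerns the finite-width version of this object for Hadamard-product architectures.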
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Extending the Universal Approximation Theorem for a Broad Class of Hypercomplex-Valued Neural Networks [1.0323063834827413]
The universal approximation theorem asserts that a single hidden layer neural network approximates continuous functions with any desired precision on compact sets.
This paper extends the universal approximation theorem for a broad class of hypercomplex-valued neural networks.
arXiv Detail & Related papers (2022-09-06T12:45:15Z)
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis [61.53545734991802]
We propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network.
Our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
arXiv Detail & Related papers (2022-03-18T07:05:27Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a $\textit{spectral bias}$ towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the $\Pi$-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher-frequency components.
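As a hedged sketch of the Hadamard-product structure behind $\Pi$-Nets (a stripped-down version of one $\Pi$-Net parametrization; widths, depth, and the skip term are illustrative assumptions):

```python
import numpy as np

def pi_net(z, Us, C, beta):
    """Each step multiplies a linear map of the input element-wise into the
    running state, so the output is a degree-len(Us) polynomial of z."""
    x = Us[0] @ z
    for U in Us[1:]:
        x = (U @ z) * x + x     # Hadamard product raises the degree by one
    return C @ x + beta

rng = np.random.default_rng(0)
d, k = 4, 16                    # input dim and hidden width (assumed)
Us = [rng.standard_normal((k, d)) for _ in range(3)]
C, beta = rng.standard_normal((1, k)), rng.standard_normal(1)
print(pi_net(rng.standard_normal(d), Us, C, beta))  # a degree-3 polynomial of z
```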
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Development and Training of Quantum Neural Networks, Based on the Principles of Grover's Algorithm [0.0]
This paper proposes the concept of combining the training process of a neural network with the functional structure of that neural network, interpreted as a quantum circuit.
As a simple example to showcase the concept, a perceptron with one trainable parameter, the weight of a synapse connected to a hidden neuron, is used.
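For context, Grover's algorithm repeatedly applies an oracle phase flip followed by an "inversion about the mean"; the toy NumPy simulation below shows that loop for a plain search problem (it illustrates the algorithm itself, not the paper's quantum-circuit training; the marked index is an arbitrary assumption).

```python
import numpy as np

n_qubits, marked = 3, 5                    # 2**3 = 8 states; marked index assumed
N = 2 ** n_qubits
state = np.full(N, 1 / np.sqrt(N))         # uniform superposition

for _ in range(int(np.pi / 4 * np.sqrt(N))):   # ~(pi/4)*sqrt(N) iterations
    state[marked] *= -1                    # oracle: phase-flip the marked state
    state = 2 * state.mean() - state       # diffusion: inversion about the mean

print(np.argmax(state**2), round(float(state[marked]**2), 3))   # 5 0.945
```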
arXiv Detail & Related papers (2021-10-01T14:08:43Z)
- Towards Understanding Theoretical Advantages of Complex-Reaction Networks [77.34726150561087]
We show that a class of functions can be approximated by a complex-reaction network using a polynomial number of parameters.
For empirical risk minimization, our theoretical result shows that the critical point set of complex-reaction networks is a proper subset of that of real-valued networks.
arXiv Detail & Related papers (2021-08-15T10:13:49Z)
- Stability of Algebraic Neural Networks to Small Perturbations [179.55535781816343]
Algebraic neural networks (AlgNNs) are composed of a cascade of layers, each one associated with an algebraic signal model.
We show how any architecture that uses a formal notion of convolution can be stable beyond particular choices of the shift operator.
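Concretely, an algebraic filter is a polynomial in a shift operator $S$ applied to a signal $x$,

```latex
h(S)\,x \;=\; \sum_{k=0}^{K} h_k\, S^{k} x,
```

and the stability claim is that small perturbations of $S$ change $h(S)x$ in a controlled way, whichever operator (graph shift, time shift, ...) instantiates the algebra.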
arXiv Detail & Related papers (2020-10-22T09:10:16Z)