Stability Analysis of Fractional Order Memristor Synapse-coupled
Hopfield Neural Network with Ring Structure
- URL: http://arxiv.org/abs/2109.14383v2
- Date: Wed, 6 Jul 2022 13:56:45 GMT
- Title: Stability Analysis of Fractional Order Memristor Synapse-coupled
Hopfield Neural Network with Ring Structure
- Authors: Leila Eftekhari, Mohammad M. Amirian
- Abstract summary: We first present a fractional-order memristor synapse-coupled Hopfield neural network on two neurons.
We extend the model to a neural network with a ring structure that consists of n sub-network neurons, increasing the synchronization in the network.
In the n-neuron case, it is revealed that the stability depends on the structure and number of sub-networks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A memristor is a nonlinear two-terminal electrical element that incorporates
memory features and nanoscale properties, enabling us to design very
high-density artificial neural networks. To enhance this memory property, we
use fractional calculus, a mathematical framework capable of capturing memory
effects. Here, we first present a fractional-order memristor
synapse-coupled Hopfield neural network on two neurons and then extend the
model to a neural network with a ring structure that consists of n sub-network
neurons, increasing the synchronization in the network. Necessary and
sufficient conditions for the stability of equilibrium points are investigated,
highlighting the dependency of the stability on the fractional-order value and
the number of neurons. Numerical simulations and bifurcation analysis, along
with Lyapunov exponents, are given in the two-neuron case that substantiates
the theoretical findings, suggesting possible routes towards chaos when the
fractional order of the system increases. In the n-neuron case, it is likewise
shown that stability depends on the structure and the number of sub-networks.
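A concrete way to see the stated dependence of stability on the fractional order is Matignon's criterion for commensurate fractional-order linear systems: an equilibrium of D^q x = J x (with 0 < q <= 1) is asymptotically stable if and only if every eigenvalue lambda of the Jacobian J satisfies |arg(lambda)| > q*pi/2. The sketch below is a minimal illustration of that check; the Jacobian entries are hypothetical and are not taken from the paper's memristive Hopfield model.

```python
import numpy as np

def is_stable_fractional(J, q):
    """Matignon's criterion for the commensurate fractional-order linear
    system D^q x = J x with 0 < q <= 1: the equilibrium is asymptotically
    stable iff every eigenvalue lambda of J satisfies
    |arg(lambda)| > q * pi / 2."""
    eigvals = np.linalg.eigvals(J)
    return bool(np.all(np.abs(np.angle(eigvals)) > q * np.pi / 2))

# Hypothetical Jacobian of a two-neuron network linearized at an
# equilibrium (illustrative values only, not from the paper's model).
J = np.array([[0.5,  2.0],
              [-2.0, 0.5]])

for q in (0.5, 0.8, 0.9, 1.0):
    print(f"q = {q:.2f}  stable: {is_stable_fractional(J, q)}")
```

Because the stable sector |arg(lambda)| > q*pi/2 shrinks as q grows, an equilibrium that is stable at a low fractional order can lose stability as q approaches 1, which is consistent with the routes towards chaos reported in the two-neuron bifurcation analysis.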
Related papers
- Stochastic Gradient Descent for Two-layer Neural Networks [2.0349026069285423]
This paper presents a study of the convergence rates of the stochastic gradient descent (SGD) algorithm when applied to overparameterized two-layer neural networks.
Our approach combines the Neural Tangent Kernel (NTK) approximation with convergence analysis in the Reproducing Kernel Hilbert Space (RKHS) generated by the NTK.
Our research framework enables us to explore the intricate interplay between kernel methods and optimization processes, shedding light on the dynamics and convergence properties of neural networks.
arXiv Detail & Related papers (2024-07-10T13:58:57Z) - Polariton lattices as binarized neuromorphic networks [0.0]
We introduce a novel neuromorphic network architecture based on a lattice of exciton-polariton condensates, intricately interconnected and energized through non-resonant optical pumping.
The network employs a binary framework, where each neuron, facilitated by the spatial coherence of pairwise coupled condensates, performs binary operations.
The network's performance was evaluated using the MNIST dataset for handwritten digit recognition, showcasing the potential to outperform existing polaritonic neuromorphic systems.
arXiv Detail & Related papers (2024-01-14T08:32:41Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z) - A Step Towards Uncovering The Structure of Multistable Neural Networks [1.14219428942199]
We study the structure of multistable recurrent neural networks.
The activation function is simplified to a nonsmooth Heaviside step function.
We derive how multistability is encoded within the network architecture.
arXiv Detail & Related papers (2022-10-06T22:54:17Z) - Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural
Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
arXiv Detail & Related papers (2022-04-05T17:13:36Z) - Linear approximability of two-layer neural networks: A comprehensive
analysis based on spectral decay [4.042159113348107]
We first consider the case of a single neuron and show that the linear approximability, quantified by the Kolmogorov width, is controlled by the eigenvalue decay of an associated kernel.
We show that similar results also hold for two-layer neural networks.
arXiv Detail & Related papers (2021-08-10T23:30:29Z) - Geometry Perspective Of Estimating Learning Capability Of Neural
Networks [0.0]
The paper considers a broad class of neural networks with a generalized architecture performing simple least-squares regression with stochastic gradient descent (SGD).
The relationship between the generalization capability with the stability of the neural network has also been discussed.
By correlating the principles of high-energy physics with the learning theory of neural networks, the paper establishes a variant of the Complexity-Action conjecture from an artificial neural network perspective.
arXiv Detail & Related papers (2020-11-03T12:03:19Z) - Stability of Algebraic Neural Networks to Small Perturbations [179.55535781816343]
Algebraic neural networks (AlgNNs) are composed of a cascade of layers, each one associated with an algebraic signal model.
We show how any architecture that uses a formal notion of convolution can be stable beyond particular choices of the shift operator.
arXiv Detail & Related papers (2020-10-22T09:10:16Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.