Neural Computing with Coherent Laser Networks
- URL: http://arxiv.org/abs/2204.02224v1
- Date: Tue, 5 Apr 2022 13:56:34 GMT
- Title: Neural Computing with Coherent Laser Networks
- Authors: Mohammad-Ali Miri, and Vinod Menon
- Abstract summary: We show that a coherent network of lasers exhibits emergent neural computing capabilities.
A novel energy-based recurrent neural network handles continuous data, in contrast to Hopfield networks and Boltzmann machines, which are intrinsically binary.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show that a coherent network of lasers exhibits emergent neural computing
capabilities. The proposed scheme is built on harnessing the collective
behavior of laser networks for storing a number of phase patterns as stable
fixed points of the governing dynamical equations and retrieving such patterns
through proper excitation conditions, thus exhibiting an associative memory
property. The associative memory functionality is first discussed in the strong
pumping regime of a network of passive dissipatively coupled lasers which
simulate the classical XY model. Despite the large storage capacity of the network,
the large overlap between fixed-point patterns effectively limits pattern retrieval to
only two images. Next, we show that this restriction can be lifted by using
nonreciprocal coupling between lasers, which allows the large storage capacity to be
utilized. This work opens new
possibilities for neural computation with coherent laser networks as novel
analog processors. In addition, the underlying dynamical model discussed here
suggests a novel energy-based recurrent neural network that handles continuous
data, as opposed to Hopfield networks and Boltzmann machines, which are
intrinsically binary systems.
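To make the associative-memory mechanism concrete, the sketch below simulates a small XY-type phase network of the kind the abstract says the laser array emulates in the strong-pumping regime. It is only an illustrative toy, not the paper's laser rate equations: the Hebbian-style coupling rule, the network size, the number of stored patterns, and the Euler integration step are assumptions made for the example, and the amplitude/gain dynamics of real lasers are ignored.

```python
# Illustrative sketch (not the paper's exact model): retrieval of stored phase
# patterns in an XY-type oscillator network, which dissipatively coupled lasers
# are argued to simulate in the strong-pumping limit.
import numpy as np

rng = np.random.default_rng(0)

N = 200       # number of lasers / phase oscillators (assumed)
P = 2         # stored patterns; the paper argues symmetric coupling limits retrieval to ~2
steps = 2000  # Euler integration steps (assumed)
dt = 0.05     # Euler step size (assumed)

# Stored phase patterns xi[mu, i] in [0, 2*pi)
xi = rng.uniform(0.0, 2.0 * np.pi, size=(P, N))

# Hebbian-style symmetric coupling for phase patterns:
# J_ij = (1/N) * sum_mu cos(xi_mu_i - xi_mu_j)
J = np.zeros((N, N))
for mu in range(P):
    d = xi[mu][:, None] - xi[mu][None, :]
    J += np.cos(d) / N
np.fill_diagonal(J, 0.0)

# "Excitation condition": initialize near a noisy version of pattern 0
theta = xi[0] + 0.4 * rng.standard_normal(N)

def overlap(theta, pattern):
    """|(1/N) sum_i exp(i*(theta_i - xi_i))|; 1 means perfect retrieval (up to a global phase)."""
    return np.abs(np.mean(np.exp(1j * (theta - pattern))))

# Gradient flow on the XY energy E = -(1/2) sum_ij J_ij cos(theta_i - theta_j):
# dtheta_i/dt = -dE/dtheta_i = -sum_j J_ij sin(theta_i - theta_j)
for _ in range(steps):
    diff = theta[:, None] - theta[None, :]
    theta = theta - dt * np.sum(J * np.sin(diff), axis=1)

print("overlap with pattern 0:", overlap(theta, xi[0]))  # should approach 1
print("overlap with pattern 1:", overlap(theta, xi[1]))  # should stay small
```

Because the state variables are continuous phases rather than binary spins, the same gradient-flow picture illustrates the abstract's closing point about an energy-based recurrent network for continuous data; with the symmetric coupling used here, however, pattern overlap limits reliable retrieval, which is why only two patterns are stored in this toy example.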
Related papers
- Dense Associative Memory Through the Lens of Random Features [48.17520168244209]
Dense Associative Memories are high-storage-capacity variants of Hopfield networks.
We show that this novel network closely approximates the energy function and dynamics of conventional Dense Associative Memories.
arXiv Detail & Related papers (2024-10-31T17:10:57Z)
- Explosive neural networks via higher-order interactions in curved statistical manifolds [43.496401697112695]
We introduce curved neural networks as a class of prototypical models for studying higher-order phenomena.
We show that these curved neural networks implement a self-regulating process that can accelerate memory retrieval.
arXiv Detail & Related papers (2024-08-05T09:10:29Z)
- Approximating nonlinear functions with latent boundaries in low-rank excitatory-inhibitory spiking networks [5.955727366271805]
We put forth a new framework for excitatory-inhibitory spiking networks.
Our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
arXiv Detail & Related papers (2023-07-18T15:17:00Z) - Long Sequence Hopfield Memory [32.28395813801847]
Sequence memory enables agents to encode, store, and retrieve complex sequences of stimuli and actions.
We introduce a nonlinear interaction term, enhancing separation between the patterns.
We extend this model to store sequences with variable timing between state transitions.
arXiv Detail & Related papers (2023-06-07T15:41:03Z) - Exploring the Approximation Capabilities of Multiplicative Neural
Networks for Smooth Functions [9.936974568429173]
We consider two classes of target functions: generalized bandlimited functions and Sobolev-type balls.
Our results demonstrate that multiplicative neural networks can approximate these functions with significantly fewer layers and neurons.
These findings suggest that multiplicative gates can outperform standard feed-forward layers and have potential for improving neural network design.
arXiv Detail & Related papers (2023-01-11T17:57:33Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Kernel Memory Networks: A Unifying Framework for Memory Modeling [9.142894972380216]
We consider the problem of training a neural network to store a set of patterns with maximal noise robustness.
A solution is derived by training each individual neuron to perform either kernel classification or interpolation with a minimum weight norm.
We derive optimal models, termed kernel memory networks, that include, as special cases, many of the hetero- and auto-associative memory models.
arXiv Detail & Related papers (2022-08-19T16:01:09Z) - Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural
Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
arXiv Detail & Related papers (2022-04-05T17:13:36Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the McCulloch-Pitts (MP) model, which formulates the neuron as applying an activation function to the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.