Hybrid Magnonic Reservoir Computing
- URL: http://arxiv.org/abs/2405.09542v1
- Date: Thu, 25 Apr 2024 18:21:43 GMT
- Title: Hybrid Magnonic Reservoir Computing
- Authors: Cliff B. Abbott, Dmytro A. Bozhko
- Abstract summary: We build on an established design for using an Auto-Oscillation Ring as a reservoir computer.
We show that the designs perform comparably to or better than traditional dense neural networks on various real-world data sets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Magnonic systems have been a major area of research interest due to their potential benefits in speed and lower power consumption compared to traditional computing. One particular area where they may offer an advantage is as Physical Reservoir Computers in machine learning models. In this work, we build on an established design for using an Auto-Oscillation Ring as a reservoir computer by introducing a simple neural network midstream, and we introduce an additional design that uses a spin wave guide with a scattering regime for processing data with different types of inputs. We simulate these designs in the new micromagnetic simulation software, Magnum.np, and show that the designs perform comparably to or better than traditional dense neural networks on various real-world data sets.
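The abstract follows the standard reservoir-computing recipe: a fixed physical system (here the Auto-Oscillation Ring) transforms inputs into a rich internal state, and only a lightweight readout is trained. As a rough, non-authoritative illustration of that recipe, the sketch below substitutes a generic echo-state network in NumPy for the magnonic reservoir and a ridge-regression readout for the trained layer; the reservoir size, leak rate, spectral radius, and toy sine-prediction task are illustrative assumptions, not the authors' setup or the Magnum.np simulation.

```python
import numpy as np

# Minimal software stand-in for a physical reservoir computer (assumed
# echo-state-network dynamics, NOT the magnonic auto-oscillation ring):
# the recurrent weights stay fixed and only the linear readout is trained.

rng = np.random.default_rng(0)

N_IN, N_RES = 1, 200                 # input and reservoir sizes (illustrative)
LEAK, RHO, RIDGE = 0.3, 0.9, 1e-6    # leak rate, spectral radius, ridge penalty

W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W *= RHO / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to target spectral radius

def run_reservoir(u):
    """Drive the fixed reservoir with inputs u of shape (T, N_IN); return states (T, N_RES)."""
    x = np.zeros(N_RES)
    states = np.empty((len(u), N_RES))
    for t, u_t in enumerate(u):
        x = (1 - LEAK) * x + LEAK * np.tanh(W_in @ u_t + W @ x)
        states[t] = x
    return states

# Toy task: one-step-ahead prediction of a sine wave (placeholder for a real data set).
u = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]
y = np.roll(u, -1, axis=0)

X = run_reservoir(u)
# Train only the readout: ridge regression, a single linear solve.
W_out = np.linalg.solve(X.T @ X + RIDGE * np.eye(N_RES), X.T @ y)
print("train MSE:", float(np.mean((X @ W_out - y) ** 2)))
```

In a hardware realization, run_reservoir would be replaced by measured responses of the magnonic device (optionally passed through the small midstream neural network the paper describes), while training still reduces to fitting the readout.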
Related papers
- A Realistic Simulation Framework for Analog/Digital Neuromorphic Architectures [73.65190161312555]
ARCANA is a spiking neural network simulator designed to account for the properties of mixed-signal neuromorphic circuits.
We show how the results obtained provide a reliable estimate of the behavior of the spiking neural network trained in software.
arXiv Detail & Related papers (2024-09-23T11:16:46Z)
- SpiNNaker2: A Large-Scale Neuromorphic System for Event-Based and Asynchronous Machine Learning [12.300710699791418]
SpiNNaker2 is a digital neuromorphic chip developed for scalable machine learning.
This work features the operating principles of SpiNNaker2 systems, outlining the prototype of novel machine learning applications.
arXiv Detail & Related papers (2024-01-09T11:07:48Z)
- A Physics-Informed Neural Network to Model Port Channels [0.09830751917335563]
PINN models aim to combine the knowledge of physical systems and data-driven machine learning models.
First, we design our model to assume that the flow is periodic in time, which is not feasible in conventional simulation methods.
Second, we evaluate the benefit of resampling the function evaluation points during training, which has a near zero computational cost.
arXiv Detail & Related papers (2022-12-20T22:53:19Z)
- Scalable Nanophotonic-Electronic Spiking Neural Networks [3.9918594409417576]
Spiking neural networks (SNN) provide a new computational paradigm capable of highly parallelized, real-time processing.
Photonic devices are ideal for the design of high-bandwidth, parallel architectures matching the SNN computational paradigm.
Co-integrated CMOS and SiPh technologies are well-suited to the design of scalable SNN computing architectures.
arXiv Detail & Related papers (2022-08-28T06:10:06Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchy populations.
The proposed approach enables the developments of biomimetic neuromorphic system and various low-power, and low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Building Reservoir Computing Hardware Using Low Energy-Barrier Magnetics [0.0]
Biologically inspired recurrent neural networks, such as reservoir computers, are of interest from a hardware point of view due to their simple learning scheme and deep connections to Kalman filters.
Implementing reservoir computers with such devices may enable compact, energy-efficient signal processors for standalone or in-situ machine cognition in edge devices.
arXiv Detail & Related papers (2020-07-06T14:11:45Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Reservoir Computing with Planar Nanomagnet Arrays [58.40902139823252]
Planar nanomagnet reservoirs are a promising new solution to the growing need for dedicated neuromorphic hardware.
arXiv Detail & Related papers (2020-03-24T16:25:31Z)