Protein Structured Reservoir computing for Spike-based Pattern
Recognition
- URL: http://arxiv.org/abs/2008.03330v2
- Date: Mon, 29 Mar 2021 08:09:06 GMT
- Title: Protein Structured Reservoir computing for Spike-based Pattern
Recognition
- Authors: Karolos-Alexandros Tsakalos, Georgios Ch. Sirakoulis, Andrew
Adamatzky, Jim Smith
- Abstract summary: We implement reservoir computing on a single protein molecule and introduce neuromorphic connectivity with a small-world networking property.
We apply various supervised training methods to a single readout layer to investigate whether the molecularly structured Reservoir Computing system can deal with machine learning benchmarks.
The RC network is evaluated as a proof-of-concept on the handwritten digit images from the MNIST dataset and demonstrates acceptable classification accuracy in comparison with other similar approaches.
- Score: 0.37798600249187286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays we witness a miniaturisation trend in the semiconductor industry, backed by groundbreaking discoveries and designs in nanoscale characterisation and fabrication. To sustain this trend and produce ever smaller, faster and cheaper computing devices, the size of nanoelectronic devices is now reaching the scale of atoms or molecules, a technical goal that undoubtedly demands novel devices. Following this trend, we explore an unconventional route: implementing reservoir computing on a single protein molecule and introducing neuromorphic connectivity with a small-world networking property. We have chosen Izhikevich spiking neurons as elementary processors, corresponding to the atoms of the verotoxin protein, and its molecular structure as the 'hardware' architecture of the communication network connecting the processors. We apply various supervised training methods to a single readout layer to investigate whether this molecularly structured Reservoir Computing (RC) system can cope with machine learning benchmarks. We start with the Remote Supervised Method, based on Spike-Timing-Dependent Plasticity, and continue with linear regression and scaled conjugate gradient back-propagation training methods. The RC network is evaluated as a proof of concept on handwritten digit images from the MNIST dataset and demonstrates acceptable classification accuracy in comparison with other similar approaches.
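To make the described pipeline concrete, below is a minimal sketch, not the authors' implementation. The assumptions are labelled in the comments: a Watts-Strogatz-style small-world graph stands in for the verotoxin contact map, inputs enter through a hypothetical random projection W_in, the reservoir state is the per-neuron spike count of Izhikevich neurons over a fixed window, and the readout is trained with ridge-regularised linear regression, one of the three training methods named in the abstract.

```python
# Minimal sketch of the pipeline described above; not the authors' code.
# Assumptions (not from the paper): a Watts-Strogatz-style graph replaces the
# verotoxin contact map, inputs enter via a random projection W_in, and the
# reservoir state is the per-neuron spike count over a fixed simulation window.
import numpy as np

rng = np.random.default_rng(0)

def small_world_adjacency(n, k=6, p=0.1):
    """Ring lattice with rewired shortcuts, a surrogate for the protein network."""
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(1, k // 2 + 1):
            A[i, (i + j) % n] = A[(i + j) % n, i] = 1.0
    rewire = (A > 0) & (rng.random((n, n)) < p)
    A[rewire] = 0.0
    for i, _ in zip(*np.nonzero(rewire)):
        A[i, rng.integers(n)] = 1.0              # add a long-range shortcut
    return A

def run_reservoir(x, A, W_in, T=200):
    """Izhikevich neurons driven by a static input x; returns spike counts."""
    n = A.shape[0]
    a, b, c, d = 0.02, 0.2, -65.0, 8.0           # regular-spiking parameters
    v = np.full(n, -65.0)
    u = b * v
    fired = np.zeros(n, dtype=bool)
    counts = np.zeros(n)
    I_in = 8.0 * (W_in @ x)                      # constant input current
    for _ in range(T):
        I = I_in + 5.0 * (A @ fired)             # recurrent synaptic drive
        for _ in range(2):                       # two 0.5 ms half-steps (stability)
            v = np.minimum(v + 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I), 30.0)
        u += a * (b * v - u)
        fired = v >= 30.0
        counts += fired
        v[fired], u[fired] = c, u[fired] + d     # spike reset
    return counts

def train_readout(states, labels, n_classes=10, lam=1e-3):
    """Ridge-regularised linear-regression readout (one of the methods above)."""
    S = np.hstack([states, np.ones((len(states), 1))])
    Y = np.eye(n_classes)[labels]                # one-hot targets
    return np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ Y)

# Toy usage with random data standing in for flattened 28x28 MNIST digits.
n_neurons = 200
A = small_world_adjacency(n_neurons)
W_in = rng.normal(0.0, 1.0 / 28.0, (n_neurons, 784))
X = rng.random((50, 784))
y = rng.integers(0, 10, 50)
states = np.array([run_reservoir(x, A, W_in) for x in X])
W_out = train_readout(states, y)
pred = np.argmax(np.hstack([states, np.ones((50, 1))]) @ W_out, axis=1)
```

In the paper the connectivity comes from the actual molecular structure and the inputs are spike-encoded images; the surrogate graph, the random projection and the parameter values above are illustrative assumptions only.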
Related papers
- Event-Driven Implementation of a Physical Reservoir Computing Framework for superficial EMG-based Gesture Recognition [2.222098162797332]
This paper explores a novel neuromorphic implementation approach for gesture recognition by extracting spiking information from surface electromyography (sEMG) data in an event-driven manner.
The network was designed by implementing a simple-structured and hardware-friendly Physical Reservoir Computing framework called the Rotating Neuron Reservoir (RNR) within the domain of spiking neural networks (SNNs).
The proposed system was validated on an open-access large-scale sEMG database and achieved average classification accuracies of 74.6% and 80.3%.
arXiv Detail & Related papers (2025-03-10T17:18:14Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Exploiting Large Neuroimaging Datasets to Create Connectome-Constrained Approaches for more Robust, Efficient, and Adaptable Artificial Intelligence [4.998666322418252]
We envision a pipeline to utilize large neuroimaging datasets, including maps of the brain.
We have developed a technique for discovery of repeated subcircuits, or motifs.
The team also analyzed circuitry for memory formation in the fruit fly connectome, enabling the design of a novel generative replay approach.
arXiv Detail & Related papers (2023-05-26T23:04:53Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Spike-based local synaptic plasticity: A survey of computational models and neuromorphic circuits [1.8464222520424338]
We review historical, bottom-up, and top-down approaches to modeling synaptic plasticity.
We identify computational primitives that can support low-latency and low-power hardware implementations of spike-based learning rules.
arXiv Detail & Related papers (2022-09-30T15:35:04Z)
- Scalable Nanophotonic-Electronic Spiking Neural Networks [3.9918594409417576]
Spiking neural networks (SNNs) provide a new computational paradigm capable of highly parallelized, real-time processing.
Photonic devices are ideal for the design of high-bandwidth, parallel architectures matching the SNN computational paradigm.
Co-integrated CMOS and SiPh technologies are well-suited to the design of scalable SNN computing architectures.
arXiv Detail & Related papers (2022-08-28T06:10:06Z)
- A photonic chip-based machine learning approach for the prediction of molecular properties [11.55177943027656]
Photonic chip technology offers an alternative platform for implementing neural networks with faster data processing and lower energy usage.
We demonstrate the capability of photonic neural networks in predicting the quantum mechanical properties of molecules.
Our work opens the avenue for harnessing photonic technology for large-scale machine learning applications in molecular sciences.
arXiv Detail & Related papers (2022-03-03T03:15:14Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- An error-propagation spiking neural network compatible with neuromorphic processors [2.432141667343098]
We present a spike-based learning method that approximates back-propagation using local weight update mechanisms.
We introduce a network architecture that enables synaptic weight update mechanisms to back-propagate error signals.
This work represents a first step towards the design of ultra-low power mixed-signal neuromorphic processing systems.
arXiv Detail & Related papers (2021-04-12T07:21:08Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction at the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston house-price prediction and the training of a 2-layer neural network for MNIST digit recognition (see the sketch after this list).
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)
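As a side note on the "One-step regression and classification with crosspoint resistive memory arrays" entry above: the crosspoint array physically settles to the least-squares solution of a linear system in a single analog step. Below is a minimal digital stand-in for that computation on synthetic placeholder data (the feature matrix, noise level, and tolerance are illustrative assumptions, not results from the cited paper).

```python
# Digital stand-in for the one-step analog computation: the crosspoint array
# settles to the least-squares weights w of X w ~ y in a single physical step;
# here the equivalent solution is obtained in one linear-algebra call.
# The data below are synthetic placeholders, not results from the cited paper.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((100, 13))                  # toy features (13, as in Boston housing)
w_true = rng.normal(size=13)
y = X @ w_true + 0.01 * rng.normal(size=100)

w = np.linalg.lstsq(X, y, rcond=None)[0]   # "one step": no iterative training loop
print(np.allclose(w, w_true, atol=0.05))   # approximately recovers the generator
```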
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.