The Mechanical Neural Network (MNN) -- A physical implementation of a
multilayer perceptron for education and hands-on experimentation
- URL: http://arxiv.org/abs/2207.07482v1
- Date: Fri, 15 Jul 2022 14:05:44 GMT
- Title: The Mechanical Neural Network (MNN) -- A physical implementation of a
multilayer perceptron for education and hands-on experimentation
- Authors: Axel Schaffland
- Abstract summary: This model is used in education to give students hands-on experience and let them observe the effect of changing the network's parameters on its output.
The MNN can model real-valued functions and logical operators, including XOR.
- Score: 1.1802674324027231
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper the Mechanical Neural Network (MNN) is introduced, a
physical implementation of a multilayer perceptron (MLP) with ReLU activation
functions, two input neurons, four hidden neurons, and two output neurons. This
physical model of an MLP is used in education to give students hands-on
experience and let them observe the effect of changing the network's parameters
on its output. Neurons are small wooden levers connected by threads. Students
can adapt the weights between the neurons by moving the clamps that connect a
neuron via a thread to the next neuron. The MNN can model real-valued functions
and logical operators, including XOR.
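The 2-4-2 ReLU architecture described in the abstract can be sketched in a few lines of code. The weights below are hand-chosen for illustration (they are not the paper's physical clamp settings): two hidden neurons compute relu(x1 + x2) and relu(x1 + x2 - 1), and the two outputs combine them to realize XOR and AND, showing the kind of function the MNN can represent.

```python
def relu(z):
    """ReLU activation, as used by the MNN's lever neurons."""
    return max(z, 0.0)

# Hand-chosen weights (illustrative, not the paper's settings).
# Hidden layer: h1 = relu(x1 + x2), h2 = relu(x1 + x2 - 1);
# the remaining two hidden neurons are unused (zero weights).
W1 = [(1.0, 1.0), (1.0, 1.0), (0.0, 0.0), (0.0, 0.0)]  # 4 hidden x 2 inputs
b1 = [0.0, -1.0, 0.0, 0.0]
# Output layer: y1 = h1 - 2*h2 implements XOR, y2 = h2 implements AND.
W2 = [(1.0, -2.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0)]     # 2 outputs x 4 hidden

def mnn_forward(x1, x2):
    """Forward pass through the 2-4-2 MLP with ReLU hidden units."""
    h = [relu(w[0] * x1 + w[1] * x2 + b) for w, b in zip(W1, b1)]
    return [sum(wi * hi for wi, hi in zip(row, h)) for row in W2]

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    y = mnn_forward(*x)
    print(x, "XOR:", y[0], "AND:", y[1])
```

On the physical model, adjusting a clamp position corresponds to changing one entry of `W1` or `W2`; students can verify, as here, that a single set of weights lets one network compute both XOR and AND on its two outputs.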
Related papers
- State-Space Model Inspired Multiple-Input Multiple-Output Spiking Neurons [3.2443914909457594]
In spiking neural networks (SNNs), the main unit of information processing is the neuron with an internal state.
We propose a general multiple-input multiple-output (MIMO) spiking neuron model.
We show that for SNNs with a small number of neurons with large internal state spaces, significant performance gains may be obtained by increasing the number of output channels of a neuron.
arXiv Detail & Related papers (2025-04-03T13:55:11Z) - A survey on learning models of spiking neural membrane systems and spiking neural networks [0.0]
Spiking neural networks (SNN) are a biologically inspired model of neural networks with certain brain-like properties.
In SNN, communication between neurons takes place through the spikes and spike trains.
Spiking neural P systems (SNPS) can be considered a branch of SNN based more on the principles of formal automata.
arXiv Detail & Related papers (2024-03-27T14:26:41Z) - Single Neuromorphic Memristor closely Emulates Multiple Synaptic
Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z) - Deep Pulse-Coupled Neural Networks [31.65350290424234]
Spiking Neural Networks (SNNs) capture the information processing mechanism of the brain by taking advantage of spiking neurons.
In this work, we leverage a more biologically plausible neural model with complex dynamics, i.e., a pulse-coupled neural network (PCNN)
We construct deep pulse-coupled neural networks (DPCNNs) by replacing commonly used LIF neurons in SNNs with PCNN neurons.
arXiv Detail & Related papers (2023-12-24T08:26:00Z) - Co-learning synaptic delays, weights and adaptation in spiking neural
networks [0.0]
Spiking neural networks (SNN) distinguish themselves from artificial neural networks (ANN) because of their inherent temporal processing and spike-based computations.
We show that data processing with spiking neurons can be enhanced by co-learning the connection weights with two other biologically inspired neuronal features.
arXiv Detail & Related papers (2023-09-12T09:13:26Z) - Toward stochastic neural computing [11.955322183964201]
We propose a theory of neural computing in which streams of noisy inputs are transformed and processed through populations of spiking neurons.
We demonstrate the application of our method to Intel's Loihi neuromorphic hardware.
arXiv Detail & Related papers (2023-05-23T12:05:35Z) - POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with
Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchy populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z) - Continuous Learning and Adaptation with Membrane Potential and
Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z) - Neuroevolution of a Recurrent Neural Network for Spatial and Working
Memory in a Simulated Robotic Environment [57.91534223695695]
We evolved weights in a biologically plausible recurrent neural network (RNN) using an evolutionary algorithm to replicate the behavior and neural activity observed in rats.
Our method demonstrates how the dynamic activity in evolved RNNs can capture interesting and complex cognitive behavior.
arXiv Detail & Related papers (2021-02-25T02:13:52Z) - Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP (McCulloch-Pitts) model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.