Visual Pattern Recognition with On-chip Learning: towards a Fully
Neuromorphic Approach
- URL: http://arxiv.org/abs/2008.03470v1
- Date: Sat, 8 Aug 2020 08:07:36 GMT
- Title: Visual Pattern Recognition with On-chip Learning: towards a Fully
Neuromorphic Approach
- Authors: Sandro Baumgartner, Alpha Renner, Raphaela Kreiser, Dongchen Liang,
Giacomo Indiveri, Yulia Sandamirskaya
- Abstract summary: We present a spiking neural network (SNN) for visual pattern recognition with on-chip learning on neuromorphic hardware.
We show how this network can learn simple visual patterns composed of horizontal and vertical bars sensed by a Dynamic Vision Sensor.
During recognition, the network classifies the pattern's identity while at the same time estimating its location and scale.
- Score: 10.181725314550823
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a spiking neural network (SNN) for visual pattern recognition with
on-chip learning on neuromorphic hardware. We show how this network can learn
simple visual patterns composed of horizontal and vertical bars sensed by a
Dynamic Vision Sensor, using a local spike-based plasticity rule. During
recognition, the network classifies the pattern's identity while at the same
time estimating its location and scale. We build on previous work that used
learning with neuromorphic hardware in the loop, and we demonstrate that the
proposed network operates properly with on-chip learning, yielding a complete
neuromorphic pattern learning and recognition setup. Our results show
that the network is robust against noise on the input (no accuracy drop when
adding 130% noise) and against up to 20% noise in the neuron parameters.
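
As a rough illustration of what a local spike-based plasticity rule looks like, the NumPy sketch below implements a generic trace-based Hebbian update in which each synapse sees only its own presynaptic spike trace and the postsynaptic spike. All sizes, time constants, and the soft-bound form are illustrative assumptions; the rule and parameters used on the actual neuromorphic chip differ.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 64, 4          # hypothetical sizes: DVS input channels, pattern classes
w = rng.uniform(0.0, 0.1, size=(n_out, n_in))   # plastic weights
trace = np.zeros(n_in)       # presynaptic spike trace (local eligibility)

tau = 20.0    # trace time constant in steps (assumed value)
lr = 0.01     # learning rate (assumed value)
w_max = 1.0   # soft weight bound

def step(pre_spikes, post_spikes, dt=1.0):
    """One simulation step of a local, spike-driven weight update.

    The update uses only quantities local to each synapse:
    the presynaptic trace and the postsynaptic spike.
    """
    global trace, w
    trace += -dt / tau * trace + pre_spikes      # leaky trace of pre activity
    # potentiate synapses whose pre trace is high when the post neuron fires,
    # with a soft bound and a small decay term keeping weights in range
    dw = lr * np.outer(post_spikes, trace) * (w_max - w) \
         - lr * 0.1 * w * post_spikes[:, None]
    w += dw

# drive with random spikes for a few steps
for _ in range(100):
    step((rng.random(n_in) < 0.05).astype(float),
         (rng.random(n_out) < 0.02).astype(float))
print(w.mean())
```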
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
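The dependence of learned representations on the nonlinearity can be probed empirically. The sketch below trains the same one-hidden-layer network twice, once with ReLU and once with tanh, on a toy task and compares the hidden representations with linear CKA. CKA is a common similarity probe and an assumption here, not the paper's own (analytical) methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_one_hidden(X, y, act, dact, hidden=200, lr=0.05, steps=500):
    """Full-batch gradient descent on a one-hidden-layer net, squared loss."""
    W1 = rng.normal(0, 1 / np.sqrt(X.shape[1]), size=(X.shape[1], hidden))
    w2 = rng.normal(0, 1 / np.sqrt(hidden), size=hidden)
    for _ in range(steps):
        pre = X @ W1
        H = act(pre)
        out = H @ w2
        d_out = 2 * (out - y) / len(y)
        g_H = np.outer(d_out, w2)              # grad w.r.t. hidden activations
        w2 -= lr * H.T @ d_out
        W1 -= lr * X.T @ (g_H * dact(pre))
    return act(X @ W1)

def linear_cka(A, B):
    """Linear CKA similarity between two representation matrices."""
    A = A - A.mean(0); B = B - B.mean(0)
    return np.linalg.norm(B.T @ A) ** 2 / (
        np.linalg.norm(A.T @ A) * np.linalg.norm(B.T @ B))

# toy binary task
X = rng.normal(size=(200, 10))
y = np.sign(X[:, 0] * X[:, 1] + 0.1)

H_relu = train_one_hidden(X, y, lambda z: np.maximum(z, 0),
                          lambda z: (z > 0).astype(float))
H_tanh = train_one_hidden(X, y, np.tanh, lambda z: 1 - np.tanh(z) ** 2)
print("CKA(relu, tanh) = %.3f" % linear_cka(H_relu, H_tanh))
```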
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
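A minimal sketch of the two mechanisms named in the title, assuming a generic formulation rather than the paper's specific brain-like model: an Oja-style Hebbian weight update on the active synapses, plus a structural step that prunes the weakest connections and regrows new ones at random empty sites.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden = 100, 16     # hypothetical layer sizes
w = rng.normal(0, 0.01, size=(n_hidden, n_in))
mask = rng.random((n_hidden, n_in)) < 0.2   # sparse connectivity (structural state)

def hebbian_step(x, lr=0.05):
    """Oja-style Hebbian update on the active synapses only (local and unsupervised)."""
    global w
    y = (w * mask) @ x
    w += lr * (np.outer(y, x) - (y ** 2)[:, None] * w) * mask

def structural_step(prune_frac=0.05):
    """Structural plasticity: prune the weakest active synapses and
    regrow the same number at randomly chosen empty sites."""
    global mask, w
    active = np.argwhere(mask)
    k = max(1, int(prune_frac * len(active)))
    strengths = np.abs(w[mask])                 # same row-major order as argwhere
    weakest = active[np.argsort(strengths)[:k]]
    mask[weakest[:, 0], weakest[:, 1]] = False
    empty = np.argwhere(~mask)
    regrow = empty[rng.choice(len(empty), size=k, replace=False)]
    mask[regrow[:, 0], regrow[:, 1]] = True
    w[regrow[:, 0], regrow[:, 1]] = rng.normal(0, 0.01, size=k)

for step in range(200):
    hebbian_step(rng.random(n_in))
    if step % 50 == 49:
        structural_step()
```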
arXiv Detail & Related papers (2024-06-07T08:32:30Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
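The core data structure is easy to sketch: each neuron becomes a graph node (carrying its bias) and each weight becomes an edge feature. The encoding below is a hypothetical, generic featurization for a two-layer MLP; the paper's actual graph construction and symmetry handling are more involved.

```python
import numpy as np

def mlp_to_graph(weights, biases):
    """Encode an MLP as a graph: one node per neuron, one edge per weight.

    weights: list of (fan_out, fan_in) matrices; biases: list of vectors.
    Returns (edge_index, edge_feat, node_feat) in a generic format a GNN
    could consume.
    """
    sizes = [weights[0].shape[1]] + [b.shape[0] for b in biases]
    offsets = np.cumsum([0] + sizes)          # node-id offset per layer
    node_feat = np.zeros(offsets[-1])
    edges, edge_feat = [], []
    for layer, (W, b) in enumerate(zip(weights, biases)):
        node_feat[offsets[layer + 1]:offsets[layer + 2]] = b
        for j in range(W.shape[0]):           # target neuron
            for i in range(W.shape[1]):       # source neuron
                edges.append((offsets[layer] + i, offsets[layer + 1] + j))
                edge_feat.append(W[j, i])
    return np.array(edges).T, np.array(edge_feat), node_feat

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
bs = [rng.normal(size=4), rng.normal(size=2)]
edge_index, edge_feat, node_feat = mlp_to_graph(Ws, bs)
print(edge_index.shape, edge_feat.shape, node_feat.shape)  # (2, 20) (20,) (9,)
```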
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
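For context, the procedure being manipulated is standard feature visualization by activation maximization, sketched below on a stand-in model: gradient ascent on an input image to maximize one neuron's activation. The model, step count, and clamping range are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# tiny stand-in model; the manipulation in the paper targets real trained networks
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
model.eval()

def feature_visualization(model, neuron=0, steps=200, lr=0.1):
    """Activation maximization: gradient-ascend an input image to maximize
    one output neuron. This is the FV procedure that 'gradient slingshot'
    training is designed to mislead."""
    x = torch.randn(1, 3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, neuron]     # negative activation -> ascent
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(-2, 2)             # keep the image in a plausible range
    return x.detach()

vis = feature_visualization(model)
print(vis.shape)
```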
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - Understanding Activation Patterns in Artificial Neural Networks by
Exploring Stochastic Processes [0.0]
We propose utilizing the framework of stochastic processes, which has been underutilized thus far.
We focus solely on activation frequency, leveraging neuroscience techniques used for real neuron spike trains.
We derive parameters describing activation patterns in each network, revealing consistent differences across architectures and training sets.
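A minimal sketch of the activation-frequency statistic, under the assumption that "active" simply means "above a threshold" on a given input; the paper's stochastic-process modeling goes well beyond this summary statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation_frequency(pre, threshold=0.0):
    """Fraction of inputs on which each unit is 'active' (above threshold),
    treating unit activations like spike trains."""
    return (pre > threshold).mean(axis=0)

# pre-activations of two hypothetical networks on the same 1000 inputs
X = rng.normal(size=(1000, 50))
net_a = X @ rng.normal(size=(50, 32))          # zero-mean units
net_b = X @ rng.normal(size=(50, 32)) - 1.0    # negatively biased, sparser units

for name, pre in [("net_a", net_a), ("net_b", net_b)]:
    freq = activation_frequency(pre)
    print("%s: mean freq %.2f, std %.2f" % (name, freq.mean(), freq.std()))
```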
arXiv Detail & Related papers (2023-08-01T22:12:30Z) - NeRN -- Learning Neural Representations for Neural Networks [3.7384109981836153]
We show that, when adapted correctly, neural representations can be used to represent the weights of a pre-trained convolutional neural network.
Inspired by coordinate inputs of previous neural representation methods, we assign a coordinate to each convolutional kernel in our network.
We present two applications using NeRN, demonstrating the capabilities of the learned representations.
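A hedged sketch of the idea: a small MLP (the hypothetical `KernelPredictor` below) maps a (layer, filter, channel) coordinate to the nine weights of a 3x3 kernel and is fit by regression to the target kernels. A random tensor stands in for a pretrained network; the coordinate encoding and losses in the actual paper differ.

```python
import torch
import torch.nn as nn

class KernelPredictor(nn.Module):
    """Hypothetical NeRN-style predictor: (layer, filter, channel) -> 3x3 kernel."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 9))
    def forward(self, coords):           # coords: (N, 3)
        return self.net(coords).view(-1, 3, 3)

# regress predicted kernels onto one conv layer's kernels (random stand-in)
target = torch.randn(16, 8, 3, 3)        # (filters, channels, 3, 3)
f_idx, c_idx = torch.meshgrid(torch.arange(16), torch.arange(8), indexing="ij")
coords = torch.stack([torch.zeros(16 * 8),              # layer index 0
                      f_idx.flatten().float(),
                      c_idx.flatten().float()], dim=1)

model = KernelPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(coords) - target.view(-1, 3, 3)) ** 2).mean()
    loss.backward()
    opt.step()
print(loss.item())
```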
arXiv Detail & Related papers (2022-12-27T17:14:44Z) - How and what to learn:The modes of machine learning [7.085027463060304]
We propose a new approach, namely the weight pathway analysis (WPA), to study the mechanism of multilayer neural networks.
WPA shows that a neural network stores and utilizes information in a "holographic" way, that is, the network encodes all training samples in a coherent structure.
It is found that hidden-layer neurons self-organize into different classes in the later stages of the learning process.
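Under a simplified linear reading of WPA, a pathway's strength is the product of the weights along it, and summing pathways over the hidden units recovers the end-to-end map. The sketch below computes and ranks these pathway products for a two-layer net; it ignores nonlinearities, which the actual analysis must account for.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 8))    # hidden x input
W2 = rng.normal(size=(3, 5))    # output x hidden

def weight_pathways(W1, W2):
    """Strength of each input->hidden->output pathway as the product of
    the weights along it. Returns P[k, j, i] for output k, hidden j, input i."""
    return W2[:, :, None] * W1[None, :, :]

P = weight_pathways(W1, W2)
print(P.shape)                                   # (3, 5, 8)
# summing over the hidden index recovers the end-to-end linear map
assert np.allclose(P.sum(axis=1), W2 @ W1)
# rank pathways by magnitude to see which routes carry the most weight
k, j, i = np.unravel_index(np.abs(P).argmax(), P.shape)
print(f"strongest pathway: input {i} -> hidden {j} -> output {k}")
```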
arXiv Detail & Related papers (2022-02-28T14:39:06Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
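Locality of learned receptive fields can be quantified; one standard measure, used as an assumption here rather than taken from the paper, is the inverse participation ratio (IPR), which is near 1/d for a delocalized field and near 1 for a field concentrated on a single pixel.

```python
import numpy as np

def ipr(w):
    """Inverse participation ratio of a receptive field:
    ~1/d when weight is spread over all d pixels, ~1 when concentrated."""
    p = w ** 2 / np.sum(w ** 2)
    return np.sum(p ** 2)

rng = np.random.default_rng(0)
d = 100
dense = rng.normal(size=d)                                 # delocalized field
local = np.exp(-0.5 * ((np.arange(d) - 50) / 3.0) ** 2)    # localized bump
print("dense IPR %.4f, localized IPR %.4f" % (ipr(dense), ipr(local)))
```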
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Artificial Neural Variability for Deep Learning: On Overfitting, Noise
Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV plays as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
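One simple way to emulate neural variability, sketched below as an assumption rather than the paper's exact neural-variable risk minimization scheme, is a linear layer that adds Gaussian noise to its output during training only, so inference stays deterministic.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer whose output is perturbed by Gaussian noise during
    training only -- a crude stand-in for neural variability."""
    def __init__(self, d_in, d_out, sigma=0.1):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.sigma = sigma
    def forward(self, x):
        y = self.lin(x)
        if self.training:
            y = y + self.sigma * torch.randn_like(y)
        return y

model = nn.Sequential(NoisyLinear(20, 64), nn.ReLU(), NoisyLinear(64, 2))
x = torch.randn(8, 20)
model.train(); print(model(x).std().item())   # noisy forward pass
model.eval();  print(model(x).std().item())   # deterministic at test time
```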
arXiv Detail & Related papers (2020-11-12T06:06:33Z) - The FaceChannel: A Fast & Furious Deep Neural Network for Facial
Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a light-weight neural network that has far fewer parameters than common deep neural networks.
We demonstrate how our model achieves a comparable, if not better, performance to the current state-of-the-art in FER.
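A stand-in illustration of the light-weight point: the deliberately small CNN below (not the actual FaceChannel architecture) comes in at roughly 24k parameters, orders of magnitude below typical deep FER backbones.

```python
import torch
import torch.nn as nn

class LightFER(nn.Module):
    """A deliberately small CNN in the spirit of the FaceChannel
    (the exact architecture differs; this is an illustrative stand-in)."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, n_classes)
    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = LightFER()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")   # tens of thousands, not millions
x = torch.randn(4, 1, 64, 64)       # batch of grayscale face crops
print(model(x).shape)               # (4, 8)
```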
arXiv Detail & Related papers (2020-09-15T09:25:37Z)