Hyperdimensional Computing with Spiking-Phasor Neurons
- URL: http://arxiv.org/abs/2303.00066v1
- Date: Tue, 28 Feb 2023 20:09:12 GMT
- Title: Hyperdimensional Computing with Spiking-Phasor Neurons
- Authors: Jeff Orchard, Russell Jarvis
- Abstract summary: Vector Symbolic Architectures (VSAs) are a powerful framework for representing compositional reasoning.
We run VSA algorithms on a substrate of spiking neurons that could be run efficiently on neuromorphic hardware.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Vector Symbolic Architectures (VSAs) are a powerful framework for
representing compositional reasoning. They lend themselves to neural-network
implementations, allowing us to create neural networks that can perform
cognitive functions, like spatial reasoning, arithmetic, symbol binding, and
logic. But the vectors involved can be quite large, hence the alternative label
Hyperdimensional (HD) computing. Advances in neuromorphic hardware hold the
promise of reducing the running time and energy footprint of neural networks by
orders of magnitude. In this paper, we extend some pioneering work to run VSA
algorithms on a substrate of spiking neurons that could be run efficiently on
neuromorphic hardware.
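To make the VSA operations concrete, here is a minimal sketch of the phasor flavor of HD computing that spiking-phasor neurons target: each vector component is a unit-magnitude complex number, binding adds phases, and a spike's timing within a cycle can encode a component's phase. The helper names and the dimensionality are illustrative choices, not the paper's.
```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096  # hypervector dimensionality (illustrative)

def random_phasor(d=D):
    # One unit-magnitude complex number per dimension; a spiking-phasor
    # neuron could encode each phase as a spike time within a cycle.
    return np.exp(1j * rng.uniform(0, 2 * np.pi, d))

def bind(a, b):
    return a * b  # element-wise product: phases add

def unbind(a, b):
    return a * np.conj(b)  # conjugate product: phases subtract

def bundle(*vs):
    s = np.sum(vs, axis=0)
    return s / np.abs(s)  # renormalize each component to unit magnitude

def sim(a, b):
    # Normalized inner product: ~1 for equal vectors, ~0 for random pairs.
    return np.real(np.vdot(a, b)) / len(a)

role, filler = random_phasor(), random_phasor()
trace = bundle(bind(role, filler), random_phasor())  # noisy composite
print(sim(unbind(trace, role), filler))           # well above chance
print(sim(unbind(trace, role), random_phasor()))  # near zero
```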
Related papers
- Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning [91.29876772547348]
Spiking neural networks (SNNs) are investigated as biologically inspired models of neural computation.
This paper reveals that SNNs, when amalgamated with synaptic delay and temporal coding, are proficient in executing (knowledge) graph reasoning.
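One classic illustration of why synaptic delays plus temporal coding suit graph reasoning (a generic sketch, not necessarily this paper's construction): if each edge is a synaptic delay and a neuron spikes on its first input, first-spike times equal shortest-path distances.
```python
import heapq

# Toy event-driven spiking net: each directed edge carries a synaptic
# delay, and a neuron spikes the first time any input arrives. The
# first-spike times from a stimulated neuron are exactly its
# shortest-path distances (an event-driven Dijkstra).
delays = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 2.0), ("d", 6.0)],
    "c": [("d", 3.0)],
    "d": [],
}

def first_spike_times(start):
    times, queue = {}, [(0.0, start)]
    while queue:
        t, n = heapq.heappop(queue)
        if n in times:
            continue  # already spiked; later arrivals are ignored
        times[n] = t
        for m, d in delays[n]:
            heapq.heappush(queue, (t + d, m))
    return times

print(first_spike_times("a"))  # {'a': 0.0, 'b': 1.0, 'c': 3.0, 'd': 6.0}
```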
arXiv Detail & Related papers (2024-05-27T05:53:30Z) - Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on tasks where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it into a continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
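A hedged sketch of the firing-rate decoding scheme the authors contrast against, with hypothetical shapes and random stand-in weights:
```python
import numpy as np

rng = np.random.default_rng(0)
T, n_out, n_act = 100, 32, 4   # timesteps, output neurons, action dims

# Hypothetical 0/1 spike trains from an SNN policy's output layer.
spikes = rng.random((T, n_out)) < 0.2

# Firing-rate decoding: average spikes over the window, then map the
# rates to a continuous action through a fully-connected layer.
rates = spikes.mean(axis=0)               # shape (n_out,)
W = rng.normal(0.0, 0.1, (n_act, n_out))  # stand-in learned weights
action = np.tanh(W @ rates)               # bounded deterministic action
print(action)
```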
arXiv Detail & Related papers (2024-01-09T07:31:34Z) - SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
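For orientation, a minimal usage sketch, assuming the activation_based API of recent SpikingJelly releases:
```python
import torch
from spikingjelly.activation_based import neuron, functional

lif = neuron.LIFNode(tau=2.0)         # a layer of LIF neurons
x = torch.rand(8, 16)                 # constant input current, batch of 8

spikes = [lif(x) for _ in range(20)]  # step the neurons through 20 timesteps
rate = torch.stack(spikes).mean()     # overall firing rate
functional.reset_net(lif)             # clear membrane state between samples
print(rate.item())
```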
arXiv Detail & Related papers (2023-10-25T13:15:17Z) - How can neuromorphic hardware attain brain-like functional capabilities? [0.6345523830122166]
Current neuromorphic hardware employs brain-like spiking neurons instead of standard artificial neurons.
Current architectures and training methods for networks of spiking neurons in NMHW are largely copied from artificial neural networks.
We need to focus on principles that are both easy to implement in NMHW and are likely to support brain-like functionality.
arXiv Detail & Related papers (2023-10-25T08:09:52Z) - Sequence learning in a spiking neuronal network with memristive synapses [0.0]
A core concept that lies at the heart of brain computation is sequence learning and prediction.
Neuromorphic hardware emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate.
We study the feasibility of using ReRAM devices as a replacement for the biological synapses in the sequence learning model.
arXiv Detail & Related papers (2022-11-29T21:07:23Z) - Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
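The storage layout being optimized looks roughly like this (a sketch; the paper learns the indices end-to-end with a vector-quantized auto-decoder, whereas the nearest-neighbour assignment below is only a stand-in):
```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, dim, n_codes = 10_000, 16, 256

# A dense feature grid, as used by neural fields: one vector per cell.
grid = rng.normal(size=(n_cells, dim)).astype(np.float32)

# Compressed form: a small codebook plus one byte of index per cell.
codebook = rng.normal(size=(n_codes, dim)).astype(np.float32)
d2 = ((grid ** 2).sum(1, keepdims=True)
      - 2.0 * grid @ codebook.T
      + (codebook ** 2).sum(1))
indices = d2.argmin(axis=1).astype(np.uint8)

decoded = codebook[indices]  # features reconstructed at lookup time
ratio = grid.nbytes / (codebook.nbytes + indices.nbytes)
print(f"compression: {ratio:.1f}x")  # ~24x in this toy configuration
```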
arXiv Detail & Related papers (2022-06-15T17:58:34Z) - Gluing Neural Networks Symbolically Through Hyperdimensional Computing [8.209945970790741]
We explore the notion of using binary hypervectors to encode the final, classifying output signals of neural networks.
This allows multiple neural networks to work together to solve a problem, with little additional overhead.
We find that this outperforms the state of the art, or is on a par with it, while using very little overhead.
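A minimal sketch of the idea: each network's output is mapped to a binary hypervector, the hypervectors are combined by bitwise majority, and classification is by nearest class prototype. The encoding below is a simplification, not the paper's exact scheme.
```python
import numpy as np

rng = np.random.default_rng(0)
D, n_classes = 10_000, 10

# One random binary prototype hypervector per class.
protos = rng.integers(0, 2, (n_classes, D), dtype=np.uint8)

def encode(probs):
    # Encode one network's soft output as a probability-weighted,
    # per-bit majority vote over the class prototypes.
    return (probs @ protos > 0.5).astype(np.uint8)

def glue(*hvs):
    # Combine several networks' hypervectors by bitwise majority.
    return (np.mean(hvs, axis=0) > 0.5).astype(np.uint8)

def classify(hv):
    # Nearest prototype by Hamming distance.
    return int(np.argmin((protos != hv).sum(axis=1)))

net_a = np.array([0.1, 0.6, 0.3] + [0.0] * 7)    # two nets, same input
net_b = np.array([0.2, 0.55, 0.25] + [0.0] * 7)
print(classify(glue(encode(net_a), encode(net_b))))  # -> 1
```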
arXiv Detail & Related papers (2022-05-31T04:44:02Z) - Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z) - A Robust Learning Rule for Soft-Bounded Memristive Synapses Competitive with Supervised Learning in Standard Spiking Neural Networks [0.0]
A view in theoretical neuroscience sees the brain as a function-computing device.
Being able to approximate functions is thus a fundamental capability to build upon for future brain research.
In this work we apply a novel supervised learning algorithm - based on controlling niobium-doped strontium titanate memristive synapses - to learning non-trivial multidimensional functions.
arXiv Detail & Related papers (2022-04-12T10:21:22Z) - Exposing Hardware Building Blocks to Machine Learning Frameworks [4.56877715768796]
We focus on how to design topologies that complement a view of neurons as unique functions.
We develop a library that supports training a neural network with custom sparsity and quantization.
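A toy example of the kind of constraint such a library exposes, with a fixed sparsity mask and uniformly quantized weights (names and bit width are illustrative):
```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits=4):
    # Uniform symmetric quantization to 2**(bits-1) - 1 levels per sign.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

# A linear layer with a fixed sparsity mask and quantized weights,
# the kind of constraint hardware-aware training must respect.
w = rng.normal(size=(8, 16))
mask = rng.random(w.shape) < 0.25   # keep roughly 25% of connections
w_hw = quantize(w * mask)

x = rng.normal(size=16)
print(w_hw @ x)                     # forward pass under the constraints
```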
arXiv Detail & Related papers (2020-04-10T14:26:00Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
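Why a single neuron with a non-monotonic activation can solve XOR (a generic Gaussian bump stands in here for the paper's ADA):
```python
import numpy as np

def bump(s):
    # Non-monotonic "bump" activation (stand-in for the paper's ADA):
    # responds maximally near s == 0 and decays on both sides.
    return np.exp(-s ** 2)

# One neuron with weights [1, 1] and bias -1: the pre-activation is
# x1 + x2 - 1, which is 0 exactly for the two XOR-true inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
s = X @ np.array([1.0, 1.0]) - 1.0
print((bump(s) > 0.5).astype(int))  # [0 1 1 0] == XOR
```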
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.