Gluing Neural Networks Symbolically Through Hyperdimensional Computing
- URL: http://arxiv.org/abs/2205.15534v1
- Date: Tue, 31 May 2022 04:44:02 GMT
- Title: Gluing Neural Networks Symbolically Through Hyperdimensional Computing
- Authors: Peter Sutor, Dehao Yuan, Douglas Summers-Stay, Cornelia Fermüller,
Yiannis Aloimonos
- Abstract summary: We explore the notion of using binary hypervectors to encode the final, classifying output signals of neural networks.
This allows multiple neural networks to work together to solve a problem, with little additional overhead.
We find that this outperforms the state of the art, or is on a par with it, while using very little overhead.
- Score: 8.209945970790741
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperdimensional Computing affords simple, yet powerful operations to create
long Hyperdimensional Vectors (hypervectors) that can efficiently encode
information, be used for learning, and are dynamic enough to be modified on the
fly. In this paper, we explore the notion of using binary hypervectors to
directly encode the final, classifying output signals of neural networks in
order to fuse differing networks together at the symbolic level. This allows
multiple neural networks to work together to solve a problem, with little
additional overhead. Output signals just before classification are encoded as
hypervectors and bundled together through consensus summation to train a
classification hypervector. This process can be performed iteratively and even
on single neural networks by instead making a consensus of multiple
classification hypervectors. We find that this outperforms the state of the
art, or is on a par with it, while using very little overhead, as hypervector
operations are extremely fast and efficient in comparison to the neural
networks. This consensus process can learn online and even grow or lose models
in real time. Hypervectors act as memories that can be stored, and even further
bundled together over time, affording life long learning capabilities.
Additionally, this consensus structure inherits the benefits of
Hyperdimensional Computing, without sacrificing the performance of modern
Machine Learning. This technique can be extrapolated to virtually any neural
model and requires little modification to employ: one need only record the
output signals of the networks when they are presented with a test example.
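The encode-and-bundle pipeline described above lends itself to a compact illustration. The sketch below is a minimal example in NumPy under stated assumptions: it uses a random sign-projection encoder, majority-vote bundling, and Hamming-similarity classification as stand-ins for the paper's exact choices, and the names (encode, bundle, encode_example) and toy data are illustrative only, not the authors' implementation.

```python
import numpy as np

D = 10_000  # hypervector dimensionality
rng = np.random.default_rng(0)

def encode(signal, proj):
    """Encode a real-valued output signal as a binary hypervector via the
    sign of a random projection (a stand-in for the paper's encoder)."""
    return (proj @ signal > 0).astype(np.int8)      # vector in {0,1}^D

def bundle(hvs):
    """Consensus (majority) summation: each bit takes the majority vote.
    Ties fall to 0 here; random tie-breaking is also common."""
    votes = np.sum(hvs, axis=0)
    return (2 * votes > len(hvs)).astype(np.int8)

def hamming_similarity(a, b):
    return 1.0 - np.count_nonzero(a != b) / D

# Toy setup: two "networks" that each emit a 16-dim pre-classification signal.
signal_dim, n_classes = 16, 3
projections = [rng.standard_normal((D, signal_dim)) for _ in range(2)]

def encode_example(signals):
    """Fuse one example across networks by bundling their hypervectors."""
    return bundle([encode(s, P) for s, P in zip(signals, projections)])

# Train one classification hypervector per class by bundling its examples.
train = {c: [[rng.standard_normal(signal_dim) + c for _ in projections]
             for _ in range(50)]
         for c in range(n_classes)}
class_hvs = {c: bundle([encode_example(x) for x in xs])
             for c, xs in train.items()}

# Classify a query by the most similar class hypervector.
query = [rng.standard_normal(signal_dim) + 2 for _ in projections]
q_hv = encode_example(query)
print("predicted:", max(class_hvs,
                        key=lambda c: hamming_similarity(q_hv, class_hvs[c])))
```

Keeping the integer vote counts rather than only the thresholded bits makes the consensus easy to update online as examples or whole networks are added or removed, which is the lifelong-learning behaviour the abstract describes.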
Related papers
- Hyperdimensional Computing with Spiking-Phasor Neurons [0.9594432031144714]
Vector Symbolic Architectures (VSAs) are a powerful framework for representing compositional reasoning.
We run VSA algorithms on a substrate of spiking neurons that could be run efficiently on neuromorphic hardware.
arXiv Detail & Related papers (2023-02-28T20:09:12Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation? [41.58529335439799]
The backpropagation of error algorithm used to train deep neural networks has been fundamental to the successes of deep learning.
Recent work has developed the idea into a general-purpose algorithm able to train neural networks using only local computations.
We show that predictive coding networks are substantially more flexible than equivalent deep neural networks (a generic sketch of such local-update training appears after this list).
arXiv Detail & Related papers (2022-02-18T22:57:03Z)
- Artificial Neural Networks generated by Low Discrepancy Sequences [59.51653996175648]
We generate artificial neural networks as random walks on a dense network graph.
Such networks can be trained sparse from scratch, avoiding the expensive procedure of training a dense network and compressing it afterwards.
We demonstrate that the artificial neural networks generated by low discrepancy sequences can achieve an accuracy within reach of their dense counterparts at a much lower computational complexity.
arXiv Detail & Related papers (2021-03-05T08:45:43Z)
- ItNet: iterative neural networks with small graphs for accurate and efficient anytime prediction [1.52292571922932]
In this study, we introduce a class of network models that have a small memory footprint in terms of their computational graphs.
We show state-of-the-art results for semantic segmentation on the CamVid and Cityscapes datasets.
arXiv Detail & Related papers (2021-01-21T15:56:29Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Towards Understanding Hierarchical Learning: Benefits of Neural Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representations can achieve improved sample complexity compared with learning on the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z)
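As referenced in the predictive-coding entry above, the phrase "only local computations" can be made concrete with a small sketch. The NumPy example below implements one common formulation of predictive coding (iterative relaxation of layer activities to reduce local prediction errors, followed by Hebbian-style weight updates). It is a generic illustration under that assumption, not the specific algorithm of the surveyed paper, and names such as train_step, lr_x, and lr_w are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):  return np.tanh(x)
def df(x): return 1.0 - np.tanh(x) ** 2

# Tiny network: input -> hidden -> output.
sizes = [4, 8, 3]
W = [rng.standard_normal((sizes[l + 1], sizes[l])) * 0.1
     for l in range(len(sizes) - 1)]

def train_step(inp, target, lr_x=0.1, lr_w=0.01, n_inference=20):
    """One predictive-coding step: relax latent activities to minimise local
    prediction errors, then update each weight matrix using only the error
    and activity of the two layers it connects."""
    # Initialise activities with a feedforward pass, then clamp the output.
    x = [inp]
    for Wl in W:
        x.append(Wl @ f(x[-1]))
    x[-1] = target

    for _ in range(n_inference):
        # Local prediction errors: e[l] = x[l+1] - W[l] f(x[l]).
        e = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
        # Update hidden activities only (input and output stay clamped).
        for l in range(1, len(x) - 1):
            x[l] += lr_x * (-e[l - 1] + df(x[l]) * (W[l].T @ e[l]))

    # Hebbian-style weight updates from purely local quantities.
    e = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
    for l in range(len(W)):
        W[l] += lr_w * np.outer(e[l], f(x[l]))

# Toy usage: learn a fixed random linear mapping from inputs to targets.
teacher = rng.standard_normal((sizes[-1], sizes[0]))
for _ in range(200):
    inp = rng.standard_normal(sizes[0])
    train_step(inp, teacher @ inp)
```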