Hyperdimensional Computing vs. Neural Networks: Comparing Architecture and Learning Process
- URL: http://arxiv.org/abs/2207.12932v1
- Date: Sun, 24 Jul 2022 21:23:50 GMT
- Title: Hyperdimensional Computing vs. Neural Networks: Comparing Architecture and Learning Process
- Authors: Dongning Ma and Xun Jiao
- Abstract summary: We conduct a comparative study between HDC and neural networks, offering a different angle in which HDC can be derived from an extremely compact neural network trained upfront.
Experimental results show that such a neural-network-derived HDC model achieves accuracy improvements of up to 21% and 5% over conventional and learning-based HDC models, respectively.
- Score: 3.244375684001034
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperdimensional Computing (HDC) has attracted considerable attention as an
emerging non-von Neumann computing paradigm. Inspired by the way the human brain
functions, HDC leverages high-dimensional patterns to perform learning tasks.
Compared to neural networks, HDC offers advantages such as energy efficiency and
smaller model size, but sub-par learning capability in sophisticated applications.
Recently, researchers have observed that, when combined with neural network
components, HDC can achieve better performance than conventional HDC models. This
motivates us to explore the theoretical foundations of HDC more deeply, in
particular its connections to and differences from neural networks. In this paper,
we conduct a comparative study between HDC and neural networks, offering a
different angle in which HDC can be derived from an extremely compact neural
network trained upfront. Experimental results show that such a neural-network-derived
HDC model achieves accuracy improvements of up to 21% and 5% over conventional and
learning-based HDC models, respectively. This paper aims to provide more insight
and shed light on future directions for research on this popular emerging learning
scheme.
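
To make the comparison concrete, below is a minimal sketch of an HDC classifier in Python/NumPy: features are projected into a high-dimensional bipolar space, class prototypes are formed by bundling (summing) the training hypervectors of each class, and queries are classified by cosine similarity to the prototypes. The dimensionality, function names, and synthetic data are illustrative assumptions, not the paper's implementation; the `projection` hook only loosely mirrors the abstract's idea of deriving the HDC encoding from a compact trained network.

```python
import numpy as np

D = 10_000  # hypervector dimensionality (a typical HDC choice, assumed here)
rng = np.random.default_rng(0)

def make_encoder(n_features, projection=None):
    """Return an encoder mapping real-valued features to bipolar hypervectors.

    By default a fixed random projection is used (conventional HDC encoding).
    Passing `projection` (e.g. the weight matrix of a small trained linear
    layer) stands in, loosely, for a neural-network-derived encoding.
    """
    W = projection if projection is not None else rng.standard_normal((n_features, D))
    return lambda x: np.sign(x @ W)  # bipolar {-1, +1} hypervector

def train_hdc(encode, X, y, n_classes):
    """Bundle (sum) the encoded training samples into one prototype per class."""
    prototypes = np.zeros((n_classes, D))
    for xi, yi in zip(X, y):
        prototypes[yi] += encode(xi)
    return prototypes

def predict_hdc(encode, prototypes, X):
    """Classify queries by cosine similarity to the class prototypes."""
    H = np.array([encode(xi) for xi in X])
    P = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-12)
    return (H @ P.T).argmax(axis=1)

# Tiny synthetic demo: two Gaussian blobs with 16 features each.
X0 = rng.normal(-1.0, 1.0, size=(100, 16))
X1 = rng.normal(+1.0, 1.0, size=(100, 16))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

encode = make_encoder(n_features=16)
prototypes = train_hdc(encode, X, y, n_classes=2)
print("train accuracy:", (predict_hdc(encode, prototypes, X) == y).mean())
```

To experiment with the neural-network-derived direction the abstract describes, one could train a compact linear model and pass its weight matrix as `projection` (e.g. `make_encoder(16, projection=learned_W)`); whether this matches the paper's exact derivation is not claimed here.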
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z) - Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z) - Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - CQural: A Novel CNN based Hybrid Architecture for Quantum Continual Machine Learning [0.0]
We show that it is possible to circumvent catastrophic forgetting in continual learning with novel hybrid classical-quantum neural networks.
We also claim that if the model is trained with these explanations, it tends to give better performance and learn specific features that are far from the decision boundary.
arXiv Detail & Related papers (2023-05-16T18:19:12Z) - GraphHD: Efficient graph classification using hyperdimensional computing [58.720142291102135]
We present a baseline approach for graph classification with HDC.
We evaluate GraphHD on real-world graph classification problems.
Our results show that, when compared to state-of-the-art Graph Neural Networks (GNNs), the proposed model achieves comparable accuracy; a generic graph-to-hypervector encoding sketch is given after this list.
arXiv Detail & Related papers (2022-05-16T17:32:58Z) - Automated Architecture Search for Brain-inspired Hyperdimensional Computing [5.489173080636452]
This paper represents the first effort to explore automated architecture search for hyperdimensional computing (HDC).
The searched HDC architectures show competitive performance on case studies involving a drug discovery dataset and a language recognition task.
arXiv Detail & Related papers (2022-02-11T18:43:36Z) - Spiking Hyperdimensional Network: Neuromorphic Models Integrated with Memory-Inspired Framework [8.910420030964172]
We propose SpikeHD, the first framework that fundamentally combines spiking neural networks and hyperdimensional computing.
SpikeHD exploits spiking neural networks to extract low-level features by preserving the spatial and temporal correlation of raw event-based spike data.
Our evaluation on a set of benchmark classification problems shows that SpikeHD provides benefits over a baseline SNN architecture.
arXiv Detail & Related papers (2021-10-01T05:01:21Z) - Training Convolutional Neural Networks With Hebbian Principal Component Analysis [10.026753669198108]
Hebbian learning can be used for training the lower or the higher layers of a neural network.
We use a nonlinear Hebbian Principal Component Analysis (HPCA) learning rule in place of the Hebbian Winner-Takes-All (HWTA) strategy.
In particular, the HPCA rule is used to train Convolutional Neural Networks in order to extract relevant features from the CIFAR-10 image dataset.
arXiv Detail & Related papers (2020-12-22T18:17:46Z) - Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
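
Several of the entries above, including GraphHD, build on the same bundling and binding primitives used in the main paper. As referenced from the GraphHD entry, here is a generic way to turn a graph into a hypervector: bind (elementwise multiply) the endpoint hypervectors of each edge, then bundle (sum) the edge hypervectors. The codebook size, dimensionality, and node-to-hypervector assignment are illustrative assumptions; GraphHD's own encoding assigns node hypervectors more carefully, so treat this only as a sketch of the underlying operations.

```python
import numpy as np

D = 10_000                      # hypervector dimensionality (assumed)
rng = np.random.default_rng(1)

def random_hv():
    """Draw one random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

def encode_graph(edges, node_hvs):
    """Bind the endpoint hypervectors of every edge, then bundle the
    resulting edge hypervectors into a single graph hypervector."""
    g = np.zeros(D)
    for u, v in edges:
        g += node_hvs[u] * node_hvs[v]
    return np.sign(g)

# Illustrative usage: two small graphs over a shared 4-node codebook.
node_hvs = [random_hv() for _ in range(4)]
triangle = [(0, 1), (1, 2), (2, 0)]
path = [(0, 1), (1, 2), (2, 3)]
h_tri, h_path = encode_graph(triangle, node_hvs), encode_graph(path, node_hvs)
print("similarity:", float(h_tri @ h_path) / D)
```

Graph hypervectors produced this way can be classified exactly as in the earlier classifier sketch, by bundling them into class prototypes and comparing queries by cosine similarity.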
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.