Robust and Scalable Hyperdimensional Computing With Brain-Like Neural Adaptations
- URL: http://arxiv.org/abs/2311.07705v1
- Date: Mon, 13 Nov 2023 19:42:33 GMT
- Title: Robust and Scalable Hyperdimensional Computing With Brain-Like Neural Adaptations
- Authors: Junyao Wang, Mohammad Abdullah Al Faruque
- Abstract summary: Internet of Things (IoT) has facilitated many applications utilizing edge-based machine learning (ML) methods to analyze locally collected data.
Brain-inspired hyperdimensional computing (HDC) has been introduced to address this issue.
Existing HDCs use static encoders, requiring extremely high dimensionality and hundreds of training iterations to achieve reasonable accuracy.
We present dynamic HDC learning frameworks that identify and regenerate undesired dimensions to provide adequate accuracy with significantly lowered dimensionalities.
- Score: 17.052624039805856
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The Internet of Things (IoT) has facilitated many applications utilizing
edge-based machine learning (ML) methods to analyze locally collected data.
Unfortunately, popular ML algorithms often require intensive computations
beyond the capabilities of today's IoT devices. Brain-inspired hyperdimensional
computing (HDC) has been introduced to address this issue. However, existing
HDCs use static encoders, requiring extremely high dimensionality and hundreds
of training iterations to achieve reasonable accuracy. This results in a huge
efficiency loss, severely impeding the application of HDCs in IoT systems. We
observed that a main cause is that the encoding module of existing HDCs lacks
the capability to utilize and adapt to information learned during training. In
contrast, neurons in human brains dynamically regenerate all the time and
provide more useful functionalities when learning new information. While the
goal of HDC is to exploit the high-dimensionality of randomly generated base
hypervectors to represent the information as a pattern of neural activity, it
remains challenging for existing HDCs to support a similar behavior as brain
neural regeneration. In this work, we present dynamic HDC learning frameworks
that identify and regenerate undesired dimensions to provide adequate accuracy
with significantly lowered dimensionalities, thereby accelerating both the
training and inference.
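The regeneration idea described above can be sketched as a small toy in NumPy. This is an illustrative reconstruction, not the authors' algorithm: the variance-based significance score, the 10% regeneration rate, and the tanh encoder are all assumptions chosen to make the mechanism concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000   # hypervector dimensionality (toy; real HDC often uses ~10,000)
F = 20     # number of input features
C = 3      # number of classes

# Static encoder: one randomly generated base hypervector per feature.
bases = rng.standard_normal((F, D))

def encode(x):
    """Project a feature vector into the hyperdimensional space."""
    return np.tanh(x @ bases)  # nonlinearity keeps components bounded

# Train class hypervectors by bundling (summing) encoded samples.
X = rng.standard_normal((300, F))
y = rng.integers(0, C, 300)
class_hvs = np.zeros((C, D))
for xi, yi in zip(X, y):
    class_hvs[yi] += encode(xi)

# "Undesired" dimensions: here, ones whose values barely vary across
# class hypervectors, i.e. they contribute little to separating classes.
significance = class_hvs.var(axis=0)
k = D // 10                          # regenerate the worst 10% (assumed rate)
worst = np.argsort(significance)[:k]

# Regenerate: draw fresh random base values for those dimensions and
# re-accumulate only the regenerated components of each class hypervector.
bases[:, worst] = rng.standard_normal((F, k))
class_hvs[:, worst] = 0.0
for xi, yi in zip(X, y):
    class_hvs[yi, worst] += np.tanh(xi @ bases[:, worst])

# Inference: pick the class whose hypervector is most similar to the query.
pred = int(np.argmax(class_hvs @ encode(X[0])))
```

Because only the regenerated dimensions are retrained, the encoder adapts to what was learned while the overall dimensionality stays fixed, which is how such a scheme can reach adequate accuracy at a much lower D than a static encoder.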
Related papers
- Efficient and accurate neural field reconstruction using resistive memory [52.68088466453264]
Traditional signal reconstruction methods on digital computers face both software and hardware challenges.
We propose a systematic approach with software-hardware co-optimizations for signal reconstruction from sparse inputs.
This work advances the AI-driven signal restoration technology and paves the way for future efficient and robust medical AI and 3D vision applications.
arXiv Detail & Related papers (2024-04-15T09:33:09Z)
- Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design, random resistive memory-based deep extreme point learning machine (DEPLM)
Our co-design system achieves huge energy efficiency improvements and training cost reduction when compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z)
- Dynamic Early Exiting Predictive Coding Neural Networks [3.542013483233133]
As demand grows for smaller yet more accurate devices, deep learning models have become too heavy to deploy.
We propose a shallow bidirectional network based on predictive coding theory and dynamic early exiting for halting further computations.
We achieve comparable accuracy to VGG-16 in image classification on CIFAR-10 with fewer parameters and less computational complexity.
arXiv Detail & Related papers (2023-09-05T08:00:01Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be applied to cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- To Spike or Not To Spike: A Digital Hardware Perspective on Deep Learning Acceleration [4.712922151067433]
As deep learning models scale, they become increasingly competitive across domains spanning from computer vision to natural language processing.
The power efficiency of the biological brain outperforms any large-scale deep learning (DL) model.
Neuromorphic computing tries to mimic brain operations to improve the efficiency of DL models.
arXiv Detail & Related papers (2023-06-27T19:04:00Z)
- Late Breaking Results: Scalable and Efficient Hyperdimensional Computing for Network Intrusion Detection [8.580557246382142]
CyberHD is an innovative HDC learning framework that identifies and regenerates insignificant dimensions to capture complicated patterns of cyber threats with remarkably lower dimensionality.
Furthermore, the holographic distribution of patterns in high dimensional space provides CyberHD with notably high robustness against hardware errors.
arXiv Detail & Related papers (2023-04-11T21:30:24Z)
- DistHD: A Learner-Aware Dynamic Encoding Method for Hyperdimensional Classification [10.535034643999344]
We propose DistHD, a novel dynamic encoding technique for HDC adaptive learning.
Our proposed algorithm DistHD successfully achieves the desired accuracy with considerably lower dimensionality.
arXiv Detail & Related papers (2023-04-11T21:18:52Z)
- An Extension to Basis-Hypervectors for Learning from Circular Data in Hyperdimensional Computing [62.997667081978825]
Hyperdimensional Computing (HDC) is a computation framework based on properties of high-dimensional random spaces.
We present a study on basis-hypervector sets, which leads to practical contributions to HDC in general.
We introduce a method to learn from circular data, an important type of information never before addressed in machine learning with HDC.
arXiv Detail & Related papers (2022-05-16T18:04:55Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Spiking Hyperdimensional Network: Neuromorphic Models Integrated with Memory-Inspired Framework [8.910420030964172]
We propose SpikeHD, the first framework that fundamentally combines Spiking neural network and hyperdimensional computing.
SpikeHD exploits spiking neural networks to extract low-level features by preserving the spatial and temporal correlation of raw event-based spike data.
Our evaluation on a set of benchmark classification problems shows that SpikeHD provides substantial benefits compared to a pure SNN architecture.
arXiv Detail & Related papers (2021-10-01T05:01:21Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.