uHD: Unary Processing for Lightweight and Dynamic Hyperdimensional
Computing
- URL: http://arxiv.org/abs/2311.10778v1
- Date: Thu, 16 Nov 2023 06:28:19 GMT
- Title: uHD: Unary Processing for Lightweight and Dynamic Hyperdimensional
Computing
- Authors: Sercan Aygun, Mehran Shoushtari Moghadam, M. Hassan Najafi
- Abstract summary: Hyperdimensional computing (HDC) is a novel computational paradigm that operates on high-dimensional vectors known as hypervectors.
In this paper, we show how to generate intensity and position hypervectors in HDC using low-discrepancy sequences.
For the first time in the literature, our proposed approach employs lightweight vector generators utilizing unary bit-streams for efficient encoding of data.
- Score: 1.7118124088316602
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperdimensional computing (HDC) is a novel computational paradigm that operates on high-dimensional vectors known as hypervectors. The hypervectors
are constructed as long bit-streams and form the basic building blocks of HDC
systems. In HDC, hypervectors are generated from scalar values without taking
their bit significance into consideration. HDC has been shown to be efficient
and robust in various data processing applications, including computer vision
tasks. To construct HDC models for vision applications, the current
state-of-the-art practice utilizes two parameters for data encoding: pixel
intensity and pixel position. However, the intensity and position information embedded in these high-dimensional vectors is generally not generated dynamically in HDC models. Consequently, the optimal design of hypervectors with high
model accuracy requires powerful computing platforms for training. A more
efficient approach to generating hypervectors is to create them dynamically
during the training phase, which results in accurate, low-cost, and highly performant vectors. To this end, we use low-discrepancy sequences to generate intensity hypervectors only, while avoiding position hypervectors. By doing so,
the multiplication step in vector encoding is eliminated, resulting in a
power-efficient HDC system. For the first time in the literature, our proposed
approach employs lightweight vector generators utilizing unary bit-streams for
efficient encoding of data instead of using conventional comparator-based
generators.
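The paper itself includes no code, but the stated idea can be sketched. The snippet below is a minimal, illustrative reconstruction in Python/NumPy: pixel intensities become unary-style bit-streams whose fraction of 1s equals the intensity, generated against a shared Sobol low-discrepancy sequence (SciPy's generator is our assumption; the paper only says "low-discrepancy sequences"), and an image is bundled by majority vote with no position hypervectors, so the binding/multiplication step disappears. All function names and parameters here are ours, not the authors'.

```python
import numpy as np
from scipy.stats import qmc  # Sobol low-discrepancy sequence generator

D = 2 ** 13  # hypervector length (a power of two keeps Sobol balanced)

# One shared low-discrepancy sequence drives every intensity encoding.
ld_seq = qmc.Sobol(d=1, scramble=True, seed=0).random_base2(m=13).ravel()

def intensity_hv(pixel, levels=256):
    """Unary-style bit-stream: bit i is 1 iff the normalized intensity
    exceeds the i-th LD sample, so the fraction of 1s equals the
    intensity. (The paper's exact unary generator may differ.)"""
    return (pixel / levels > ld_seq).astype(np.uint8)

def encode_image(img):
    """Bundle per-pixel intensity hypervectors by majority vote.
    No position hypervectors are used, so the binding (XOR/multiply)
    step of conventional two-parameter encoders is eliminated."""
    hvs = np.stack([intensity_hv(p) for p in img.ravel()])
    return (hvs.sum(axis=0) * 2 > len(hvs)).astype(np.uint8)

img = np.random.randint(0, 256, size=(8, 8))  # toy grayscale patch
print(encode_image(img)[:20])
```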
Related papers
- Sobol Sequence Optimization for Hardware-Efficient Vector Symbolic Architectures [2.022279594514036]
Hyperdimensional computing (HDC) is an emerging computing paradigm with significant promise for efficient and robust learning.
In HDC, objects are encoded with high-dimensional vector symbolic sequences called hypervectors.
The quality of hypervectors, defined by their distribution and independence, directly impacts the performance of HDC systems.
arXiv Detail & Related papers (2023-11-17T01:48:07Z)
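As a rough illustration of the quality criterion in the entry above, the hedged sketch below draws hypervectors from scrambled Sobol dimensions and checks that their pairwise cosine similarities are near zero (quasi-orthogonality). The thresholding construction is our assumption, not necessarily the paper's exact method.

```python
import numpy as np
from scipy.stats import qmc

D, K = 2 ** 13, 8   # hypervector length, number of hypervectors

# Each scrambled Sobol dimension yields one hypervector: threshold the
# samples at 0.5 for a balanced bipolar {-1, +1} vector.
samples = qmc.Sobol(d=K, scramble=True, seed=0).random_base2(m=13)
hvs = np.where(samples > 0.5, 1.0, -1.0)            # shape (D, K)

# Independence check: off-diagonal cosine similarities should be ~0,
# i.e., well-distributed hypervectors are quasi-orthogonal.
cos = (hvs.T @ hvs) / D
print(np.round(cos, 3))
```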
- Learning from Hypervectors: A Survey on Hypervector Encoding [9.46717806608802]
Hyperdimensional computing (HDC) is an emerging computing paradigm that imitates the brain's structure to offer a powerful and efficient processing and learning model.
In HDC, the data are encoded with long vectors, called hypervectors, typically with a length of 1K to 10K.
arXiv Detail & Related papers (2023-08-01T17:42:35Z)
- Harnessing Deep Learning and HPC Kernels via High-Level Loop and Tensor Abstractions on CPU Architectures [67.47328776279204]
This work introduces a framework to develop efficient, portable Deep Learning and High Performance Computing kernels.
We decompose the kernel development into two steps: 1) expressing the computational core using Tensor Processing Primitives (TPPs) and 2) expressing the logical loops around TPPs in a high-level, declarative fashion.
We demonstrate the efficacy of our approach using standalone kernels and end-to-end workloads that outperform state-of-the-art implementations on diverse CPU platforms.
arXiv Detail & Related papers (2023-04-25T05:04:44Z)
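The two-step decomposition in the entry above can be illustrated generically. The sketch below is a toy Python analogue, not the paper's actual TPP API: a fixed-size micro-kernel plays the role of a TPP, and the logical loops around it are given as a declarative specification that a framework could reorder or parallelize.

```python
from itertools import product
import numpy as np

TILE = 32  # fixed micro-kernel tile size (illustrative)

def tpp_gemm(a_tile, b_tile, c_tile):
    """Step 1: the computational core as a TPP-like primitive --
    a small fixed-size tile multiply-accumulate."""
    c_tile += a_tile @ b_tile

def run(loop_spec, body):
    """Step 2: logical loops expressed declaratively; a real framework
    could tile, reorder, or parallelize this spec before running it."""
    for idx in product(*(range(n) for n in loop_spec.values())):
        body(dict(zip(loop_spec, idx)))

M = N = K = 128
A, B = np.random.rand(M, K), np.random.rand(K, N)
C = np.zeros((M, N))

def tile(mat, r, c):
    return mat[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE]

run({"i": M // TILE, "j": N // TILE, "k": K // TILE},
    lambda ix: tpp_gemm(tile(A, ix["i"], ix["k"]),
                        tile(B, ix["k"], ix["j"]),
                        tile(C, ix["i"], ix["j"])))

assert np.allclose(C, A @ B)  # the scheduled loops compute the full GEMM
```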
- HDCC: A Hyperdimensional Computing compiler for classification on embedded systems and high-performance computing [58.720142291102135]
This work introduces HDCC, the first open-source compiler that translates high-level descriptions of HDC classification methods into optimized C code.
HDCC is designed like a modern compiler, featuring an intuitive and descriptive input language, an intermediate representation (IR), and a retargetable backend.
To substantiate these claims, we conducted experiments with HDCC on several of the most popular datasets in the HDC literature.
arXiv Detail & Related papers (2023-04-24T19:16:03Z)
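To make the pipeline shape described above concrete, here is a toy sketch of a compiler of that form (description -> IR -> backend). The input format, IR node, and emitted C below are invented for illustration; they are not HDCC's actual language or output.

```python
from dataclasses import dataclass

@dataclass
class EncodeOp:           # toy IR node: bind a feature with its level HV
    feature: str
    levels: int

def parse(description):
    """Front end: turn a high-level description into IR nodes."""
    return [EncodeOp(f, description["levels"]) for f in description["features"]]

def emit_c(ir, dim):
    """Backend: emit C for one target; a retargetable design would
    swap only this stage to support other platforms."""
    lines = [f"#define DIM {dim}", "void encode(const int *x, int *hv) {"]
    for i, op in enumerate(ir):
        lines.append(f"  bind(hv, level_hv(x[{i}], {op.levels}), item_hv_{op.feature});")
    lines.append("}")
    return "\n".join(lines)

ir = parse({"features": ["f0", "f1"], "levels": 256})
print(emit_c(ir, 10000))
```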
- An Extension to Basis-Hypervectors for Learning from Circular Data in Hyperdimensional Computing [62.997667081978825]
Hyperdimensional Computing (HDC) is a computation framework based on properties of high-dimensional random spaces.
We present a study on basis-hypervector sets, which leads to practical contributions to HDC in general.
We introduce a method to learn from circular data, an important type of information never before addressed in machine learning with HDC.
arXiv Detail & Related papers (2022-05-16T18:04:55Z)
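One way such a circular encoding can work is sketched below: hypervectors are placed around the circle by flipping disjoint coordinate blocks on the way out and unflipping them on the way back, so similarity decays with circular rather than linear distance. This construction is our illustrative assumption, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
D, S = 8192, 16            # dimension, number of angular bins (even)

# Build S hypervectors around the circle: the first S/2 steps each flip
# a fresh disjoint block of coordinates; the remaining steps re-flip the
# same blocks in order, so step S wraps back to step 0.
blocks = rng.permutation(D).reshape(S // 2, -1)
hv = rng.choice([-1.0, 1.0], size=D)
circle = [hv.copy()]
for step in range(S - 1):
    hv[blocks[step % (S // 2)]] *= -1
    circle.append(hv.copy())
circle = np.stack(circle)

def angle_hv(theta):
    """Hypervector for an angle in [0, 2*pi): nearest circular bin."""
    return circle[int(theta / (2 * np.pi) * S) % S]

# Similarity is high for nearby angles and lowest for opposite ones.
print(circle[0] @ angle_hv(np.pi / 8) / D,   # ~0.75
      circle[0] @ angle_hv(np.pi) / D)       # -1.0
```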
- Understanding Hyperdimensional Computing for Parallel Single-Pass Learning [47.82940409267635]
We propose a new class of VSAs, finite group VSAs, which surpass the limits of HDC.
Experimental results show that our RFF method and group VSA can both outperform the state-of-the-art HDC model by up to 7.6% while maintaining hardware efficiency.
arXiv Detail & Related papers (2022-02-10T02:38:56Z)
- Highly Parallel Autoregressive Entity Linking with Discriminative Correction [51.947280241185]
We propose a very efficient approach that parallelizes autoregressive linking across all potential mentions.
Our model is >70 times faster and more accurate than the previous generative method.
arXiv Detail & Related papers (2021-09-08T17:28:26Z)
- Hypervector Design for Efficient Hyperdimensional Computing on Edge Devices [0.20971479389679334]
This paper presents a technique to minimize the hypervector dimension while maintaining the accuracy and improving the robustness of the classifier.
The proposed approach decreases the hypervector dimension by more than $32\times$ while maintaining or increasing the accuracy achieved by conventional HDC.
Experiments on a commercial hardware platform show that the proposed approach achieves more than one order of magnitude reduction in model size, inference time, and energy consumption.
arXiv Detail & Related papers (2021-03-08T05:25:45Z)
- SHEARer: Highly-Efficient Hyperdimensional Computing by Software-Hardware Enabled Multifold Approximation [7.528764144503429]
We propose SHEARer, an algorithm-hardware co-optimization to improve the performance and energy consumption of HD computing.
SHEARer achieves an average throughput boost of 104,904x (15.7x) and energy savings of up to 56,044x (301x) compared to state-of-the-art encoding methods.
We also develop a software framework that enables training HD models by emulating the proposed approximate encodings.
arXiv Detail & Related papers (2020-07-20T07:58:44Z)
- PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives [55.79741270235602]
We present compiler algorithms to automatically generate high performance implementations of Deep Learning primitives.
We develop novel data reuse analysis algorithms using the polyhedral model.
We also show that such a hybrid compiler plus a minimal library-use approach results in state-of-the-art performance.
arXiv Detail & Related papers (2020-06-02T06:44:09Z)
- Classification using Hyperdimensional Computing: A Review [16.329917143918028]
This paper introduces the background of HD computing, and reviews the data representation, data transformation, and similarity measurement.
Evaluations indicate that HD computing shows great potential in addressing problems using data in the form of letters, signals and images.
arXiv Detail & Related papers (2020-04-19T23:51:44Z)
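For reference, the two similarity measures most commonly covered in such HDC reviews take only a few lines; the sketch below shows normalized Hamming distance for binary hypervectors and cosine similarity for their bipolar counterparts (variable names are ours).

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10000

a = rng.integers(0, 2, D, dtype=np.uint8)   # two random binary hypervectors
b = rng.integers(0, 2, D, dtype=np.uint8)

# Normalized Hamming distance: ~0.5 for unrelated hypervectors; a query
# is classified by the class hypervector at the smallest distance.
hamming = np.count_nonzero(a != b) / D

# Cosine similarity, the usual choice for bipolar {-1, +1} hypervectors.
ap, bp = 2 * a.astype(np.int32) - 1, 2 * b.astype(np.int32) - 1
cosine = (ap @ bp) / np.sqrt((ap @ ap) * (bp @ bp))

print(f"hamming={hamming:.3f}  cosine={cosine:.3f}")  # both near chance
```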