Understanding Hyperdimensional Computing for Parallel Single-Pass Learning
- URL: http://arxiv.org/abs/2202.04805v1
- Date: Thu, 10 Feb 2022 02:38:56 GMT
- Title: Understanding Hyperdimensional Computing for Parallel Single-Pass Learning
- Authors: Tao Yu, Yichi Zhang, Zhiru Zhang, Christopher De Sa
- Abstract summary: We analyze the limits of HDC via the similarity matrices that binary hypervectors can express, and show how those limits can be approached using random Fourier features (RFF).
We propose a new class of VSAs, finite group VSAs, which surpass the limits of HDC.
Experimental results show that our RFF method and group VSA can both outperform the state-of-the-art HDC model by up to 7.6% while maintaining hardware efficiency.
- Score: 47.82940409267635
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Hyperdimensional computing (HDC) is an emerging learning paradigm that
computes with high dimensional binary vectors. It is attractive because of its
energy efficiency and low latency, especially on emerging hardware -- but HDC
suffers from low model accuracy, with little theoretical understanding of what
limits its performance. We propose a new theoretical analysis of the limits of
HDC via a consideration of what similarity matrices can be "expressed" by
binary vectors, and we show how the limits of HDC can be approached using
random Fourier features (RFF). We extend our analysis to the more general class
of vector symbolic architectures (VSA), which compute with high-dimensional
vectors (hypervectors) that are not necessarily binary. We propose a new class
of VSAs, finite group VSAs, which surpass the limits of HDC. Using
representation theory, we characterize which similarity matrices can be
"expressed" by finite group VSA hypervectors, and we show how these VSAs can be
constructed. Experimental results show that our RFF method and group VSA can
both outperform the state-of-the-art HDC model by up to 7.6% while maintaining
hardware efficiency.
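To make the abstract's setting concrete, the following is a minimal, illustrative Python/NumPy sketch, not the paper's implementation: inputs are encoded as binary hypervectors, each class prototype is learned in a single pass by majority-vote bundling, and classification uses Hamming similarity. A thresholded random-Fourier-feature encoder is included alongside a plain random-projection encoder for contrast (binary HDC here corresponds to componentwise operations over {0,1}; the paper's finite group VSAs generalize the entries beyond binary). The dataset, dimensions, and thresholding choices below are assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4096          # hypervector dimension (assumed for the example)
n_features = 16   # input feature dimension (assumed)

# Shared random projection and random phases used by both encoders.
W = rng.standard_normal((D, n_features))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def encode_linear(x):
    # Plain HDC-style encoding: sign of a random projection, as a {0,1} vector.
    return (W @ x > 0).astype(np.uint8)

def encode_rff(x, gamma=1.0):
    # RFF-style encoding: threshold cos(w.x + b); before thresholding, inner
    # products of these features approximate a Gaussian kernel between inputs.
    return (np.cos(gamma * (W @ x) + b) > 0).astype(np.uint8)

def train(encode, X, y, n_classes):
    # Single-pass learning: bundle each class's hypervectors by majority vote.
    sums = np.zeros((n_classes, D))
    for xi, yi in zip(X, y):
        sums[yi] += encode(xi)
    counts = np.bincount(y, minlength=n_classes)[:, None]
    return (sums > counts / 2).astype(np.uint8)   # binary class prototypes

def predict(encode, prototypes, X):
    H = np.stack([encode(xi) for xi in X])
    # Hamming similarity: fraction of matching bits with each prototype (XOR
    # counts mismatches, so similarity = 1 - mean mismatch rate).
    sims = 1.0 - (H[:, None, :] ^ prototypes[None, :, :]).mean(axis=2)
    return sims.argmax(axis=1)

# Tiny synthetic two-class problem (assumed data, for illustration only).
X0 = rng.standard_normal((100, n_features)) + 1.0
X1 = rng.standard_normal((100, n_features)) - 1.0
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

for name, enc in [("linear", encode_linear), ("rff", encode_rff)]:
    protos = train(enc, X, y, n_classes=2)
    acc = (predict(enc, protos, X) == y).mean()
    print(f"{name} encoder training accuracy: {acc:.2f}")
```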
Related papers
- A Walsh Hadamard Derived Linear Vector Symbolic Architecture [83.27945465029167]
Vector Symbolic Architectures (VSAs) are an approach to developing Neuro-symbolic AI.
HLB is designed to have favorable computational efficiency and efficacy in classic VSA tasks.
arXiv Detail & Related papers (2024-10-30T03:42:59Z)
- Compute Better Spent: Replacing Dense Layers with Structured Matrices [77.61728033234233]
We identify more efficient alternatives to dense matrices, as exemplified by the success of convolutional networks in the image domain.
We show that different structures often require drastically different initialization scales and learning rates, which are crucial to performance.
We propose a novel matrix family containing Monarch matrices, the Block-Train, which we show performs better than dense matrices for the same compute on multiple tasks.
arXiv Detail & Related papers (2024-06-10T13:25:43Z)
- Sobol Sequence Optimization for Hardware-Efficient Vector Symbolic Architectures [2.022279594514036]
Hyperdimensional computing (HDC) is an emerging computing paradigm with significant promise for efficient and robust learning.
In HDC, objects are encoded with high-dimensional symbolic vectors called hypervectors.
The quality of hypervectors, defined by their distribution and independence, directly impacts the performance of HDC systems.
arXiv Detail & Related papers (2023-11-17T01:48:07Z)
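As a hedged illustration of the idea summarized in the Sobol-sequence entry above, the sketch below draws binary hypervectors from a scrambled Sobol (low-discrepancy) sequence via scipy.stats.qmc and checks that they come out nearly orthogonal, i.e. with normalized Hamming distance close to 0.5. The dimension, number of vectors, and 0.5 threshold are assumptions made for the example, not the authors' exact construction.

```python
import numpy as np
from scipy.stats import qmc

D = 1024                                   # hypervector dimension (assumed)
sampler = qmc.Sobol(d=D, scramble=True, seed=0)
points = sampler.random_base2(m=3)         # 2**3 = 8 points in [0, 1)^D
hypervectors = (points > 0.5).astype(np.uint8)

# Well-distributed, independent hypervectors should be quasi-orthogonal:
# the normalized Hamming distance between any pair should be close to 0.5.
for i in range(len(hypervectors)):
    for j in range(i + 1, len(hypervectors)):
        dist = np.mean(hypervectors[i] ^ hypervectors[j])
        print(f"pair ({i},{j}): normalized Hamming distance = {dist:.3f}")
```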
- uHD: Unary Processing for Lightweight and Dynamic Hyperdimensional Computing [1.7118124088316602]
Hyperdimensional computing (HDC) is a novel computational paradigm that operates on high-dimensional vectors known as hypervectors.
In this paper, we show how to generate intensity and position hypervectors in HDC using low-discrepancy sequences.
For the first time in the literature, our proposed approach employs lightweight vector generators utilizing unary bit-streams for efficient encoding of data.
arXiv Detail & Related papers (2023-11-16T06:28:19Z)
- Harnessing Deep Learning and HPC Kernels via High-Level Loop and Tensor Abstractions on CPU Architectures [67.47328776279204]
This work introduces a framework to develop efficient, portable Deep Learning and High Performance Computing kernels.
We decompose the kernel development into two steps: 1) expressing the computational core using Tensor Processing Primitives (TPPs) and 2) expressing the logical loops around TPPs in a high-level, declarative fashion.
We demonstrate the efficacy of our approach using standalone kernels and end-to-end workloads that outperform state-of-the-art implementations on diverse CPU platforms.
arXiv Detail & Related papers (2023-04-25T05:04:44Z)
- HDCC: A Hyperdimensional Computing compiler for classification on embedded systems and high-performance computing [58.720142291102135]
This work introduces the HDCC compiler, the first open-source compiler that translates high-level descriptions of HDC classification methods into optimized C code.
HDCC is designed like a modern compiler, featuring an intuitive and descriptive input language, an intermediate representation (IR), and a retargetable backend.
To substantiate these claims, we conducted experiments with HDCC on several of the most popular datasets in the HDC literature.
arXiv Detail & Related papers (2023-04-24T19:16:03Z)
- Efficient Hyperdimensional Computing [4.8915861089531205]
We develop HDC models that use binary hypervectors with dimensions orders of magnitude lower than those of state-of-the-art HDC models.
For instance, on the MNIST dataset, we achieve 91.12% HDC accuracy in image classification with a dimension of only 64.
arXiv Detail & Related papers (2023-01-26T02:22:46Z)
- An Extension to Basis-Hypervectors for Learning from Circular Data in Hyperdimensional Computing [62.997667081978825]
Hyperdimensional Computing (HDC) is a computation framework based on properties of high-dimensional random spaces.
We present a study on basis-hypervector sets, which leads to practical contributions to HDC in general.
We introduce a method to learn from circular data, an important type of information never before addressed in machine learning with HDC.
arXiv Detail & Related papers (2022-05-16T18:04:55Z)
- HDC-MiniROCKET: Explicit Time Encoding in Time Series Classification with Hyperdimensional Computing [14.82489178857542]
MiniROCKET is one of the best existing methods for time series classification.
We extend this approach to provide better global temporal encodings using hyperdimensional computing (HDC) mechanisms.
The extension with HDC can achieve considerably better results on datasets with high temporal dependence without increasing the computational effort for inference.
arXiv Detail & Related papers (2022-02-16T13:33:13Z)
- Hypervector Design for Efficient Hyperdimensional Computing on Edge Devices [0.20971479389679334]
This paper presents a technique to minimize the hypervector dimension while maintaining the accuracy and improving the robustness of the classifier.
The proposed approach decreases the hypervector dimension by more than 32× while maintaining or increasing the accuracy achieved by conventional HDC.
Experiments on a commercial hardware platform show that the proposed approach achieves more than one order of magnitude reduction in model size, inference time, and energy consumption.
arXiv Detail & Related papers (2021-03-08T05:25:45Z)