Hypervector Design for Efficient Hyperdimensional Computing on Edge
Devices
- URL: http://arxiv.org/abs/2103.06709v1
- Date: Mon, 8 Mar 2021 05:25:45 GMT
- Title: Hypervector Design for Efficient Hyperdimensional Computing on Edge
Devices
- Authors: Toygun Basaklar, Yigit Tuncel, Shruti Yadav Narayana, Suat Gumussoy,
and Umit Y. Ogras
- Abstract summary: This paper presents a technique to minimize the hypervector dimension while maintaining the accuracy and improving the robustness of the classifier.
The proposed approach decreases the hypervector dimension by more than $32\times$ while maintaining or increasing the accuracy achieved by conventional HDC.
Experiments on a commercial hardware platform show that the proposed approach achieves more than one order of magnitude reduction in model size, inference time, and energy consumption.
- Score: 0.20971479389679334
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Hyperdimensional computing (HDC) has emerged as a new light-weight learning
algorithm with smaller computation and energy requirements compared to
conventional techniques. In HDC, data points are represented by
high-dimensional vectors (hypervectors), which are mapped to high-dimensional
space (hyperspace). Typically, a large hypervector dimension ($\geq1000$) is
required to achieve accuracies comparable to conventional alternatives.
However, unnecessarily large hypervectors increase hardware and energy costs,
which can undermine their benefits. This paper presents a technique to minimize
the hypervector dimension while maintaining the accuracy and improving the
robustness of the classifier. To this end, we formulate the hypervector design
as a multi-objective optimization problem for the first time in the literature.
The proposed approach decreases the hypervector dimension by more than
$32\times$ while maintaining or increasing the accuracy achieved by
conventional HDC. Experiments on a commercial hardware platform show that the
proposed approach achieves more than one order of magnitude reduction in model
size, inference time, and energy consumption. We also demonstrate the trade-off
between accuracy and robustness to noise and provide Pareto front solutions as
a design parameter in our hypervector design.
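To make the HDC pipeline described above concrete, the following is a minimal illustrative sketch of conventional HDC classification with bipolar hypervectors: random item memory, bundling into class prototypes, and nearest-prototype inference. The toy dimension, symbols, and class names are arbitrary choices for illustration, not the paper's optimized design.

```python
# Minimal HDC classification sketch (illustrative, not the paper's method).
import random

random.seed(0)
D = 256  # hypervector dimension; conventional HDC typically uses >= 1000

def rand_hv():
    """Random bipolar (+1/-1) hypervector."""
    return [random.choice((-1, 1)) for _ in range(D)]

def bundle(hvs):
    """Element-wise majority vote: superimposes a set of hypervectors."""
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def similarity(a, b):
    """Normalized dot product (cosine similarity for bipolar vectors)."""
    return sum(x * y for x, y in zip(a, b)) / D

# Item memory: one random hypervector per discrete symbol.
symbols = "abcd"
item_mem = {s: rand_hv() for s in symbols}

# Class prototypes: bundle the hypervectors of each class's training symbols.
train = {"class_A": ["a", "a", "b"], "class_B": ["c", "d", "d"]}
prototypes = {c: bundle([item_mem[s] for s in xs]) for c, xs in train.items()}

def classify(symbol):
    """Return the class whose prototype is most similar to the query."""
    q = item_mem[symbol]
    return max(prototypes, key=lambda c: similarity(prototypes[c], q))

print(classify("a"))  # most similar to the prototype that bundled "a"
```

Because random hypervectors in high dimensions are nearly orthogonal, a query remains far more similar to the prototype it was bundled into than to any other, which is the property the paper exploits when shrinking the dimension.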
Related papers
- Sobol Sequence Optimization for Hardware-Efficient Vector Symbolic Architectures [2.022279594514036]
Hyperdimensional computing (HDC) is an emerging computing paradigm with significant promise for efficient and robust learning.
Objects are encoded with high-dimensional vector symbolic sequences called hypervectors.
The quality of hypervectors, defined by their distribution and independence, directly impacts the performance of HDC systems.
arXiv Detail & Related papers (2023-11-17T01:48:07Z)
- uHD: Unary Processing for Lightweight and Dynamic Hyperdimensional Computing [1.7118124088316602]
Hyperdimensional computing (HDC) is a novel computational paradigm that operates on high-dimensional vectors known as hypervectors.
In this paper, we show how to generate intensity and position hypervectors in HDC using low-discrepancy sequences.
For the first time in the literature, our proposed approach employs lightweight vector generators utilizing unary bit-streams for efficient encoding of data.
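The low-discrepancy idea above can be sketched as follows. This is a hedged illustration using a Van der Corput sequence as a simple stand-in: uHD's actual generators (Sobol-style sequences and unary bit-streams) are not reproduced here, and the dimension and intensity are arbitrary.

```python
# Hedged sketch: build a binary hypervector whose fraction of ones tracks a
# target intensity, using a low-discrepancy sequence instead of pseudo-random
# draws, so the ones-density matches the target more evenly.

def van_der_corput(n, base=2):
    """n-th element of the base-b Van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def intensity_hv(intensity, dim=64):
    """Binary hypervector with ~`intensity` fraction of ones."""
    return [1 if van_der_corput(i + 1) < intensity else 0 for i in range(dim)]

hv = intensity_hv(0.25, dim=64)
print(sum(hv) / len(hv))  # very close to 0.25 by construction
```

Because consecutive Van der Corput values fill the unit interval evenly, the realized ones-density converges to the target intensity much faster than with independent random bits, which is the quality property (distribution and independence) these papers optimize.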
arXiv Detail & Related papers (2023-11-16T06:28:19Z)
- CORE: Common Random Reconstruction for Distributed Optimization with Provable Low Communication Complexity [110.50364486645852]
Communication complexity has become a major bottleneck for speeding up training and scaling up the number of machines.
We propose Common Random Reconstruction (CORE), which can be used to compress information transmitted between machines.
arXiv Detail & Related papers (2023-09-23T08:45:27Z)
- Efficient Hyperdimensional Computing [4.8915861089531205]
We develop HDC models that use binary hypervectors with dimensions orders of magnitude lower than those of state-of-the-art HDC models.
For instance, on the MNIST dataset, we achieve 91.12% HDC accuracy in image classification with a dimension of only 64.
arXiv Detail & Related papers (2023-01-26T02:22:46Z)
- An Extension to Basis-Hypervectors for Learning from Circular Data in Hyperdimensional Computing [62.997667081978825]
Hyperdimensional Computing (HDC) is a computation framework based on properties of high-dimensional random spaces.
We present a study on basis-hypervector sets, which leads to practical contributions to HDC in general.
We introduce a method to learn from circular data, an important type of information never before addressed in machine learning with HDC.
arXiv Detail & Related papers (2022-05-16T18:04:55Z)
- Understanding Hyperdimensional Computing for Parallel Single-Pass Learning [47.82940409267635]
We propose a new class of VSAs, finite group VSAs, which surpass the limits of HDC.
Experimental results show that our RFF method and group VSA can both outperform the state-of-the-art HDC model by up to 7.6% while maintaining hardware efficiency.
arXiv Detail & Related papers (2022-02-10T02:38:56Z)
- Reducing the Variance of Gaussian Process Hyperparameter Optimization with Preconditioning [54.01682318834995]
Preconditioning is a highly effective step for any iterative method involving matrix-vector multiplication.
We prove that preconditioning has an additional, previously unexplored benefit: it can simultaneously reduce variance at essentially negligible cost.
arXiv Detail & Related papers (2021-07-01T06:43:11Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for the ansätze used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- SHEARer: Highly-Efficient Hyperdimensional Computing by Software-Hardware Enabled Multifold Approximation [7.528764144503429]
We propose SHEARer, an algorithm-hardware co-optimization to improve the performance and energy consumption of HD computing.
SHEARer achieves an average throughput boost of 104,904x (15.7x) and energy savings of up to 56,044x (301x) compared to state-of-the-art encoding methods.
We also develop a software framework that enables training HD models by emulating the proposed approximate encodings.
arXiv Detail & Related papers (2020-07-20T07:58:44Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
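The sketch-and-solve idea can be illustrated with a toy example. This is a hedged pure-Python sketch using a plain Gaussian embedding; the paper's adaptive, effective-dimension-aware variants and the SRHT are not reproduced, and the problem sizes are arbitrary.

```python
# Sketch-and-solve for L2-regularized least squares: replace the tall n x d
# matrix A with the much smaller S @ A (m << n rows), then solve the ridge
# normal equations on the sketched system.
import random

random.seed(1)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def solve(M, y):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(M)
    A = [row[:] + [y[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * z for x, z in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

def ridge(A, b, lam):
    """Solve the normal equations (A^T A + lam*I) x = A^T b."""
    At = [list(col) for col in zip(*A)]
    M = matmul(At, A)
    for i in range(len(M)):
        M[i][i] += lam
    rhs = [sum(a * v for a, v in zip(col, b)) for col in At]
    return solve(M, rhs)

# Toy problem: n = 200 rows, d = 3 features, noiseless targets.
n, d, x_true = 200, 3, [1.0, -2.0, 0.5]
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
b = [sum(a * x for a, x in zip(row, x_true)) for row in A]

# Gaussian embedding: m = 40 sketch rows with entries N(0, 1/m).
m = 40
S = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
SA = matmul(S, A)
Sb = [sum(s * v for s, v in zip(row, b)) for row in S]

x_full = ridge(A, b, lam=1e-6)
x_sketch = ridge(SA, Sb, lam=1e-6)
print(x_full)    # close to x_true
print(x_sketch)  # close to x_true at a fraction of the cost for large n
```

The sketched system has only m rows, so forming and solving its normal equations costs far less than the full problem while, for a well-chosen m, the solution stays close to the full ridge solution.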
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.