Sobol Sequence Optimization for Hardware-Efficient Vector Symbolic Architectures
- URL: http://arxiv.org/abs/2311.10277v1
- Date: Fri, 17 Nov 2023 01:48:07 GMT
- Title: Sobol Sequence Optimization for Hardware-Efficient Vector Symbolic Architectures
- Authors: Sercan Aygun, M. Hassan Najafi
- Abstract summary: Hyperdimensional computing (HDC) is an emerging computing paradigm with significant promise for efficient and robust learning.
In HDC, objects are encoded with high-dimensional vector symbolic sequences called hypervectors.
The quality of hypervectors, defined by their distribution and independence, directly impacts the performance of HDC systems.
- Score: 2.022279594514036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperdimensional computing (HDC) is an emerging computing paradigm with
significant promise for efficient and robust learning. In HDC, objects are
encoded with high-dimensional vector symbolic sequences called hypervectors.
The quality of hypervectors, defined by their distribution and independence,
directly impacts the performance of HDC systems. Despite a large body of work
on the processing parts of HDC systems, little to no attention has been paid to
data encoding and the quality of hypervectors. Most prior studies have
generated hypervectors using built-in random functions, such as MATLAB's or
Python's random function. This work introduces an optimization technique for
generating hypervectors by employing quasi-random sequences. These sequences
have recently demonstrated their effectiveness in achieving accurate and
low-discrepancy data encoding in stochastic computing systems. The study
outlines the optimization steps for utilizing Sobol sequences to produce
high-quality hypervectors in HDC systems. An optimization algorithm is proposed
to select the most suitable Sobol sequences for generating minimally correlated
hypervectors, particularly in applications related to symbol-oriented
architectures. The performance of the proposed technique is evaluated in
comparison to two traditional approaches of generating hypervectors based on
linear-feedback shift registers and the MATLAB random function. The evaluation is
conducted for two applications: (i) language and (ii) headline classification.
Our experimental results demonstrate accuracy improvements of up to 10.79%,
depending on the vector size. Additionally, the proposed encoding hardware
exhibits reduced energy consumption and a superior area-delay product.
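To make the encoding idea concrete, below is a minimal Python sketch of the general approach: each dimension of a Sobol point set is thresholded into a candidate binary hypervector, and a greedy pass keeps the candidates whose pairwise normalized Hamming distances stay closest to 0.5, i.e., minimal correlation. This is an illustration under stated assumptions, not the paper's implementation: the 0.5 threshold, the greedy criterion, and all sizes are assumptions, and SciPy's qmc.Sobol stands in for whatever generator the hardware would use.

```python
# Sketch: binary hypervectors from Sobol sequences, plus greedy selection of a
# minimally correlated subset. Illustrative only; thresholds, criterion, and
# sizes are assumptions, not the authors' implementation.
import numpy as np
from scipy.stats import qmc

def sobol_hypervectors(num_candidates: int, dim_log2: int) -> np.ndarray:
    """Each Sobol dimension yields one candidate hypervector of length 2**dim_log2."""
    sampler = qmc.Sobol(d=num_candidates, scramble=False)
    points = sampler.random_base2(m=dim_log2)   # shape (2**m, num_candidates), values in [0, 1)
    return (points.T >= 0.5).astype(np.uint8)   # shape (num_candidates, 2**m)

def select_min_correlated(hvs: np.ndarray, k: int) -> list[int]:
    """Greedily pick k candidates whose worst-case pairwise correlation is smallest."""
    # Normalized Hamming distance near 0.5 means "uncorrelated" for binary vectors,
    # so score each pair by its deviation from 0.5 (0 = ideally uncorrelated).
    dist = np.abs(np.mean(hvs[:, None, :] != hvs[None, :, :], axis=2) - 0.5)
    selected = [0]
    while len(selected) < k:
        rest = [i for i in range(len(hvs)) if i not in selected]
        best = min(rest, key=lambda i: dist[i, selected].max())
        selected.append(best)
    return selected

hvs = sobol_hypervectors(num_candidates=32, dim_log2=10)  # 32 candidates, D = 1024
chosen = select_min_correlated(hvs, k=8)
print("selected Sobol dimensions:", chosen)
```

The hardware appeal, as the abstract notes, is that a Sobol generator plus a comparator can take the place of per-bit pseudo-random sources such as LFSRs while producing better-distributed, less correlated hypervectors.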
Related papers
- LLM-Vectorizer: LLM-based Verified Loop Vectorizer [12.048697450464935]
Large language models (LLMs) can generate vectorized code from scalar programs that process individual array elements.
LLMs are capable of producing high performance vectorized code with run-time speedup ranging from 1.1x to 9.4x.
Our approach is able to verify 38.2% of vectorizations as correct on the TSVC benchmark dataset.
arXiv Detail & Related papers (2024-06-07T07:04:26Z)
- uHD: Unary Processing for Lightweight and Dynamic Hyperdimensional Computing [1.7118124088316602]
Hyperdimensional computing (HDC) is a novel computational paradigm that operates on long-dimensional vectors known as hypervectors.
In this paper, we show how to generate intensity and position hypervectors in HDC using low-discrepancy sequences.
For the first time in the literature, our proposed approach employs lightweight vector generators utilizing unary bit-streams for efficient encoding of data (a hedged sketch of this bit-stream idea follows the list below).
arXiv Detail & Related papers (2023-11-16T06:28:19Z)
- CORE: Common Random Reconstruction for Distributed Optimization with Provable Low Communication Complexity [110.50364486645852]
Communication complexity has become a major bottleneck for speeding up training and scaling up the number of machines.
We propose Common Random Reconstruction (CORE), which can be used to compress the information transmitted between machines.
arXiv Detail & Related papers (2023-09-23T08:45:27Z)
- Learning from Hypervectors: A Survey on Hypervector Encoding [9.46717806608802]
Hyperdimensional computing (HDC) is an emerging computing paradigm that imitates the brain's structure to offer a powerful and efficient processing and learning model.
In HDC, the data are encoded with long vectors, called hypervectors, typically with a length of 1K to 10K.
arXiv Detail & Related papers (2023-08-01T17:42:35Z)
- Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils.
arXiv Detail & Related papers (2023-03-14T15:51:35Z)
- An Extension to Basis-Hypervectors for Learning from Circular Data in Hyperdimensional Computing [62.997667081978825]
Hyperdimensional Computing (HDC) is a computation framework based on properties of high-dimensional random spaces.
We present a study on basis-hypervector sets, which leads to practical contributions to HDC in general.
We introduce a method to learn from circular data, an important type of information never before addressed in machine learning with HDC.
arXiv Detail & Related papers (2022-05-16T18:04:55Z)
- Understanding Hyperdimensional Computing for Parallel Single-Pass Learning [47.82940409267635]
We propose a new class of VSAs, finite group VSAs, which surpass the limits of HDC.
Experimental results show that our RFF method and group VSA can both outperform the state-of-the-art HDC model by up to 7.6% while maintaining hardware efficiency.
arXiv Detail & Related papers (2022-02-10T02:38:56Z)
- Shift-Equivariant Similarity-Preserving Hypervector Representations of Sequences [0.8223798883838329]
We propose an approach for the formation of hypervectors of sequences.
Our methods represent the sequence elements by compositional hypervectors.
We experimentally explored the proposed representations using a diverse set of tasks with data in the form of symbolic strings.
arXiv Detail & Related papers (2021-12-31T14:29:12Z)
- Highly Parallel Autoregressive Entity Linking with Discriminative Correction [51.947280241185]
We propose a very efficient approach that parallelizes autoregressive linking across all potential mentions.
Our model is >70 times faster and more accurate than the previous generative method.
arXiv Detail & Related papers (2021-09-08T17:28:26Z)
- Reducing the Variance of Gaussian Process Hyperparameter Optimization with Preconditioning [54.01682318834995]
Preconditioning is a highly effective step for any iterative method involving matrix-vector multiplication.
We prove that preconditioning has a previously unexplored additional benefit: it can simultaneously reduce variance at essentially negligible cost.
arXiv Detail & Related papers (2021-07-01T06:43:11Z)
- SHEARer: Highly-Efficient Hyperdimensional Computing by Software-Hardware Enabled Multifold Approximation [7.528764144503429]
We propose SHEARer, an algorithm-hardware co-optimization to improve the performance and energy consumption of HD computing.
SHEARer achieves an average throughput boost of 104,904x (15.7x) and energy savings of up to 56,044x (301x) compared to state-of-the-art encoding methods.
We also develop a software framework that enables training HD models by emulating the proposed approximate encodings.
arXiv Detail & Related papers (2020-07-20T07:58:44Z)
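As a companion to the uHD entry above, here is a hedged sketch of low-discrepancy unary/stochastic bit-stream encoding: a scalar in [0, 1] becomes a bit-stream by comparing it against Sobol samples, so the ones-density of the stream encodes the value. The function name, stream length, and the use of SciPy's qmc.Sobol are illustrative assumptions, not uHD's implementation.

```python
# Sketch: encoding a scalar as a low-discrepancy bit-stream. Names and sizes
# are assumptions for illustration; uHD's actual generators differ.
import numpy as np
from scipy.stats import qmc

def ld_bitstream(value: float, length_log2: int = 8) -> np.ndarray:
    """Bit i is 1 iff the i-th 1-D Sobol sample falls below `value`, so the
    ones-density matches `value` with far less variance than i.i.d. comparison."""
    samples = qmc.Sobol(d=1, scramble=False).random_base2(m=length_log2).ravel()
    return (samples < value).astype(np.uint8)

stream = ld_bitstream(0.3)
print(stream.mean())  # ~0.30: the stream's ones-density encodes the value
```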
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.