SHEARer: Highly-Efficient Hyperdimensional Computing by
Software-Hardware Enabled Multifold Approximation
- URL: http://arxiv.org/abs/2007.10330v1
- Date: Mon, 20 Jul 2020 07:58:44 GMT
- Title: SHEARer: Highly-Efficient Hyperdimensional Computing by
Software-Hardware Enabled Multifold Approximation
- Authors: Behnam Khaleghi, Sahand Salamat, Anthony Thomas, Fatemeh Asgarinejad,
Yeseong Kim, and Tajana Rosing
- Abstract summary: We propose SHEARer, an algorithm-hardware co-optimization to improve the performance and energy consumption of HD computing.
SHEARer achieves an average throughput boost of 104,904x (15.7x) and energy savings of up to 56,044x (301x) compared to state-of-the-art encoding methods.
We also develop a software framework that enables training HD models by emulating the proposed approximate encodings.
- Score: 7.528764144503429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperdimensional computing (HD) is an emerging paradigm for machine learning
based on the evidence that the brain computes on high-dimensional, distributed,
representations of data. The main operation of HD is encoding, which transfers
the input data to hyperspace by mapping each input feature to a hypervector,
accompanied by a so-called bundling procedure that simply adds up the
hypervectors to realize the encoding hypervector. Although the operations of HD are
highly parallelizable, the massive number of operations hampers the efficiency
of HD in the embedded domain. In this paper, we propose SHEARer, an
algorithm-hardware co-optimization to improve the performance and energy
consumption of HD computing. We gain insight from a prudent scheme of
approximating the hypervectors that, thanks to the inherent error resiliency of HD,
has minimal impact on accuracy while providing high prospects for hardware
optimization. In contrast to previous works that generate the encoding
hypervectors in full precision and then quantize them ex post, we compute the
encoding hypervectors in an approximate manner that saves a significant amount
of resources yet affords high accuracy. We also propose a novel FPGA
implementation that achieves striking performance through massive parallelism
with low power consumption. Moreover, we develop a software framework that
enables training HD models by emulating the proposed approximate encodings. The
FPGA implementation of SHEARer achieves an average throughput boost of 104,904x
(15.7x) and energy savings of up to 56,044x (301x) compared to state-of-the-art
encoding methods implemented on Raspberry Pi 3 (GeForce GTX 1080 Ti) using
practical machine learning datasets.
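The encoding pipeline the abstract describes (map each feature to a hypervector, bind it with a position hypervector, bundle by addition) can be sketched in plain Python. The contrast below between full-precision bundling with ex-post quantization and an on-the-fly sign-collapsed accumulation is only illustrative of the approximation idea; the dimensionality, the level/ID scheme, and all names are assumptions, not the paper's exact method.

```python
import random

D = 2048  # hypervector dimensionality (illustrative choice, not from the paper)
random.seed(0)

def random_hv(d=D):
    """Random bipolar hypervector: the standard HD building block."""
    return [random.choice((-1, 1)) for _ in range(d)]

# One level hypervector per quantized feature value and one ID hypervector
# per feature position (a common HD encoding scheme; names are illustrative).
n_features, n_levels = 16, 8
level_hvs = [random_hv() for _ in range(n_levels)]
id_hvs = [random_hv() for _ in range(n_features)]

def encode_exact(features):
    """Baseline: bind each feature's level HV with its ID HV (element-wise
    multiply), then bundle by full-precision addition."""
    enc = [0] * D
    for i, f in enumerate(features):
        lv, idv = level_hvs[f], id_hvs[i]
        for k in range(D):
            enc[k] += lv[k] * idv[k]
    return enc

def encode_approx(features):
    """Approximate encoding in the spirit of SHEARer's insight: collapse
    the running sum to its sign after every bundling step instead of
    keeping full precision and quantizing ex post. (A sketch only, not
    the paper's exact approximation.)"""
    enc = [0] * D
    for i, f in enumerate(features):
        lv, idv = level_hvs[f], id_hvs[i]
        for k in range(D):
            enc[k] += lv[k] * idv[k]
        # reduce each running sum to {-1, 0, +1} after this bundle step
        enc = [(s > 0) - (s < 0) for s in enc]
    return enc
```

Because HD is error resilient, the sign-collapsed encoding stays strongly correlated with the exact one, which is why such approximation costs little accuracy while drastically shrinking the accumulator hardware.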
Related papers
- Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce arithmetic operation energy consumption by at least 224 times on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z)
- Efficient and accurate neural field reconstruction using resistive memory [52.68088466453264]
Traditional signal reconstruction methods on digital computers face both software and hardware challenges.
We propose a systematic approach with software-hardware co-optimizations for signal reconstruction from sparse inputs.
This work advances the AI-driven signal restoration technology and paves the way for future efficient and robust medical AI and 3D vision applications.
arXiv Detail & Related papers (2024-04-15T09:33:09Z)
- An Encoding Framework for Binarized Images using HyperDimensional Computing [0.0]
This article proposes a novel lightweight approach to encode binarized images that preserves the similarity of patterns at nearby locations.
The method reaches an accuracy of 97.35% on the test set for the MNIST data set and 84.12% for the Fashion-MNIST data set.
arXiv Detail & Related papers (2023-12-01T09:34:28Z)
- Sobol Sequence Optimization for Hardware-Efficient Vector Symbolic Architectures [2.022279594514036]
Hyperdimensional computing (HDC) is an emerging computing paradigm with significant promise for efficient and robust learning.
Objects are encoded with high-dimensional vector symbolic sequences called hypervectors.
The quality of hypervectors, defined by their distribution and independence, directly impacts the performance of HDC systems.
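The "distribution and independence" property this entry refers to is the quasi-orthogonality of well-generated hypervectors: pairwise similarities concentrate near zero. The check below uses plain pseudo-random bipolar vectors as a stand-in for a Sobol-based generator (which would need a quasi-Monte Carlo library); all names and sizes are illustrative.

```python
import random

random.seed(1)
D = 4096  # dimensionality (illustrative)

def random_hv(d=D):
    """Random bipolar hypervector (stand-in for a Sobol-driven generator)."""
    return [random.choice((-1, 1)) for _ in range(d)]

def cosine(a, b):
    """Cosine similarity; bipolar HVs have norm sqrt(D), so dot/D suffices."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

hvs = [random_hv() for _ in range(10)]
# Pairwise similarities of independent hypervectors concentrate around 0
# with standard deviation about 1/sqrt(D) -- the independence property
# that determines HDC system quality.
sims = [cosine(hvs[i], hvs[j]) for i in range(10) for j in range(i + 1, 10)]
```

A low-discrepancy generator aims to deliver this same near-orthogonality more cheaply and deterministically in hardware than a true random source.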
arXiv Detail & Related papers (2023-11-17T01:48:07Z)
- uHD: Unary Processing for Lightweight and Dynamic Hyperdimensional Computing [1.7118124088316602]
Hyperdimensional computing (HDC) is a novel computational paradigm that operates on high-dimensional vectors known as hypervectors.
In this paper, we show how to generate intensity and position hypervectors in HDC using low-discrepancy sequences.
For the first time in the literature, our proposed approach employs lightweight vector generators utilizing unary bit-streams for efficient encoding of data.
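A minimal sketch of the idea of deriving unary bit-stream hypervectors from a low-discrepancy sequence, using the classic base-2 van der Corput sequence as the generator. This is an illustrative stand-in, not the paper's construction; the function names and dimensionality are assumptions.

```python
def van_der_corput(i):
    """Base-2 radical inverse: a classic low-discrepancy sequence."""
    x, denom = 0.0, 1.0
    while i:
        denom *= 2.0
        x += (i & 1) / denom
        i >>= 1
    return x

def intensity_hv(value, d=1024):
    """Unary bit-stream hypervector for an intensity in [0, 1]: bit k is 1
    iff the k-th low-discrepancy sample falls below the intensity, so the
    fraction of ones tracks the value with very low discrepancy."""
    return [1 if van_der_corput(k) < value else 0 for k in range(d)]
```

Because the samples are evenly spread, close intensities yield bit-streams that differ in few positions, which is the kind of cheap, deterministic encoding a lightweight vector generator exploits.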
arXiv Detail & Related papers (2023-11-16T06:28:19Z)
- Pruning random resistive memory for optimizing analogue AI [54.21621702814583]
AI models present unprecedented challenges to energy consumption and environmental sustainability.
One promising solution is to revisit analogue computing, a technique that predates digital computing.
Here, we report a universal solution, software-hardware co-design using structural plasticity-inspired edge pruning.
arXiv Detail & Related papers (2023-11-13T08:59:01Z)
- Harnessing Deep Learning and HPC Kernels via High-Level Loop and Tensor Abstractions on CPU Architectures [67.47328776279204]
This work introduces a framework to develop efficient, portable Deep Learning and High Performance Computing kernels.
We decompose the kernel development in two steps: 1) Expressing the computational core using Tensor Processing Primitives (TPPs) and 2) Expressing the logical loops around TPPs in a high-level, declarative fashion.
We demonstrate the efficacy of our approach using standalone kernels and end-to-end workloads that outperform state-of-the-art implementations on diverse CPU platforms.
arXiv Detail & Related papers (2023-04-25T05:04:44Z)
- HDCC: A Hyperdimensional Computing compiler for classification on embedded systems and high-performance computing [58.720142291102135]
This work introduces HDCC, the first open-source compiler that translates high-level descriptions of HDC classification methods into optimized C code.
HDCC is designed like a modern compiler, featuring an intuitive and descriptive input language, an intermediate representation (IR), and a retargetable backend.
To substantiate these claims, we conducted experiments with HDCC on several of the most popular datasets in the HDC literature.
arXiv Detail & Related papers (2023-04-24T19:16:03Z)
- HDTorch: Accelerating Hyperdimensional Computing with GP-GPUs for Design Space Exploration [4.783565770657063]
We introduce HDTorch, an open-source, PyTorch-based HDC library with extensions for hypervector operations.
We analyze four HDC benchmark datasets in terms of accuracy, runtime, and memory consumption.
We perform the first-ever HD training and inference analysis of the entirety of the CHB-MIT EEG epilepsy database.
arXiv Detail & Related papers (2022-06-09T19:46:08Z)
- An Extension to Basis-Hypervectors for Learning from Circular Data in Hyperdimensional Computing [62.997667081978825]
Hyperdimensional Computing (HDC) is a computation framework based on properties of high-dimensional random spaces.
We present a study on basis-hypervector sets, which leads to practical contributions to HDC in general.
We introduce a method to learn from circular data, an important type of information never before addressed in machine learning with HDC.
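One simple way to build a basis-hypervector set for circular data is to give each component a fixed random phase, so that similarity depends only on angular distance and wraps correctly at 2π. This is a sketch of the general idea, not necessarily the paper's construction; all parameters are illustrative.

```python
import math
import random

random.seed(2)
D = 2048  # dimensionality (illustrative)
phases = [random.uniform(0.0, 2 * math.pi) for _ in range(D)]

def circular_hv(theta):
    """Encode an angle so similarity depends only on angular distance:
    component k is cos(phi_k + theta) for a fixed random phase phi_k."""
    return [math.cos(p + theta) for p in phases]

def sim(a, b):
    """Normalized dot product; for this scheme it approximates
    cos(theta1 - theta2) / 2, so theta and theta + 2*pi match."""
    return sum(x * y for x, y in zip(a, b)) / len(a)
```

The wrap-around property is exactly what a linear level-hypervector scheme lacks, which is why circular data needs its own basis set.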
arXiv Detail & Related papers (2022-05-16T18:04:55Z)
- Classification using Hyperdimensional Computing: A Review [16.329917143918028]
This paper introduces the background of HD computing, and reviews the data representation, data transformation, and similarity measurement.
Evaluations indicate that HD computing shows great potential in addressing problems using data in the form of letters, signals and images.
arXiv Detail & Related papers (2020-04-19T23:51:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.