HDTorch: Accelerating Hyperdimensional Computing with GP-GPUs for Design Space Exploration
- URL: http://arxiv.org/abs/2206.04746v1
- Date: Thu, 9 Jun 2022 19:46:08 GMT
- Title: HDTorch: Accelerating Hyperdimensional Computing with GP-GPUs for Design Space Exploration
- Authors: William Andrew Simon, Una Pale, Tomas Teijeiro, David Atienza
- Abstract summary: We introduce HDTorch, an open-source, PyTorch-based HDC library with CUDA extensions for hypervector operations.
We analyze four HDC benchmark datasets in terms of accuracy, runtime, and memory consumption.
We perform the first-ever HD training and inference analysis of the entirety of the CHB-MIT EEG epilepsy database.
- Score: 4.783565770657063
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: HyperDimensional Computing (HDC) is a machine learning paradigm well
suited to applications involving continuous, semi-supervised learning for
long-term monitoring. However, its accuracy is not yet on par with other
Machine Learning (ML) approaches. Frameworks enabling fast design space
exploration to find practical algorithms are necessary to make HD computing
competitive with other ML techniques. To this end, we introduce HDTorch, an
open-source, PyTorch-based HDC library with CUDA extensions for hypervector
operations. We demonstrate HDTorch's utility by analyzing four HDC benchmark
datasets in terms of accuracy, runtime, and memory consumption, utilizing both
classical and online HD training methodologies. We demonstrate average
training speedups of 111x and 68x for classical and online HD, respectively,
and an average inference speedup of 87x. Moreover, we analyze the effects of varying hyperparameters on
runtime and accuracy. Finally, we demonstrate how HDTorch enables exploration
of HDC strategies applied to large, real-world datasets. We perform the
first-ever HD training and inference analysis of the entirety of the CHB-MIT
EEG epilepsy database. Results show that the typical approach of training on a
subset of the data does not necessarily generalize to the entire dataset, an
important factor when developing future HD models for medical wearable devices.
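To make the hypervector operations such a library accelerates concrete, below is a minimal sketch of classical HD training and inference in plain PyTorch. This is our own illustration, not HDTorch's actual API; the names (`D`, `encode`, `train_classical`, `infer`) and the sign-of-random-projection encoding are assumptions chosen for brevity.

```python
import torch

# Minimal classical HD training/inference in plain PyTorch (illustration
# only; not HDTorch's actual API). Encoding: sign of a random projection,
# one common choice among several HDC encodings.

D = 10_000  # hypervector dimensionality

def encode(x, proj):
    # (batch, features) -> (batch, D) bipolar hypervectors
    return torch.sign(x @ proj)

def train_classical(x, y, num_classes, proj):
    # Single-pass "classical" training: bundle (sum) every encoded
    # sample into its class hypervector.
    hv = encode(x, proj)
    classes = torch.zeros(num_classes, D)
    classes.index_add_(0, y, hv)
    return classes

def infer(x, classes, proj):
    # Predict the class whose hypervector is most similar (cosine).
    hv = encode(x, proj)
    hv = hv / hv.norm(dim=1, keepdim=True)
    cls = classes / classes.norm(dim=1, keepdim=True)
    return (hv @ cls.T).argmax(dim=1)

# Toy usage; move the tensors to .cuda() to exploit the GPU.
torch.manual_seed(0)
proj = torch.randn(64, D)
x, y = torch.randn(128, 64), torch.randint(0, 2, (128,))
classes = train_classical(x, y, num_classes=2, proj=proj)
preds = infer(x, classes, proj)
```

Online HD training instead updates class hypervectors adaptively during the pass (e.g., only on mispredicted samples), but it relies on the same bundling and similarity primitives, which is why both methodologies benefit from GPU-accelerated tensor operations.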
Related papers
- Enabling High Data Throughput Reinforcement Learning on GPUs: A Domain Agnostic Framework for Data-Driven Scientific Research [90.91438597133211]
We introduce WarpSci, a framework designed to overcome crucial system bottlenecks in the application of reinforcement learning.
We eliminate the need for data transfer between the CPU and GPU, enabling the concurrent execution of thousands of simulations.
arXiv Detail & Related papers (2024-08-01T21:38:09Z)
- FLEdge: Benchmarking Federated Machine Learning Applications in Edge Computing Systems [61.335229621081346]
Federated Learning (FL) has become a viable technique for realizing privacy-enhancing distributed deep learning on the network edge.
In this paper, we propose FLEdge, which complements existing FL benchmarks by enabling a systematic evaluation of client capabilities.
arXiv Detail & Related papers (2023-06-08T13:11:20Z)
- HDCC: A Hyperdimensional Computing compiler for classification on embedded systems and high-performance computing [58.720142291102135]
This work introduces HDCC, the first open-source compiler that translates high-level descriptions of HDC classification methods into optimized C code.
HDCC is designed like a modern compiler, featuring an intuitive and descriptive input language, an intermediate representation (IR), and a retargetable backend.
To substantiate these claims, we conducted experiments with HDCC on several of the most popular datasets in the HDC literature.
arXiv Detail & Related papers (2023-04-24T19:16:03Z)
- Streaming Encoding Algorithms for Scalable Hyperdimensional Computing [12.829102171258882]
Hyperdimensional computing (HDC) is a paradigm for data representation and learning originating in computational neuroscience.
In this work, we explore a family of streaming encoding techniques based on hashing.
We show formally that these methods enjoy comparable guarantees on performance for learning applications while being substantially more efficient than existing alternatives.
arXiv Detail & Related papers (2022-09-20T17:25:14Z)
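To make the hashing idea concrete, here is a minimal sketch (our own, not the paper's exact construction) of a streaming encoder that derives each token's hypervector on demand from a hash-seeded generator instead of a stored codebook, so memory stays O(D) regardless of vocabulary or stream length:

```python
import hashlib
import torch

# Hash-based streaming encoding sketch: no basis-hypervector table is
# stored; each token's bipolar hypervector is regenerated from a seed
# derived by hashing the token. Illustration only.

D = 10_000

def token_hv(token: str) -> torch.Tensor:
    # Deterministic seed from a stable hash of the token.
    seed = int.from_bytes(hashlib.sha256(token.encode()).digest()[:4], "little")
    g = torch.Generator().manual_seed(seed)
    return torch.randint(0, 2, (D,), generator=g) * 2 - 1

def encode_stream(tokens) -> torch.Tensor:
    # Bundle token hypervectors one at a time: O(D) memory for any
    # stream length, with no precomputed codebook.
    acc = torch.zeros(D, dtype=torch.long)
    for t in tokens:
        acc += token_hv(t)
    return torch.sign(acc)

hv = encode_stream(["the", "cat", "sat", "the"])
```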
- An Extension to Basis-Hypervectors for Learning from Circular Data in Hyperdimensional Computing [62.997667081978825]
Hyperdimensional Computing (HDC) is a computation framework based on properties of high-dimensional random spaces.
We present a study on basis-hypervector sets, which leads to practical contributions to HDC in general.
We introduce a method to learn from circular data, an important type of information never before addressed in machine learning with HDC.
arXiv Detail & Related papers (2022-05-16T18:04:55Z)
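The circular-data problem can be made concrete with a toy construction (our own illustration, not the paper's method): give each dimension a random phase and flip its sign over half the circle, so expected similarity decays with circular rather than linear distance and wraps around:

```python
import math
import torch

# Toy circular basis-hypervector set: K levels on a circle whose pairwise
# similarity depends only on circular distance. Our illustration, not the
# paper's construction.

D, K = 10_000, 24  # dimensionality; K circular levels (e.g. hour-of-day)
torch.manual_seed(0)
base = torch.randint(0, 2, (D,)) * 2 - 1  # random bipolar base vector
phase = torch.rand(D) * 2 * math.pi       # random phase per dimension

def circular_hv(k: int) -> torch.Tensor:
    # Each dimension keeps base's sign over half the circle (centered at
    # its random phase) and flips it over the other half, so expected
    # similarity decays linearly with circular distance and wraps around.
    theta = 2 * math.pi * (k % K) / K
    return base * torch.sign(torch.cos(theta - phase)).long()

def sim(a: int, b: int) -> float:
    return (circular_hv(a) * circular_hv(b)).float().mean().item()

# Wrap-around: hours 23 and 1 are as similar as 0 and 2; 0 and 12 oppose.
print(sim(23, 1), sim(0, 2), sim(0, 12))
```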
- GraphHD: Efficient graph classification using hyperdimensional computing [58.720142291102135]
We present a baseline approach for graph classification with HDC.
We evaluate GraphHD on real-world graph classification problems.
Our results show that when compared to the state-of-the-art Graph Neural Networks (GNNs) the proposed model achieves comparable accuracy.
arXiv Detail & Related papers (2022-05-16T17:32:58Z)
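A minimal sketch of the bind-and-bundle graph encoding this line of work builds on (our simplification; GraphHD's actual vertex-assignment and encoding details differ):

```python
import torch

# Generic HDC graph encoding sketch: bind (elementwise multiply) the
# hypervectors of each edge's endpoints, then bundle (sum) all edge codes
# into one graph hypervector. Illustration only.

D = 10_000
torch.manual_seed(0)
vertex_hv = torch.randint(0, 2, (4, D)) * 2 - 1  # shared vertex codebook

def encode_graph(edges) -> torch.Tensor:
    acc = torch.zeros(D, dtype=torch.long)
    for u, v in edges:
        acc += vertex_hv[u] * vertex_hv[v]  # bind endpoints
    return torch.sign(acc)  # bundled, bipolar graph hypervector

path = encode_graph([(0, 1), (1, 2), (2, 3)])
cycle = encode_graph([(0, 1), (1, 2), (2, 3), (3, 0)])
print((path * cycle).float().mean())  # graphs sharing most edges are similar
```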
- HDC-MiniROCKET: Explicit Time Encoding in Time Series Classification with Hyperdimensional Computing [14.82489178857542]
MiniROCKET is one of the best existing methods for time series classification.
We extend this approach to provide better global temporal encodings using hyperdimensional computing (HDC) mechanisms.
The extension with HDC can achieve considerably better results on datasets with high temporal dependence without increasing the computational effort for inference.
arXiv Detail & Related papers (2022-02-16T13:33:13Z)
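The effect of explicit time encoding is easy to demonstrate with a generic HDC sketch (ours; HDC-MiniROCKET applies the idea to MiniROCKET features with a graded notion of timestamp similarity): binding each timestep's value code to a timestamp hypervector makes otherwise order-invariant bundling order-sensitive.

```python
import torch

# Why explicit time encoding matters: plain bundling is order-invariant,
# but binding each value to a timestamp hypervector makes the encoding
# order-sensitive. Generic illustration only.

D, T = 10_000, 50
torch.manual_seed(0)
value_hv = torch.randint(0, 2, (2, D)) * 2 - 1  # codebook: two symbols
time_hv = torch.randint(0, 2, (T, D)) * 2 - 1   # one hypervector per step

def encode_series(series) -> torch.Tensor:
    acc = torch.zeros(D, dtype=torch.long)
    for t, v in enumerate(series):
        acc += value_hv[v] * time_hv[t]  # bind value to its timestep
    return torch.sign(acc)

a = encode_series([0] * 25 + [1] * 25)
b = encode_series([1] * 25 + [0] * 25)  # same values, reversed order
print((a * b).float().mean())  # near 0: the two orderings are separable
```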
- SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Basic cross-platform tensor frameworks and script language engines do not by themselves supply the procedures and pipelines needed to deploy machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports these requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- SHEARer: Highly-Efficient Hyperdimensional Computing by Software-Hardware Enabled Multifold Approximation [7.528764144503429]
We propose SHEARer, an algorithm-hardware co-optimization to improve the performance and energy consumption of HD computing.
SHEARer achieves an average throughput boost of 104,904x (15.7x) and energy savings of up to 56,044x (301x) compared to state-of-the-art encoding methods.
We also develop a software framework that enables training HD models by emulating the proposed approximate encodings.
arXiv Detail & Related papers (2020-07-20T07:58:44Z)
- Classification using Hyperdimensional Computing: A Review [16.329917143918028]
This paper introduces the background of HD computing, and reviews the data representation, data transformation, and similarity measurement.
Evaluations indicate that HD computing shows great potential in addressing problems using data in the form of letters, signals and images.
arXiv Detail & Related papers (2020-04-19T23:51:44Z)
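The similarity-measurement ingredient the review covers can be summarized in a few lines; the sketch below (ours) shows the two standard measures: normalized Hamming overlap for binary hypervectors and cosine similarity for bipolar/integer ones.

```python
import torch

# The standard HDC similarity measures in miniature (our sketch):
# normalized Hamming similarity for {0,1} hypervectors and cosine
# similarity for bipolar/integer hypervectors.

D = 10_000
torch.manual_seed(0)

def hamming_sim(a: torch.Tensor, b: torch.Tensor) -> float:
    # Fraction of matching components; 1 - normalized Hamming distance.
    return float((a == b).float().mean())

def cosine_sim(a: torch.Tensor, b: torch.Tensor) -> float:
    a, b = a.float(), b.float()
    return float((a * b).sum() / (a.norm() * b.norm()))

x = torch.randint(0, 2, (D,))
y = torch.randint(0, 2, (D,))
print(hamming_sim(x, y))                 # ~0.5 for random binary HVs
print(cosine_sim(x * 2 - 1, y * 2 - 1))  # ~0.0: random HVs quasi-orthogonal
```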
This list is automatically generated from the titles and abstracts of the papers on this site.