LeHDC: Learning-Based Hyperdimensional Computing Classifier
- URL: http://arxiv.org/abs/2203.09680v1
- Date: Fri, 18 Mar 2022 01:13:58 GMT
- Title: LeHDC: Learning-Based Hyperdimensional Computing Classifier
- Authors: Shijin Duan, Yejia Liu, Shaolei Ren, and Xiaolin Xu
- Abstract summary: We propose a new HDC framework, called LeHDC, which leverages a principled learning approach to improve the model accuracy.
Experimental validation shows that LeHDC outperforms previous HDC training strategies and improves inference accuracy by more than 15% on average.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Thanks to its tiny storage footprint and efficient execution, Hyperdimensional Computing (HDC) is emerging as a lightweight learning framework for resource-constrained hardware. Nonetheless, existing HDC training relies on various heuristic methods, which significantly limits inference accuracy. In this paper, we propose a new HDC framework, called LeHDC, which leverages a principled learning approach to improve model accuracy. Concretely, LeHDC maps the existing HDC framework into an equivalent Binary Neural Network architecture and employs a corresponding training strategy to minimize the training loss. Experimental validation shows that LeHDC outperforms previous HDC training strategies and improves inference accuracy by more than 15% on average compared to the baseline HDC.
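The abstract ships no code, but the HDC-to-BNN mapping it describes can be sketched: treat each binary class hypervector as one row of a sign-binarized linear layer and train it with a straight-through estimator so a standard loss can be minimized. The following is a minimal illustration under assumed hyperparameters (dimension, learning rate, toy data), not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a vanilla HDC classifier's
# class hypervectors viewed as the rows of one binary linear layer,
# trained end-to-end with a straight-through estimator (STE).
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    """Sign-binarize on the forward pass; pass gradients through unchanged."""
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out  # straight-through estimator

class HDCAsBNN(nn.Module):
    """Each row of `weight` plays the role of one class hypervector."""
    def __init__(self, dim=10_000, num_classes=10):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(num_classes, dim))

    def forward(self, encoded):
        # `encoded` is the {-1, +1} hypervector encoding of a sample; the dot
        # product with each binarized row mirrors HDC's similarity lookup.
        return encoded @ BinarizeSTE.apply(self.weight).t()

# One illustrative training step on random stand-in data.
model = HDCAsBNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.sign(torch.randn(32, 10_000))      # fake HDC-encoded batch
y = torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```

At inference time only the binarized rows would be stored, preserving HDC's small-footprint deployment while training behaves like a BNN.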
Related papers
- ICL-TSVD: Bridging Theory and Practice in Continual Learning with Pre-trained Models [103.45785408116146]
Continual learning (CL) aims to train a model that can solve multiple tasks presented sequentially.
Recent CL approaches have achieved strong performance by leveraging large pre-trained models that generalize well to downstream tasks.
However, such methods lack theoretical guarantees, making them prone to unexpected failures.
We bridge this gap by integrating an empirically strong approach into a principled framework, designed to prevent forgetting.
arXiv Detail & Related papers (2024-10-01T12:58:37Z)
- MicroHD: An Accuracy-Driven Optimization of Hyperdimensional Computing Algorithms for TinyML systems [8.54897708375791]
Hyperdimensional computing (HDC) is emerging as a promising AI approach that can effectively target TinyML applications.
Previous works on HDC showed that the standard 10k dimensions of the hyperdimensional space can be reduced to much lower values (a toy dimension-sweep sketch follows this entry).
arXiv Detail & Related papers (2024-03-24T02:45:34Z)
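As a toy illustration of the dimension-reduction idea, the sketch below trains a bare-bones random-projection HDC classifier at shrinking dimensions and compares each accuracy against a 10k-dimensional baseline. The encoding, data, and comparison loop are illustrative assumptions, not MicroHD's actual accuracy-driven optimization.

```python
# Toy sketch of accuracy-driven dimension reduction: evaluate a bare-bones
# random-projection HDC classifier at shrinking dimensions and see how far
# the accuracy drifts from the 10k-dimensional baseline.
import numpy as np

rng = np.random.default_rng(0)

def train_and_eval(dim, x_tr, y_tr, x_te, y_te, num_classes):
    # Random-projection encoding into {-1, +1} hypervectors of size `dim`.
    proj = rng.standard_normal((x_tr.shape[1], dim))
    h_tr, h_te = np.sign(x_tr @ proj), np.sign(x_te @ proj)
    # Class hypervectors: bundle (sum, then sign) training encodings per class.
    classes = np.sign(np.stack(
        [h_tr[y_tr == c].sum(axis=0) for c in range(num_classes)]))
    preds = (h_te @ classes.T).argmax(axis=1)
    return (preds == y_te).mean()

# Random stand-in data (a real TinyML dataset would go here).
x_tr, y_tr = rng.standard_normal((200, 32)), rng.integers(0, 4, 200)
x_te, y_te = rng.standard_normal((50, 32)), rng.integers(0, 4, 50)

baseline = train_and_eval(10_000, x_tr, y_tr, x_te, y_te, num_classes=4)
for dim in (4_000, 1_000, 256, 64):
    acc = train_and_eval(dim, x_tr, y_tr, x_te, y_te, num_classes=4)
    print(f"dim={dim:>6}  acc={acc:.3f}  baseline={baseline:.3f}")
```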
- Unifying Synergies between Self-supervised Learning and Dynamic Computation [53.66628188936682]
We present a novel perspective on the interplay between self-supervised learning (SSL) and dynamic computation (DC) paradigms.
We show that it is feasible to simultaneously learn a dense and a gated sub-network from scratch in an SSL setting.
The co-evolution of the dense and gated encoders during pre-training offers a good accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-01-22T17:12:58Z)
- Online Hyperparameter Optimization for Class-Incremental Learning [99.70569355681174]
Class-incremental learning (CIL) aims to train a classification model while the number of classes increases phase-by-phase.
An inherent challenge of CIL is the stability-plasticity tradeoff: CIL models should stay stable to retain old knowledge yet remain plastic to absorb new knowledge.
We propose an online learning method that can adaptively optimize the tradeoff without knowing the setting a priori.
arXiv Detail & Related papers (2023-01-11T17:58:51Z)
- Hyperdimensional Computing vs. Neural Networks: Comparing Architecture and Learning Process [3.244375684001034]
We present a comparative study of HDC and neural networks, offering a different angle: an HDC model can be derived from an extremely compact neural network trained upfront.
Experimental results show that such a neural-network-derived HDC model can achieve accuracy increases of up to 21% and 5% over conventional and learning-based HDC models, respectively.
arXiv Detail & Related papers (2022-07-24T21:23:50Z)
- GraphHD: Efficient graph classification using hyperdimensional computing [58.720142291102135]
We present a baseline approach for graph classification with HDC.
We evaluate GraphHD on real-world graph classification problems.
Our results show that the proposed model achieves accuracy comparable to state-of-the-art Graph Neural Networks (GNNs).
arXiv Detail & Related papers (2022-05-16T17:32:58Z)
- EnHDC: Ensemble Learning for Brain-Inspired Hyperdimensional Computing [2.7462881838152913]
This paper presents the first effort in exploring ensemble learning in the context of hyperdimensional computing.
We propose the first ensemble HDC model, referred to as EnHDC.
We show that EnHDC achieves an average 3.2% accuracy improvement over a single HDC classifier (a minimal majority-vote sketch follows this entry).
arXiv Detail & Related papers (2022-03-25T09:54:00Z)
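A minimal sketch of the ensembling idea: combine the per-sample predictions of several independently trained HDC classifiers by majority vote. The voting rule here is an assumption for illustration; EnHDC's actual aggregation strategy may differ.

```python
# Hedged sketch of ensembling HDC classifiers: train several base models
# with different random encodings and combine predictions by majority vote.
import numpy as np

def majority_vote(predictions):
    """predictions: (n_models, n_samples) int array -> (n_samples,) votes."""
    return np.apply_along_axis(
        lambda col: np.bincount(col).argmax(), 0, predictions)

# Stand-in per-model predictions from independently seeded HDC classifiers.
preds = np.array([
    [0, 1, 2, 1],   # model 1
    [0, 1, 1, 1],   # model 2
    [2, 1, 2, 0],   # model 3
])
print(majority_vote(preds))  # -> [0 1 2 1]
```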
- Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability [53.27240222619834]
Knowledge Distillation as Efficient Pre-training aims to efficiently transfer the learned feature representation from pre-trained models to new student models for future downstream tasks.
Our method performs comparably with supervised pre-training counterparts across 3 downstream tasks and 9 downstream datasets, while requiring 10x less data and 5x less pre-training time.
arXiv Detail & Related papers (2022-03-10T06:23:41Z)
- A Brain-Inspired Low-Dimensional Computing Classifier for Inference on Tiny Devices [17.976792694929063]
We propose a low-dimensional computing (LDC) alternative to hyperdimensional computing (HDC).
We map our LDC classifier into an equivalent neural network and optimize the model using a principled training approach.
Our LDC classifier offers an overwhelming advantage over the existing brain-inspired HDC models and is particularly suitable for inference on tiny devices.
arXiv Detail & Related papers (2022-03-09T17:20:12Z)
- AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks [78.62086125399831]
We present a general approach called Alternating Compressed/DeCompressed (AC/DC) training of deep neural networks (DNNs).
AC/DC outperforms existing sparse training methods in accuracy at similar computational budgets.
An important property of AC/DC is that it allows co-training of dense and sparse models, yielding accurate sparse-dense model pairs at the end of the training process (a schematic training loop follows this entry).
arXiv Detail & Related papers (2021-06-23T13:23:00Z)
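A schematic sketch of the alternating idea: toggle between a decompressed phase (all weights train) and a compressed phase (small-magnitude weights are zeroed) on a fixed schedule. The schedule, sparsity level, and masking rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Schematic sketch of alternating compressed/decompressed training: on a
# fixed schedule, either zero out small-magnitude weights (compressed phase)
# or train all weights (decompressed phase).
import torch
import torch.nn as nn

model = nn.Linear(128, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sparsity = 0.9  # fraction of weights zeroed in the compressed phase

def magnitude_mask(w, sparsity):
    """1.0 for the largest-magnitude (1 - sparsity) fraction of w, else 0.0."""
    k = max(1, int(w.numel() * (1 - sparsity)))
    thresh = w.abs().flatten().topk(k).values.min()
    return (w.abs() >= thresh).float()

for step in range(1000):
    compressed = (step // 100) % 2 == 1   # alternate phases every 100 steps
    x = torch.randn(32, 128)
    y = torch.randint(0, 10, (32,))
    if compressed:
        with torch.no_grad():
            model.weight.mul_(magnitude_mask(model.weight, sparsity))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The final dense weights plus their masked version approximate the
# sparse-dense model pair mentioned in the abstract.
```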
- HDXplore: Automated Blackbox Testing of Brain-Inspired Hyperdimensional Computing [2.3549478726261883]
HDC is an emerging computing scheme based on the working mechanism of the brain, computing with deep and abstract patterns of neural activity instead of actual numbers.
Compared with traditional ML algorithms such as DNNs, HDC is more memory-centric, granting it advantages such as relatively smaller model size, lower cost, and one-shot learning.
We develop HDXplore, a blackbox differential-testing-based framework that exposes unexpected or incorrect behaviors of HDC models (a minimal disagreement check follows this entry).
arXiv Detail & Related papers (2021-05-26T18:08:52Z)
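The core of differential testing can be shown in a few lines: run the same inputs through independently trained models and flag the indices where predictions disagree as candidate corner cases. This is a hedged sketch; HDXplore additionally perturbs inputs to actively search for such disagreements.

```python
# Hedged sketch of differential testing: feed identical inputs to
# independently trained models and report where their predictions differ.
import numpy as np

def find_disagreements(models, inputs):
    """models: list of predict(inputs) callables -> indices of disagreement."""
    preds = np.stack([m(inputs) for m in models])   # (n_models, n_inputs)
    disagree = (preds != preds[0]).any(axis=0)
    return np.flatnonzero(disagree)

# Toy stand-ins for HDC classifiers trained with different random seeds.
m1 = lambda xs: np.array([0, 1, 1, 2])
m2 = lambda xs: np.array([0, 1, 2, 2])
print(find_disagreements([m1, m2], inputs=np.zeros(4)))  # -> [2]
```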
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.