A Brain-Inspired Low-Dimensional Computing Classifier for Inference on
Tiny Devices
- URL: http://arxiv.org/abs/2203.04894v1
- Date: Wed, 9 Mar 2022 17:20:12 GMT
- Title: A Brain-Inspired Low-Dimensional Computing Classifier for Inference on
Tiny Devices
- Authors: Shijin Duan, Xiaolin Xu and Shaolei Ren
- Abstract summary: We propose a low-dimensional computing (LDC) alternative to hyperdimensional computing (HDC).
We map our LDC classifier into a neural equivalent network and optimize our model using a principled training approach.
Our LDC classifier offers an overwhelming advantage over the existing brain-inspired HDC models and is particularly suitable for inference on tiny devices.
- Score: 17.976792694929063
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: By mimicking brain-like cognition and exploiting parallelism,
hyperdimensional computing (HDC) classifiers have been emerging as a
lightweight framework to achieve efficient on-device inference. Nonetheless,
they have two fundamental drawbacks: a heuristic training process and an
ultra-high dimension, which result in sub-optimal inference accuracy and large
model sizes
beyond the capability of tiny devices with stringent resource constraints. In
this paper, we address these fundamental drawbacks and propose a
low-dimensional computing (LDC) alternative. Specifically, by mapping our LDC
classifier into an equivalent neural network, we optimize our model using a
principled training approach. Most importantly, we can improve the inference
accuracy while successfully reducing the ultra-high dimension of existing HDC
models by orders of magnitude (e.g., 8000 vs. 4/64). We run experiments to
evaluate our LDC classifier by considering different datasets for inference on
tiny devices, and also implement different models on an FPGA platform for
acceleration. The results highlight that our LDC classifier offers an
overwhelming advantage over the existing brain-inspired HDC models and is
particularly suitable for inference on tiny devices.
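To make the idea concrete, below is a minimal numpy sketch of LDC-style inference: a small binary encoding matrix maps an input to a D-dimensional +/-1 vector (with D on the order of 4-64 rather than ~8000), and classification picks the most similar binary class vector. The dimensions and the random parameters here are placeholders; in the paper, these binary parameters are learned by training the equivalent neural network, which is not shown here.

```python
# A minimal sketch of a low-dimensional computing (LDC) style classifier,
# assuming binary {-1, +1} vectors and dot-product similarity; this is an
# illustration of the general idea, not the authors' exact construction.
import numpy as np

rng = np.random.default_rng(0)

D, F, C = 64, 784, 10  # low dimension D (vs. ~8000 in typical HDC), F features, C classes

# In the paper's approach these binary parameters would be *learned* by
# training an equivalent neural network (e.g., with a straight-through
# estimator); here we draw them at random purely for illustration.
W_enc = rng.choice([-1.0, 1.0], size=(D, F))       # binary feature-encoding matrix
class_vecs = rng.choice([-1.0, 1.0], size=(C, D))  # binary class vectors

def encode(x):
    """Map an F-dimensional input to a D-dimensional +/-1 vector."""
    return np.sign(W_enc @ x + 1e-9)  # tiny offset avoids sign(0) = 0

def classify(x):
    """Predict the class whose binary vector is most similar (max dot product,
    equivalently min Hamming distance for +/-1 vectors)."""
    return int(np.argmax(class_vecs @ encode(x)))

x = rng.standard_normal(F)
print("predicted class:", classify(x))
```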
Related papers
- Adaptable Embeddings Network (AEN) [49.1574468325115]
We introduce the Adaptable Embeddings Network (AEN), a novel dual-encoder architecture using Kernel Density Estimation (KDE).
AEN allows for runtime adaptation of classification criteria without retraining and is non-autoregressive.
The architecture's ability to preprocess and cache condition embeddings makes it ideal for edge computing applications and real-time monitoring systems.
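As a rough illustration of the KDE-plus-cached-embeddings idea, here is a hedged numpy sketch; the `embed` function (a fixed random projection), the bandwidth, and the class names are stand-ins, not the paper's architecture.

```python
# A hedged sketch of KDE-based classification over cached condition
# embeddings, in the spirit of AEN's dual-encoder design; `embed` is a
# stand-in (a random projection), not the paper's learned encoder.
import numpy as np

rng = np.random.default_rng(1)
DIM = 16
proj = rng.standard_normal((DIM, 100))

def embed(x):
    # Hypothetical encoder: in AEN this would be a learned network.
    v = proj @ x
    return v / np.linalg.norm(v)

# Cache condition embeddings per class once; runtime adaptation means these
# cached sets can be swapped without retraining the encoder.
conditions = {
    "class_a": np.stack([embed(rng.standard_normal(100)) for _ in range(20)]),
    "class_b": np.stack([embed(rng.standard_normal(100)) for _ in range(20)]),
}

def kde_score(q, samples, bandwidth=0.5):
    """Gaussian KDE density of query q under a set of cached embeddings."""
    d2 = np.sum((samples - q) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * bandwidth ** 2)))

def classify(x):
    q = embed(x)
    return max(conditions, key=lambda c: kde_score(q, conditions[c]))

print(classify(rng.standard_normal(100)))
```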
arXiv Detail & Related papers (2024-11-21T02:15:52Z)
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- Scheduled Knowledge Acquisition on Lightweight Vector Symbolic Architectures for Brain-Computer Interfaces [18.75591257735207]
Classical feature engineering is computationally efficient but has low accuracy, whereas recent deep neural networks (DNNs) improve accuracy but are computationally expensive and incur high latency.
As a promising alternative, the low-dimensional computing (LDC) classifier based on vector symbolic architecture (VSA) achieves a small model size yet higher accuracy than classical feature engineering methods.
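For readers unfamiliar with VSA, the following toy sketch shows the basic operations such a classifier builds on: binding by element-wise XOR, bundling by bitwise majority vote, and Hamming similarity. The dimension and vectors are illustrative placeholders.

```python
# A minimal sketch of the vector symbolic architecture (VSA) operations an
# LDC/HDC classifier builds on. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
D = 64  # LDC uses a low dimension; classical HDC would use ~10,000

def rand_hv():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):          # associate two vectors (XOR is its own inverse)
    return a ^ b

def bundle(vs):          # superpose vectors by bitwise majority vote
    return (np.sum(vs, axis=0) * 2 > len(vs)).astype(np.uint8)

def hamming_sim(a, b):   # similarity = fraction of matching bits
    return 1.0 - np.mean(a != b)

# Encode a sample: bind each feature id to its value, then bundle.
feature_ids = [rand_hv() for _ in range(3)]
values = [rand_hv() for _ in range(3)]
sample = bundle([bind(f, v) for f, v in zip(feature_ids, values)])
print("similarity to a random vector:", hamming_sim(sample, rand_hv()))
```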
arXiv Detail & Related papers (2024-03-18T01:06:29Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST).
IST is a recently proposed and highly effective technique for reducing the communication and memory costs of distributed training.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
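As a rough single-round illustration of IST (not the paper's analysis or exact algorithm), the numpy toy below partitions the hidden units of a one-hidden-layer network across two "workers", trains each subnetwork independently with no gradient exchange, and reassembles the pieces; real IST repartitions and resynchronizes over many rounds.

```python
# A toy single-round sketch of Independent Subnetwork Training (IST).
# Illustrative only; real IST alternates partitions and synchronization.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden, n_workers, lr = 8, 16, 2, 0.05
X = rng.standard_normal((128, n_in))
y = X @ rng.standard_normal(n_in)            # toy regression target

W1 = rng.standard_normal((n_hidden, n_in)) * 0.1
w2 = rng.standard_normal(n_hidden) * 0.1
parts = np.array_split(rng.permutation(n_hidden), n_workers)

for g in parts:                              # each worker trains independently
    W1g, w2g = W1[g].copy(), w2[g].copy()
    for _ in range(200):                     # local full-batch SGD steps
        H = np.maximum(X @ W1g.T, 0.0)       # ReLU activations of this subnet
        err = H @ w2g - y                    # residual of subnet prediction
        grad_w2 = H.T @ err / len(X)
        grad_W1 = ((np.outer(err, w2g) * (H > 0)).T @ X) / len(X)
        w2g -= lr * grad_w2
        W1g -= lr * grad_W1
    W1[g], w2[g] = W1g, w2g                  # write the trained piece back

# Reassembled model: average subnetwork outputs so contributions don't stack.
pred = np.maximum(X @ W1.T, 0.0) @ w2 / n_workers
print("MSE after one IST round:", np.mean((pred - y) ** 2))
```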
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- EnHDC: Ensemble Learning for Brain-Inspired Hyperdimensional Computing [2.7462881838152913]
This paper presents the first effort in exploring ensemble learning in the context of hyperdimensional computing.
We propose the first ensemble HDC model, referred to as EnHDC.
We show that EnHDC can achieve on average 3.2% accuracy improvement over a single HDC classifier.
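The aggregation step can be sketched in a few lines: several HDC sub-models with independently drawn base vectors each predict a class, and the ensemble takes a majority vote. The sub-models below are untrained random placeholders; only the voting logic is the point.

```python
# A minimal sketch of ensembling HDC classifiers in the spirit of EnHDC.
import numpy as np

rng = np.random.default_rng(4)
D, F, C, N = 1024, 32, 4, 5   # dimension, features, classes, ensemble size

class HDCModel:
    def __init__(self):
        self.enc = rng.choice([-1.0, 1.0], size=(D, F))  # base/encoding vectors
        self.cls = rng.choice([-1.0, 1.0], size=(C, D))  # class hypervectors
    def predict(self, x):
        h = np.sign(self.enc @ x + 1e-9)
        return int(np.argmax(self.cls @ h))

ensemble = [HDCModel() for _ in range(N)]

def predict(x):
    # Majority vote across sub-models.
    votes = np.bincount([m.predict(x) for m in ensemble], minlength=C)
    return int(np.argmax(votes))

print("ensemble vote:", predict(rng.standard_normal(F)))
```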
arXiv Detail & Related papers (2022-03-25T09:54:00Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated seamlessly with neural networks.
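One way to picture the randomization (a hedged sketch, not the paper's exact estimators) is Viterbi on a linear chain where each step keeps only a random subset of states, so cost scales with the subset size rather than the full state space:

```python
# A hedged sketch of randomizing a classical DP: subsampled Viterbi.
import numpy as np

rng = np.random.default_rng(5)
S, T, K = 10_000, 6, 64                  # states, sequence length, sampled states/step
log_emit = rng.standard_normal((T, S))   # toy log-potentials

def subsampled_viterbi(log_emit, k):
    prev_states = rng.choice(S, size=k, replace=False)
    score = log_emit[0, prev_states]
    for t in range(1, T):
        states = rng.choice(S, size=k, replace=False)
        # Toy transition score favoring nearby state indices; a stand-in for a
        # real transition matrix, which would be too big to materialize.
        trans = -np.abs(states[None, :] - prev_states[:, None]) / S
        score = np.max(score[:, None] + trans, axis=0) + log_emit[t, states]
        prev_states = states
    return float(np.max(score))

print("approx. best path score:", subsampled_viterbi(log_emit, K))
```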
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- An Efficient and Scalable Collection of Fly-inspired Voting Units for Visual Place Recognition in Changing Environments [20.485491385050615]
Low-overhead VPR techniques would enable visual place recognition on platforms equipped with low-end, cheap hardware.
Our goal is to provide an algorithm of extreme compactness and efficiency while achieving state-of-the-art robustness to appearance changes and small point-of-view variations.
arXiv Detail & Related papers (2021-09-22T19:01:20Z)
- Memory and Computation-Efficient Kernel SVM via Binary Embedding and Ternary Model Coefficients [18.52747917850984]
Kernel approximation is widely used to scale up kernel SVM training and prediction.
Memory and computation costs of kernel approximation models are still too high if we want to deploy them on memory-limited devices.
We propose a novel memory and computation-efficient kernel SVM model by using both binary embedding and ternary model coefficients.
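A minimal sketch of the two named ingredients follows: a binary embedding obtained as the sign of a random projection (a classic approximation to an angle-based kernel), and ternary {-1, 0, +1} coefficients so inference needs only additions and subtractions. The coefficients below are randomly ternarized rather than learned, purely for illustration.

```python
# A hedged sketch of binary embedding + ternary model coefficients.
import numpy as np

rng = np.random.default_rng(6)
F, D = 128, 512                     # input features, embedding bits

W = rng.standard_normal((D, F))     # random projection for the embedding

def binary_embed(x):
    return np.sign(W @ x + 1e-9)    # each component costs 1 bit of storage

# Ternary coefficients (learned in the paper; random here).
coef = rng.choice([-1.0, 0.0, 1.0], size=D, p=[0.25, 0.5, 0.25])
bias = 0.0

def decision(x):
    # Ternary coefficients turn the dot product into additions/subtractions
    # over a binary vector: no floating-point multiplies at inference.
    return float(coef @ binary_embed(x) + bias)

print("decision value:", decision(rng.standard_normal(F)))
```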
arXiv Detail & Related papers (2020-10-06T09:41:54Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
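A bare-bones numpy sketch of the underlying primitive, truncated max-product BP on a chain (max-sum in log space, a fixed number of sweeps), is shown below; the actual BP-Layer is differentiable and considerably more elaborate.

```python
# Truncated max-product belief propagation on a chain MRF: only a fixed
# number of message-passing sweeps is run, the truncation that makes it
# cheap enough to use as a network layer. Illustrative only.
import numpy as np

rng = np.random.default_rng(7)
T, S, SWEEPS = 8, 5, 2                     # chain length, labels, truncated sweeps
unary = rng.standard_normal((T, S))        # log unary potentials
pair = -0.5 * np.abs(np.subtract.outer(np.arange(S), np.arange(S)))  # smoothness

msg = np.zeros((T, S))                     # message into each node from the left
for _ in range(SWEEPS):
    prev = msg.copy()                      # Jacobi-style update: truncation matters
    for t in range(1, T):
        # max-product update (max-sum in log space)
        msg[t] = np.max((unary[t - 1] + prev[t - 1])[:, None] + pair, axis=0)

beliefs = unary + msg
print("labeling after truncated BP:", np.argmax(beliefs, axis=1))
```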
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
- On Coresets for Support Vector Machines [61.928187390362176]
A coreset is a small, representative subset of the original data points.
We show that our algorithm can be used to extend the applicability of any off-the-shelf SVM solver to streaming, distributed, and dynamic data settings.
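As a hedged sketch of the workflow, the snippet below draws a small weighted subset by importance sampling, using distance to the class mean as a crude sensitivity proxy (a stand-in for the paper's actual sensitivity bounds); the weighted subset could then be passed to any off-the-shelf SVM solver via sample weights.

```python
# A hedged sketch of the coreset idea: importance-sample a small weighted
# subset and hand it to any off-the-shelf SVM solver.
import numpy as np

rng = np.random.default_rng(8)
X = rng.standard_normal((5000, 10))
y = np.where(X[:, 0] + 0.1 * rng.standard_normal(5000) >= 0, 1, -1)

def coreset(X, y, m):
    # Sensitivity proxy: points far from their class mean matter more.
    sens = np.empty(len(X))
    for c in (-1, 1):
        idx = np.where(y == c)[0]
        sens[idx] = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
    p = sens / sens.sum()
    pick = rng.choice(len(X), size=m, replace=True, p=p)
    weights = 1.0 / (m * p[pick])          # unbias the subsample
    return X[pick], y[pick], weights

Xc, yc, wc = coreset(X, y, 200)
print("coreset size:", len(Xc), "| total weight ~ n:", round(wc.sum()))
# Xc, yc, wc can now be fed to e.g. sklearn.svm.LinearSVC via sample_weight.
```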
arXiv Detail & Related papers (2020-02-15T23:25:12Z)