HEAL: Brain-inspired Hyperdimensional Efficient Active Learning
- URL: http://arxiv.org/abs/2402.11223v1
- Date: Sat, 17 Feb 2024 08:41:37 GMT
- Title: HEAL: Brain-inspired Hyperdimensional Efficient Active Learning
- Authors: Yang Ni, Zhuowen Zou, Wenjun Huang, Hanning Chen, William Youngwoo
Chung, Samuel Cho, Ranganath Krishnan, Pietro Mercati, Mohsen Imani
- Abstract summary: We introduce Hyperdimensional Efficient Active Learning (HEAL), a novel Active Learning framework tailored for HDC classification.
HEAL proactively annotates unlabeled data points via uncertainty and diversity-guided acquisition, leading to a more efficient dataset annotation and lowering labor costs.
Our evaluation shows that HEAL surpasses a diverse set of baselines in AL quality and achieves notably faster acquisition than many BNN-powered or diversity-guided AL methods.
- Score: 13.648600396116539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Drawing inspiration from the outstanding learning capability of our human
brains, Hyperdimensional Computing (HDC) emerges as a novel computing paradigm,
and it leverages high-dimensional vector representations and operations for
brain-like lightweight Machine Learning (ML). Practical deployments of HDC have
significantly enhanced the learning efficiency compared to current deep ML
methods on a broad spectrum of applications. However, boosting the data
efficiency of HDC classifiers in supervised learning remains an open question.
In this paper, we introduce Hyperdimensional Efficient Active Learning (HEAL),
a novel Active Learning (AL) framework tailored for HDC classification. HEAL
proactively annotates unlabeled data points via uncertainty and
diversity-guided acquisition, leading to a more efficient dataset annotation
and lowering labor costs. Unlike conventional AL methods that only support
classifiers built upon deep neural networks (DNN), HEAL operates without the
need for gradient or probabilistic computations. This allows it to be
effortlessly integrated with any existing HDC classifier architecture. The key
design of HEAL is a novel approach for uncertainty estimation in HDC
classifiers through a lightweight HDC ensemble with prior hypervectors.
Additionally, by exploiting hypervectors as prototypes (i.e., compact
representations), we develop an extra metric for HEAL to select diverse samples
within each batch for annotation. Our evaluation shows that HEAL surpasses a
diverse set of baselines in AL quality and achieves notably faster acquisition
than many BNN-powered or diversity-guided AL methods, recording 11 times to
40,000 times speedup in acquisition runtime per batch.
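The acquisition mechanism described in the abstract (uncertainty from a lightweight ensemble's disagreement, diversity from prototype similarity) can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation; the random-projection encoder, the disagreement score, and the weight `lam` are all assumptions:

```python
import numpy as np

D = 2048          # hypervector dimensionality
N_MODELS = 5      # size of the lightweight ensemble

def make_encoder(d_in, seed):
    """Random-projection HDC encoder: sign of a Gaussian projection."""
    proj = np.random.default_rng(seed).standard_normal((d_in, D))
    return lambda x: np.sign(x @ proj)

def train_prototypes(encode, X, y, n_classes):
    """Class prototypes: bundle (sum) the encoded samples of each class."""
    H = encode(X)
    return np.stack([H[y == c].sum(axis=0) for c in range(n_classes)])

def predict(encode, prototypes, X):
    """Classify by highest cosine similarity to a class prototype."""
    H = encode(X)
    sims = H @ prototypes.T
    sims /= np.linalg.norm(H, axis=1, keepdims=True) + 1e-9
    sims /= np.linalg.norm(prototypes, axis=1) + 1e-9
    return sims.argmax(axis=1)

def acquire(X_pool, encoders, protos, batch, lam=0.5):
    """Pick `batch` pool indices by ensemble disagreement (uncertainty),
    greedily favouring points far from those already picked (diversity)."""
    votes = np.stack([predict(e, p, X_pool) for e, p in zip(encoders, protos)])
    # majority vote per pool point, then the fraction of dissenting members
    maj = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    uncertainty = (votes != maj).mean(axis=0)
    H = encoders[0](X_pool)
    H /= np.linalg.norm(H, axis=1, keepdims=True) + 1e-9
    picked = []
    for _ in range(batch):
        if picked:
            div = 1.0 - (H @ H[picked].T).max(axis=1)  # distance to picked set
        else:
            div = np.ones(len(X_pool))
        score = uncertainty + lam * div
        score[picked] = -np.inf                        # never re-pick a point
        picked.append(int(score.argmax()))
    return picked
```

Note that nothing here requires gradients or probabilistic outputs: both scores are computed from hypervector similarities alone, which is what lets this style of acquisition wrap any HDC classifier.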
Related papers
- Evolutionary Optimization of 1D-CNN for Non-contact Respiration Pattern Classification [0.19999259391104385]
We present a deep learning-based approach for time-series respiration data classification.
We employed a 1D convolutional neural network (1D-CNN) for classification purposes.
A genetic algorithm was employed to optimize the 1D-CNN architecture to maximize classification accuracy.
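As a generic illustration of that search strategy (not the paper's search space or fitness, which are tied to respiration data), a genetic algorithm over a small architecture grid might look like:

```python
import random

def ga_search(fitness, space, pop=8, gens=10, rng=None):
    """Maximize `fitness` over dicts drawn from `space` (name -> choices)."""
    rng = rng or random.Random(0)
    keys = list(space)
    sample = lambda: {k: rng.choice(space[k]) for k in keys}
    def mutate(ind):
        child = dict(ind)
        k = rng.choice(keys)
        child[k] = rng.choice(space[k])   # re-draw one gene
        return child
    def crossover(a, b):
        return {k: (a if rng.random() < 0.5 else b)[k] for k in keys}
    population = [sample() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]              # elitist selection
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop - len(parents))]
        population = parents + children
    return max(population, key=fitness)

# Toy run: a stand-in fitness preferring 32 filters and kernel size 7;
# in practice the fitness would be validation accuracy of the trained 1D-CNN.
space = {"filters": [8, 16, 32, 64], "kernel": [3, 5, 7, 9], "layers": [1, 2, 3]}
best = ga_search(lambda a: -abs(a["filters"] - 32) - abs(a["kernel"] - 7), space)
```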
arXiv Detail & Related papers (2023-12-20T13:59:43Z)
- Robust and Scalable Hyperdimensional Computing With Brain-Like Neural Adaptations [17.052624039805856]
Internet of Things (IoT) has facilitated many applications utilizing edge-based machine learning (ML) methods to analyze locally collected data.
Brain-inspired hyperdimensional computing (HDC) has been introduced to address this issue.
Existing HDC methods use static encoders, requiring extremely high dimensionality and hundreds of training iterations to achieve reasonable accuracy.
We present dynamic HDC learning frameworks that identify and regenerate undesired dimensions to provide adequate accuracy with significantly lowered dimensionalities.
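One plausible reading of "identify and regenerate undesired dimensions" is to re-randomize encoder dimensions along which the trained class prototypes barely differ, since those carry little discriminative signal. The sketch below is a hypothetical illustration of that idea, not the paper's method:

```python
import numpy as np

def regenerate_dims(proj, prototypes, frac=0.1, rng=None):
    """Re-draw the least discriminative `frac` of encoder dimensions.

    proj:        (d_in, D) random-projection encoder matrix
    prototypes:  (n_classes, D) trained class prototypes
    """
    rng = rng or np.random.default_rng()
    d_in, D = proj.shape
    # per-dimension discriminative power: variance across class prototypes
    power = prototypes.var(axis=0)
    k = max(1, int(frac * D))
    worst = np.argsort(power)[:k]          # dimensions to regenerate
    new_proj = proj.copy()
    new_proj[:, worst] = rng.standard_normal((d_in, k))
    return new_proj, worst
```

After regeneration the classifier would be retrained so the refreshed dimensions can pick up useful structure, which is how a much lower dimensionality could suffice.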
arXiv Detail & Related papers (2023-11-13T19:42:33Z)
- Efficient Adaptive Human-Object Interaction Detection with Concept-guided Memory [64.11870454160614]
We propose an efficient Adaptive HOI Detector with Concept-guided Memory (ADA-CM)
ADA-CM has two operating modes. The first mode makes it tunable without learning new parameters in a training-free paradigm.
Our proposed method achieves competitive results with state-of-the-art on the HICO-DET and V-COCO datasets with much less training time.
arXiv Detail & Related papers (2023-09-07T13:10:06Z)
- Unifying Synergies between Self-supervised Learning and Dynamic Computation [53.66628188936682]
We present a novel perspective on the interplay between SSL and DC paradigms.
We show that it is feasible to simultaneously learn a dense and gated sub-network from scratch in a SSL setting.
The co-evolution during pre-training of both dense and gated encoder offers a good accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-01-22T17:12:58Z)
- Active Learning Guided by Efficient Surrogate Learners [25.52920030051264]
Re-training a deep learning model each time a single data point receives a new label is impractical.
We introduce a new active learning algorithm that harnesses the power of a Gaussian process surrogate in conjunction with the neural network principal learner.
Our proposed model adeptly updates the surrogate learner for every new data instance, enabling it to emulate and capitalize on the continuous learning dynamics of the neural network.
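The surrogate idea can be sketched with a minimal NumPy Gaussian process: fit it on the labeled set and query the pool point with the largest posterior variance. The RBF kernel and variance-only acquisition are simplifying assumptions, not the paper's exact criterion:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior_var(X_train, X_pool, noise=1e-3, ls=1.0):
    """Posterior variance at each pool point: k(x,x) - k(x,X) K^-1 k(X,x)."""
    K = rbf(X_train, X_train, ls) + noise * np.eye(len(X_train))
    Ks = rbf(X_pool, X_train, ls)
    v = np.linalg.solve(K, Ks.T)
    return 1.0 - (Ks * v.T).sum(axis=1)

def next_query(X_train, X_pool):
    """Query the pool point the surrogate is least certain about."""
    return int(gp_posterior_var(X_train, X_pool).argmax())
```

Because updating a GP with one new point is cheap relative to retraining a deep network, the surrogate can be refreshed after every label, which is the efficiency argument the summary makes.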
arXiv Detail & Related papers (2023-01-07T01:35:25Z)
- CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
arXiv Detail & Related papers (2022-05-02T14:42:05Z)
- EnHDC: Ensemble Learning for Brain-Inspired Hyperdimensional Computing [2.7462881838152913]
This paper presents the first effort in exploring ensemble learning in the context of hyperdimensional computing.
We propose the first ensemble HDC model referred to as EnHDC.
We show that EnHDC can achieve on average 3.2% accuracy improvement over a single HDC classifier.
arXiv Detail & Related papers (2022-03-25T09:54:00Z)
- A Brain-Inspired Low-Dimensional Computing Classifier for Inference on Tiny Devices [17.976792694929063]
We propose a low-dimensional computing (LDC) alternative to hyperdimensional computing (HDC)
We map our LDC classifier into a neural equivalent network and optimize our model using a principled training approach.
Our LDC classifier offers an overwhelming advantage over the existing brain-inspired HDC models and is particularly suitable for inference on tiny devices.
arXiv Detail & Related papers (2022-03-09T17:20:12Z)
- Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize k-nn non-parametric density estimation technique for estimating the unknown probability distributions of the data samples in the output feature space.
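The k-NN density estimate mentioned above is a standard non-parametric technique: the density at a point is inversely proportional to the volume of the ball reaching its k-th nearest neighbor. A minimal version (generic, not the OSAKD code) is:

```python
import numpy as np
from math import gamma, pi

def knn_density(X, queries, k=5):
    """p(x) ~= k / (n * V_d * r_k(x)^d), r_k = distance to k-th neighbour."""
    n, d = X.shape
    unit_ball = pi ** (d / 2) / gamma(d / 2 + 1)   # volume of the unit d-ball
    d2 = ((queries[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    r_k = np.sqrt(np.sort(d2, axis=1)[:, k - 1])   # k-th smallest distance
    return k / (n * unit_ball * np.maximum(r_k, 1e-12) ** d)
```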
arXiv Detail & Related papers (2021-08-26T14:01:04Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
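A rough sketch of the virtual-representation idea (illustrative only; the least-squares head and all names are assumptions, not the paper's code): fit a Gaussian per class over feature vectors, sample virtual features from it, and refit the classifier head on those samples:

```python
import numpy as np

def calibrate_head(features, labels, n_classes, m=200, rng=None):
    """Fit a Gaussian per class, sample m virtual features per class, and
    refit a least-squares linear head on one-hot targets."""
    rng = rng or np.random.default_rng(0)
    virt_X, virt_y = [], []
    for c in range(n_classes):
        Fc = features[labels == c]
        mu = Fc.mean(axis=0)
        cov = np.cov(Fc.T) + 1e-6 * np.eye(Fc.shape[1])  # keep cov well-posed
        virt_X.append(rng.multivariate_normal(mu, cov, size=m))
        virt_y.append(np.full(m, c))
    X = np.vstack(virt_X)
    Y = np.eye(n_classes)[np.concatenate(virt_y)]        # one-hot targets
    Xb = np.hstack([X, np.ones((len(X), 1))])            # bias column
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def head_predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ W).argmax(axis=1)
```

The appeal in a federated setting is that only per-class feature statistics need to be shared, not raw client data, so the server can recalibrate the classifier without seeing any real samples.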
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content and is not responsible for any consequences arising from its use.