HEAL: Brain-inspired Hyperdimensional Efficient Active Learning
- URL: http://arxiv.org/abs/2402.11223v1
- Date: Sat, 17 Feb 2024 08:41:37 GMT
- Title: HEAL: Brain-inspired Hyperdimensional Efficient Active Learning
- Authors: Yang Ni, Zhuowen Zou, Wenjun Huang, Hanning Chen, William Youngwoo
Chung, Samuel Cho, Ranganath Krishnan, Pietro Mercati, Mohsen Imani
- Abstract summary: We introduce Hyperdimensional Efficient Active Learning (HEAL), a novel Active Learning framework tailored for HDC classification.
HEAL proactively annotates unlabeled data points via uncertainty and diversity-guided acquisition, leading to a more efficient dataset annotation and lowering labor costs.
Our evaluation shows that HEAL surpasses a diverse set of baselines in AL quality and achieves notably faster acquisition than many BNN-powered or diversity-guided AL methods.
- Score: 13.648600396116539
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Drawing inspiration from the outstanding learning capability of our human
brains, Hyperdimensional Computing (HDC) emerges as a novel computing paradigm,
and it leverages high-dimensional vector presentation and operations for
brain-like lightweight Machine Learning (ML). Practical deployments of HDC have
significantly enhanced the learning efficiency compared to current deep ML
methods on a broad spectrum of applications. However, boosting the data
efficiency of HDC classifiers in supervised learning remains an open question.
In this paper, we introduce Hyperdimensional Efficient Active Learning (HEAL),
a novel Active Learning (AL) framework tailored for HDC classification. HEAL
proactively annotates unlabeled data points via uncertainty and
diversity-guided acquisition, leading to a more efficient dataset annotation
and lowering labor costs. Unlike conventional AL methods that only support
classifiers built upon deep neural networks (DNN), HEAL operates without the
need for gradient or probabilistic computations. This allows it to be
effortlessly integrated with any existing HDC classifier architecture. The key
design of HEAL is a novel approach for uncertainty estimation in HDC
classifiers through a lightweight HDC ensemble with prior hypervectors.
Additionally, by exploiting hypervectors as prototypes (i.e., compact
representations), we develop an extra metric for HEAL to select diverse samples
within each batch for annotation. Our evaluation shows that HEAL surpasses a
diverse set of baselines in AL quality and achieves notably faster acquisition
than many BNN-powered or diversity-guided AL methods, recording 11 times to
40,000 times speedup in acquisition runtime per batch.
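As a rough illustration of the acquisition idea described in the abstract (not the paper's exact algorithm), the sketch below builds a small ensemble of HDC-style prototype classifiers perturbed by random "prior" vectors, scores unlabeled samples by ensemble vote entropy (no gradients or probabilistic models needed), and greedily filters each batch for diversity. All dimensions, thresholds, and the perturbation scheme are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000       # hypervector dimensionality (illustrative)
N_CLASSES = 3
N_MODELS = 5   # ensemble size (hypothetical)

# Hypothetical ensemble: each member holds one prototype hypervector per
# class, perturbed by a random "prior" component to induce disagreement.
base_protos = rng.standard_normal((N_CLASSES, D))
ensemble = [base_protos + 0.1 * rng.standard_normal((N_CLASSES, D))
            for _ in range(N_MODELS)]

def predict(protos, x):
    # HDC-style classification: cosine similarity to each class prototype
    sims = protos @ x / (np.linalg.norm(protos, axis=1) * np.linalg.norm(x))
    return int(np.argmax(sims))

def vote_entropy(x):
    # Uncertainty = disagreement among ensemble members (vote entropy)
    votes = np.bincount([predict(p, x) for p in ensemble],
                        minlength=N_CLASSES)
    probs = votes / votes.sum()
    nz = probs[probs > 0]
    return float(-(nz * np.log(nz)).sum())

def select_batch(pool, k):
    # Rank the pool by uncertainty, then greedily keep mutually distant
    # samples (a crude stand-in for the prototype-based diversity metric)
    ranked = sorted(pool, key=vote_entropy, reverse=True)
    batch = []
    for x in ranked:
        if all(np.linalg.norm(x - b) > 1.0 for b in batch):
            batch.append(x)
        if len(batch) == k:
            break
    return batch

pool = [rng.standard_normal(D) for _ in range(50)]
batch = select_batch(pool, k=5)
print(len(batch))  # number of selected samples
```

Because the acquisition uses only dot products and vote counts, it avoids the backward passes and posterior sampling that make BNN-based acquisition expensive.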
Related papers
- Long-tailed Medical Diagnosis with Relation-aware Representation Learning and Iterative Classifier Calibration [14.556686415877602]
We propose a new Long-tailed Medical Diagnosis (LMD) framework for balanced medical image classification on long-tailed datasets.
Our framework significantly surpasses state-of-the-art approaches.
arXiv Detail & Related papers (2025-02-05T14:57:23Z)
- Artificial Liver Classifier: A New Alternative to Conventional Machine Learning Models [4.395397502990339]
This paper introduces the Artificial Liver Classifier (ALC), a novel supervised learning classifier inspired by the human liver's detoxification function.
The ALC is characterized by its simplicity, speed, hyperparameter-free design, ability to reduce overfitting, and effectiveness in addressing multi-classification problems.
It was evaluated on five benchmark machine learning datasets: Iris Flower, Breast Cancer Wisconsin, Wine, Voice Gender, and MNIST.
arXiv Detail & Related papers (2025-01-14T12:42:01Z)
- Active Data Curation Effectively Distills Large-Scale Multimodal Models [66.23057263509027]
Knowledge distillation (KD) is the de facto standard for compressing large-scale models into smaller ones.
In this work we explore an alternative, yet simple approach -- active data curation as effective distillation for contrastive multimodal pretraining.
Our simple online batch selection method, ACID, outperforms strong KD baselines across various model-, data- and compute-configurations.
arXiv Detail & Related papers (2024-11-27T18:50:15Z)
- Evolutionary Optimization of 1D-CNN for Non-contact Respiration Pattern Classification [0.19999259391104385]
We present a deep learning-based approach for time-series respiration data classification.
We employed a 1D convolutional neural network (1D-CNN) for classification purposes.
A genetic algorithm was employed to optimize the 1D-CNN architecture to maximize classification accuracy.
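The evolutionary search loop can be sketched in miniature. Here a toy fitness function stands in for the validation accuracy of a trained 1D-CNN, and the genome (kernel size, filter count, layer count) is a hypothetical simplification of the architecture encoding:

```python
import random

random.seed(0)

def fitness(genome):
    # Stand-in for validation accuracy of a 1D-CNN built from this genome;
    # peaks at an assumed "best" architecture (kernel=5, filters=32, layers=3)
    kernel, filters, layers = genome
    return -abs(kernel - 5) - abs(filters - 32) / 8 - abs(layers - 3)

def mutate(genome):
    # Nudge one gene up or down, keeping all genes positive
    g = list(genome)
    i = random.randrange(3)
    g[i] = max(1, g[i] + random.choice([-1, 1]))
    return tuple(g)

# Random initial population of architecture genomes
pop = [(random.randint(1, 9), random.randint(8, 64), random.randint(1, 5))
       for _ in range(20)]

for _ in range(50):
    # Elitist selection: keep the top half, refill with mutated parents
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = max(pop, key=fitness)
print(best)
```

In the actual paper the fitness evaluation would involve training and validating a 1D-CNN per genome, which is where nearly all of the compute goes.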
arXiv Detail & Related papers (2023-12-20T13:59:43Z)
- Unifying Synergies between Self-supervised Learning and Dynamic Computation [53.66628188936682]
We present a novel perspective on the interplay between SSL and DC paradigms.
We show that it is feasible to simultaneously learn a dense and gated sub-network from scratch in a SSL setting.
The co-evolution of both the dense and gated encoders during pre-training offers a good accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-01-22T17:12:58Z)
- CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
arXiv Detail & Related papers (2022-05-02T14:42:05Z)
- EnHDC: Ensemble Learning for Brain-Inspired Hyperdimensional Computing [2.7462881838152913]
This paper presents the first effort in exploring ensemble learning in the context of hyperdimensional computing.
We propose the first ensemble HDC model referred to as EnHDC.
We show that EnHDC can achieve on average 3.2% accuracy improvement over a single HDC classifier.
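A minimal sketch of the ensemble-of-HDC-classifiers idea (simplified assumptions, not the EnHDC training procedure): several base models, each with its own random-projection encoding, are trained by bundling class hypervectors and then combined by majority vote:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 2000  # hypervector dimensionality (illustrative)

def encode(x, proj):
    # One common HDC encoding: random projection followed by a sign
    # nonlinearity, yielding a bipolar hypervector
    return np.sign(proj @ x)

def train(X, y, proj, n_classes):
    # Class prototypes = bundled (summed) hypervectors of training samples
    protos = np.zeros((n_classes, D))
    for xi, yi in zip(X, y):
        protos[yi] += encode(xi, proj)
    return protos

def predict(protos, hv):
    return int(np.argmax(protos @ hv))

# Toy 2-class data, well separated in every feature
X = np.vstack([rng.normal(-2, 1, (30, 10)), rng.normal(2, 1, (30, 10))])
y = np.array([0] * 30 + [1] * 30)

# Ensemble of 5 base HDC models with different random encodings,
# combined by majority vote (details here are simplified assumptions)
projs = [rng.standard_normal((D, 10)) for _ in range(5)]
models = [train(X, y, p, 2) for p in projs]

def ensemble_predict(x):
    votes = [predict(m, encode(x, p)) for m, p in zip(models, projs)]
    return int(np.bincount(votes).argmax())

acc = np.mean([ensemble_predict(x) == yi for x, yi in zip(X, y)])
print(acc)
```

Because each base model only differs in its random encoding, the ensemble adds little training cost while smoothing out individual encoding artifacts.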
arXiv Detail & Related papers (2022-03-25T09:54:00Z)
- A Brain-Inspired Low-Dimensional Computing Classifier for Inference on Tiny Devices [17.976792694929063]
We propose a low-dimensional computing (LDC) alternative to hyperdimensional computing (HDC).
We map our LDC classifier into a neural equivalent network and optimize our model using a principled training approach.
Our LDC classifier offers an overwhelming advantage over the existing brain-inspired HDC models and is particularly suitable for inference on tiny devices.
arXiv Detail & Related papers (2022-03-09T17:20:12Z)
- Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize the k-NN non-parametric density estimation technique to estimate the unknown probability distributions of the data samples in the output feature space.
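The k-NN density estimate mentioned here has a standard closed form, p(x) ≈ k / (n · V_k), where V_k is the volume of the smallest ball around x containing its k nearest neighbors. A minimal sketch:

```python
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(2)

def knn_density(point, data, k=5):
    # k-NN density estimate: p(x) ~ k / (n * V_k)
    dists = np.sort(np.linalg.norm(data - point, axis=1))
    r_k = dists[k - 1]           # radius to the k-th nearest neighbor
    d = data.shape[1]
    # Volume of a d-dimensional ball of radius r_k
    v_k = (pi ** (d / 2) / gamma(d / 2 + 1)) * r_k ** d
    return k / (len(data) * v_k)

# Sanity check: density should be higher near the cluster center
data = rng.normal(0, 1, (500, 2))
near = knn_density(np.zeros(2), data)
far = knn_density(np.full(2, 5.0), data)
print(near > far)
```

The estimator is non-parametric, which is why OSAKD can use it without assuming any particular form for the feature-space distribution.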
arXiv Detail & Related papers (2021-08-26T14:01:04Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.