Large-Margin Hyperdimensional Computing: A Learning-Theoretical Perspective
- URL: http://arxiv.org/abs/2603.03830v1
- Date: Wed, 04 Mar 2026 08:29:07 GMT
- Title: Large-Margin Hyperdimensional Computing: A Learning-Theoretical Perspective
- Authors: Nikita Zeulin, Olga Galinina, Ravikumar Balakrishnan, Nageen Himayat, Sergey Andreev
- Abstract summary: Hyperdimensional computing (HDC) is an emerging resource efficient and low-complexity machine learning method. We propose a maximum-margin HDC classifier, which significantly outperforms baseline HDC methods on several benchmark datasets.
- Score: 4.132339396232777
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Overparameterized machine learning (ML) methods such as neural networks may be prohibitively resource intensive for devices with limited computational capabilities. Hyperdimensional computing (HDC) is an emerging resource efficient and low-complexity ML method that allows hardware efficient implementations of (re-)training and inference procedures. In this paper, we propose a maximum-margin HDC classifier, which significantly outperforms baseline HDC methods on several benchmark datasets. Our method leverages a formal relation between HDC and support vector machines (SVMs) that we established for the first time. Our findings may inspire novel HDC methods with potentially more hardware-oriented implementations compared to SVMs, thus enabling more efficient learning solutions for various intelligent resource-constrained applications.
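As a rough, hedged illustration of the HDC-SVM connection described in the abstract (not the authors' construction), the sketch below encodes inputs into bipolar hypervectors with a random projection and then fits an ordinary linear max-margin classifier in the hyperdimensional space; the dimensionality D, the sign-based encoding, and the use of scikit-learn's LinearSVC are assumptions made only for this example.

```python
# Hedged sketch: random-projection HDC encoding + a linear max-margin (SVM)
# classifier trained on the resulting hypervectors. The encoding and D are
# illustrative assumptions, not the construction proposed in the paper.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

D = 10_000                                  # hypervector dimensionality (assumed)
P = rng.standard_normal((X.shape[1], D))    # random projection matrix

def encode(X):
    """Map real-valued features to bipolar {-1, +1} hypervectors."""
    return np.sign(X @ P)

# Maximum-margin classification in hyperdimensional space: a standard linear SVM
# fit on the encoded data plays the role of the margin-based HDC classifier.
clf = LinearSVC(C=1.0, max_iter=5000).fit(encode(X_tr), y_tr)
print("test accuracy:", clf.score(encode(X_te), y_te))
```

Swapping the SVM solver for the simpler additive, perceptron-style updates common in HDC would yield a more hardware-oriented training loop, in the spirit of what the abstract suggests.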
Related papers
- HD-CB: The First Exploration of Hyperdimensional Computing for Contextual Bandits Problems [0.6377289349842638]
This work introduces Hyperdimensional Contextual Bandits (HD-CB). HD-CB is the first exploration of HDC to model and automate sequential decision-making problems. It consistently achieves competitive or superior performance compared to traditional linear CB algorithms.
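A minimal sketch of how an HDC-based contextual bandit could look, with epsilon-greedy exploration and an additive bundling update; the encoding, the update rule, and all constants are illustrative assumptions rather than the HD-CB algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_arms, n_features = 2_000, 3, 5
proj = rng.standard_normal((n_features, D))   # shared context encoder (assumed)
arm_hvs = np.zeros((n_arms, D))               # one prototype hypervector per arm
eps = 0.1                                     # exploration rate (assumed)

def encode(context):
    """Bipolar hypervector encoding of a real-valued context."""
    return np.sign(context @ proj)

def choose_arm(ctx_hv):
    """Epsilon-greedy choice by similarity between context and arm prototypes."""
    if rng.random() < eps:
        return int(rng.integers(n_arms))
    return int(np.argmax(arm_hvs @ ctx_hv))

def update(arm, ctx_hv, reward):
    """Bundle the observed context into the chosen arm, weighted by reward."""
    arm_hvs[arm] += reward * ctx_hv
```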
arXiv Detail & Related papers (2025-01-28T11:28:09Z) - HEAL: Brain-inspired Hyperdimensional Efficient Active Learning [13.648600396116539]
We introduce Hyperdimensional Efficient Active Learning (HEAL), a novel Active Learning framework tailored for HDC classification.
HEAL proactively annotates unlabeled data points via uncertainty and diversity-guided acquisition, leading to a more efficient dataset annotation and lowering labor costs.
Our evaluation shows that HEAL surpasses a diverse set of baselines in AL quality and achieves notably faster acquisition than many BNN-powered or diversity-guided AL methods.
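A hedged sketch of uncertainty- and diversity-guided acquisition for HDC classifiers; the margin-based uncertainty score and the crude similarity threshold below are placeholders, not HEAL's actual acquisition criteria.

```python
import numpy as np

def acquire(unlabeled_hvs, class_hvs, batch_size, div_threshold=0.5):
    """Select a batch of unlabeled hypervectors to annotate.

    Uncertainty: small similarity margin between the two closest class prototypes.
    Diversity: greedily skip points too similar to already-selected ones.
    (Both rules are illustrative placeholders.)
    """
    d = unlabeled_hvs.shape[1]
    sims = unlabeled_hvs @ class_hvs.T            # similarity to each class prototype
    top2 = np.sort(sims, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]              # small margin => uncertain
    picked = []
    for i in np.argsort(margin):                  # most uncertain first
        if all(unlabeled_hvs[i] @ unlabeled_hvs[j] < div_threshold * d for j in picked):
            picked.append(int(i))
        if len(picked) == batch_size:
            break
    return picked
```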
arXiv Detail & Related papers (2024-02-17T08:41:37Z) - Machine Learning Insides OptVerse AI Solver: Design Principles and
Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances using generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - Multi-class Support Vector Machine with Maximizing Minimum Margin [60.06805919852749]
Support Vector Machine (SVM) is a prominent machine learning technique widely applied in pattern recognition tasks. We propose a novel method for multi-class SVM that incorporates pairwise class loss considerations and maximizes the minimum margin. Empirical evaluations demonstrate the effectiveness and superiority of our proposed method over existing multi-class classification methods.
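A hedged sketch of a "maximize the minimum margin" surrogate for a linear multi-class model; this hinge-style loss is only one plausible reading of the idea, not necessarily the paper's objective.

```python
import numpy as np

def min_margin_loss(W, X, y):
    """Hinge-style surrogate focused on the worst (minimum) pairwise margin.

    For each sample the margin is the score gap between the true class and the
    best wrong class; the loss penalizes the smallest such gap over the batch.
    (Illustrative formulation only.)
    """
    scores = X @ W.T                              # (n_samples, n_classes)
    idx = np.arange(len(y))
    true_scores = scores[idx, y]
    scores = scores.copy()
    scores[idx, y] = -np.inf                      # mask out the true class
    margins = true_scores - scores.max(axis=1)    # pairwise-worst margin per sample
    return np.maximum(0.0, 1.0 - margins.min())   # hinge on the minimum margin
```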
arXiv Detail & Related papers (2023-12-11T18:09:55Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
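A minimal sketch of the shared-backbone, multi-head ensemble idea; the random-feature backbone and linear heads below are stand-ins assumed for illustration, not the networks used by MEMTL.

```python
import numpy as np

class MultiHeadEnsemble:
    """Shared backbone feeding several prediction heads; outputs are averaged.

    Illustrative only: a fixed random-feature backbone and linear heads stand in
    for whatever architecture MEMTL actually trains.
    """
    def __init__(self, in_dim, hidden_dim, out_dim, n_heads=3, seed=0):
        rng = np.random.default_rng(seed)
        self.backbone = rng.standard_normal((in_dim, hidden_dim))
        self.heads = [rng.standard_normal((hidden_dim, out_dim)) for _ in range(n_heads)]

    def predict(self, x):
        h = np.tanh(x @ self.backbone)                # shared representation
        outs = np.stack([h @ W for W in self.heads])  # one prediction per head
        return outs.mean(axis=0)                      # simple ensemble average
```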
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Resource-Efficient Federated Hyperdimensional Computing [6.778675369739912]
In conventional hyperdimensional computing (HDC), training larger models usually results in higher predictive performance but also requires more computational, communication, and energy resources.
A proposed resource-efficient framework alleviates such constraints by training multiple smaller independent HDC sub-models.
Our numerical comparison demonstrates that the proposed framework achieves a comparable or higher predictive performance while consuming less computational and wireless resources.
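A hedged sketch of training several small independent HDC sub-models and aggregating their class scores; the per-class bundling rule and score summation are illustrative assumptions, not the paper's exact framework.

```python
import numpy as np

def train_submodel(X, y, proj, n_classes):
    """Train one small HDC sub-model: bundle encoded samples per class."""
    hv = np.sign(X @ proj)                         # low-dimensional hypervectors
    return np.stack([hv[y == c].sum(axis=0) for c in range(n_classes)])

def predict_ensemble(x, projs, submodels):
    """Sum class similarities across the independent sub-models."""
    scores = sum(np.sign(x @ P) @ M.T for P, M in zip(projs, submodels))
    return int(np.argmax(scores))

# Usage idea (assumed, not from the paper): each participant trains a sub-model
# with its own small projection matrix, so only compact class hypervectors
# would need to be exchanged or stored.
```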
arXiv Detail & Related papers (2023-06-02T08:07:14Z) - A Brain-Inspired Low-Dimensional Computing Classifier for Inference on
Tiny Devices [17.976792694929063]
We propose a low-dimensional computing (LDC) alternative to hyperdimensional computing (HDC).
We map our LDC classifier into a neural equivalent network and optimize our model using a principled training approach.
Our LDC classifier offers an overwhelming advantage over the existing brain-inspired HDC models and is particularly suitable for inference on tiny devices.
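A hedged sketch of the "neural equivalent" view: a tiny low-dimensional encoder and class weights are trained with ordinary gradients and then binarized for deployment. The architecture, loss, and quantization step are illustrative guesses, not the LDC training approach itself.

```python
import numpy as np

def train_ldc(X, y, n_classes, d=64, lr=0.1, epochs=50, seed=0):
    """Train a small 'neural equivalent' of an LDC classifier, then binarize it."""
    rng = np.random.default_rng(seed)
    E = 0.1 * rng.standard_normal((X.shape[1], d))   # low-dimensional encoder
    W = 0.1 * rng.standard_normal((n_classes, d))    # class weights
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        H = np.tanh(X @ E)                           # low-dimensional encoding
        S = H @ W.T                                  # class scores
        P = np.exp(S - S.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)            # softmax probabilities
        G = (P - Y) / len(X)                         # cross-entropy gradient wrt scores
        W -= lr * (G.T @ H)
        E -= lr * (X.T @ ((G @ W) * (1.0 - H ** 2)))
    return np.sign(E), np.sign(W)                    # binarized model for tiny devices
```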
arXiv Detail & Related papers (2022-03-09T17:20:12Z) - ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm, which adopts Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO.
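A hedged sketch of the generic evolution-strategies trick for estimating a Jacobian-vector product with only forward evaluations; applying it to the response Jacobian inside a bilevel hypergradient is what the paper describes but is not reproduced here, and sigma and the sample count are illustrative choices.

```python
import numpy as np

def es_jvp(f, x, v, sigma=1e-2, n_samples=64, seed=0):
    """Estimate the Jacobian-vector product J_f(x) @ v with antithetic ES samples.

    Uses only evaluations of f (no second-order derivatives), based on
    E_u[ (f(x + s*u) - f(x - s*u)) / (2s) * (u @ v) ] = J_f(x) @ v  for u ~ N(0, I).
    """
    rng = np.random.default_rng(seed)
    est = np.zeros_like(np.asarray(f(x), dtype=float))
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        directional = (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma)  # ~ J @ u
        est = est + directional * (u @ v)
    return est / n_samples
```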
arXiv Detail & Related papers (2021-10-13T19:36:50Z) - A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale machine learning aims to learn patterns from big data efficiently, with performance comparable to these sophisticated approaches.
arXiv Detail & Related papers (2020-08-10T06:07:52Z) - An Online Method for A Class of Distributionally Robust Optimization
with Non-Convex Objectives [54.29001037565384]
We propose a practical online method for solving a class of distributionally robust optimization (DRO) problems with non-convex objectives.
Our studies demonstrate important applications in machine learning for improving the robustness of networks.
arXiv Detail & Related papers (2020-06-17T20:19:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.