Training a HyperDimensional Computing Classifier using a Threshold on
its Confidence
- URL: http://arxiv.org/abs/2305.19007v2
- Date: Thu, 30 Nov 2023 12:54:59 GMT
- Title: Training a HyperDimensional Computing Classifier using a Threshold on
its Confidence
- Authors: Laura Smets, Werner Van Leekwijck, Ing Jyh Tsang and Steven Latre
- Abstract summary: This article proposes to extend the training procedure in HDC by taking into account not only wrongly classified samples, but also samples that are correctly classified by the HDC model but with low confidence.
The proposed training procedure is tested on the UCIHAR, CTG, ISOLET and HAND datasets, for which the performance consistently improves compared to the baseline.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperdimensional computing (HDC) has become popular for light-weight and
energy-efficient machine learning, suitable for wearable Internet-of-Things
(IoT) devices and near-sensor or on-device processing. HDC is computationally
less complex than traditional deep learning algorithms and achieves moderate to
good classification performance. This article proposes to extend the training
procedure in HDC by taking into account not only wrongly classified samples,
but also samples that are correctly classified by the HDC model but with low
confidence. As such, a confidence threshold is introduced that can be tuned for
each dataset to achieve the best classification accuracy. The proposed training
procedure is tested on the UCIHAR, CTG, ISOLET and HAND datasets, for which the
performance consistently improves compared to the baseline across a range of
confidence threshold values. The extended training procedure also results in a
shift towards higher confidence values of the correctly classified samples
making the classifier not only more accurate but also more confident about its
predictions.
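The extended update rule described in the abstract can be sketched as follows. This is an illustrative NumPy mock-up, not the authors' implementation: the random-projection encoder, the similarity-gap confidence measure, and the names `theta` and `retrain` are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_classes, n_features = 1000, 3, 4
proj = rng.standard_normal((D, n_features))  # random projection (assumption)

def encode(x):
    """Toy encoder: random projection to a bipolar hypervector."""
    return np.sign(proj @ x)

def confidence(hv, prototypes):
    """Cosine similarity to each class prototype; the gap between the best
    and second-best similarity serves as the prediction's confidence."""
    sims = prototypes @ hv / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(hv) + 1e-9)
    order = np.argsort(sims)[::-1]
    return order[0], sims[order[0]] - sims[order[1]]

def retrain(prototypes, X, y, theta=0.05, lr=1.0):
    """One retraining pass: update on misclassified samples AND on samples
    that are correct but whose confidence falls below the threshold theta."""
    for x, label in zip(X, y):
        hv = encode(x)
        pred, conf = confidence(hv, prototypes)
        if pred != label or conf < theta:    # extended update condition
            prototypes[label] += lr * hv     # reinforce the true class
            if pred != label:
                prototypes[pred] -= lr * hv  # penalise the wrong class
    return prototypes
```

The only change relative to baseline HDC retraining is the `or conf < theta` clause, which also pushes correctly classified but low-confidence samples into the prototype update.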
Related papers
- C-Adapter: Adapting Deep Classifiers for Efficient Conformal Prediction Sets [19.318945675529456]
We introduce Conformal Adapter (C-Adapter) to enhance the efficiency of conformal predictors without sacrificing accuracy.
In particular, we implement the adapter as a class of intra order-preserving functions and tune it with our proposed loss.
Using C-Adapter, the model tends to produce extremely high non-conformity scores for incorrect labels.
arXiv Detail & Related papers (2024-10-12T07:28:54Z)
- CALICO: Confident Active Learning with Integrated Calibration [11.978551396144532]
We propose an AL framework that self-calibrates the confidence used for sample selection during the training process.
We show improved classification performance compared to a softmax-based classifier with fewer labeled samples.
arXiv Detail & Related papers (2024-07-02T15:05:19Z)
- Evaluating Classifier Confidence for Surface EMG Pattern Recognition [4.56877715768796]
Surface electromyogram (EMG) can be employed as an interface signal for various devices and software via pattern recognition.
The aim of this paper is to identify the types of classifiers that provide higher accuracy and better confidence in EMG pattern recognition.
arXiv Detail & Related papers (2023-04-12T15:05:25Z)
- First steps towards quantum machine learning applied to the classification of event-related potentials [68.8204255655161]
Low information transfer rate is a major bottleneck for brain-computer interfaces based on non-invasive electroencephalography (EEG) for clinical applications.
In this study, we investigate the performance of a quantum-enhanced support vector classifier (QSVC).
The training (prediction) balanced accuracy of the QSVC was 83.17% (50.25%).
arXiv Detail & Related papers (2023-02-06T09:43:25Z)
- Sample-dependent Adaptive Temperature Scaling for Improved Calibration [95.7477042886242]
A common post-hoc approach to compensate for neural network miscalibration is temperature scaling.
We propose to predict a different temperature value for each input, allowing us to adjust the mismatch between confidence and accuracy.
We test our method on the ResNet50 and WideResNet28-10 architectures using the CIFAR10/100 and Tiny-ImageNet datasets.
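Classical single-temperature scaling, which the paper above extends to a per-input temperature, simply divides the logits by a scalar T before the softmax. A minimal sketch (the fixed `T` here is the standard single-temperature variant, not the sample-dependent method):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, T):
    """T > 1 softens (lowers) confidence; T < 1 sharpens it.
    A sample-dependent variant would predict a separate T per input."""
    return softmax(logits / T)
```

Because dividing by T is monotone, the predicted class never changes; only the confidence assigned to it does.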
arXiv Detail & Related papers (2022-07-13T14:13:49Z)
- Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification [57.36281142038042]
We present a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance.
We also present a new training protocol based on Coordinate-Descent called UpperCaSE that exploits meta-trained CaSE blocks and fine-tuning routines for efficient adaptation.
arXiv Detail & Related papers (2022-06-20T15:25:08Z)
- Learning Optimal Conformal Classifiers [32.68483191509137]
Conformal prediction (CP) is used to predict confidence sets containing the true class with a user-specified probability.
This paper explores strategies to differentiate through CP during training, with the goal of training the model end-to-end together with the conformal wrapper.
We show that conformal training (ConfTr) outperforms state-of-the-art CP methods for classification by reducing the average confidence set size.
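Split conformal prediction, the procedure that ConfTr makes differentiable, builds prediction sets by thresholding non-conformity scores at a calibration quantile. A minimal sketch, assuming 1 - softmax probability as the score (both the score choice and the coverage level are illustrative):

```python
import numpy as np

def conformal_sets(cal_scores, test_probs, alpha=0.1):
    """cal_scores: non-conformity score of the TRUE class (here 1 - p_true)
    on a held-out calibration set. Returns, for each test sample, the set of
    classes whose score does not exceed the calibration quantile, giving
    roughly (1 - alpha) marginal coverage."""
    n = len(cal_scores)
    # finite-sample-corrected quantile level
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(cal_scores, level)
    return [np.where(1 - p <= q)[0] for p in test_probs]
```

ConfTr's objective is then to shrink the average size of these sets while keeping the coverage guarantee.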
arXiv Detail & Related papers (2021-10-18T11:25:33Z)
- Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize a k-NN non-parametric density estimation technique for estimating the unknown probability distributions of the data samples in the output feature space.
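The standard k-NN density estimator mentioned above estimates p(x) as k / (n * V), where V is the volume of the smallest ball around x containing its k nearest neighbours. A small sketch of that textbook estimator (not the OSAKD implementation; the brute-force distance computation is for illustration only):

```python
from math import gamma, pi

import numpy as np

def knn_density(X, query, k=5):
    """Estimate density at `query` as k / (n * V), where V is the volume of
    the d-ball whose radius reaches the k-th nearest neighbour in X."""
    n, d = X.shape
    dists = np.sort(np.linalg.norm(X - query, axis=1))
    r = dists[k - 1]                                   # k-th NN distance
    volume = (pi ** (d / 2) / gamma(d / 2 + 1)) * r ** d
    return k / (n * volume + 1e-12)
```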
arXiv Detail & Related papers (2021-08-26T14:01:04Z)
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that our latent adversarial perturbations adaptive to the classifier throughout its training are most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- Calibrating Deep Neural Network Classifiers on Out-of-Distribution Datasets [20.456742449675904]
CCAC (Confidence Calibration with an Auxiliary Class) is a new post-hoc confidence calibration method for deep neural network (DNN) classifiers.
The key novelty of CCAC is an auxiliary class in the calibration model that separates misclassified samples from correctly classified ones.
Our experiments on different DNN models, datasets and applications show that CCAC can consistently outperform the prior post-hoc calibration methods.
arXiv Detail & Related papers (2020-06-16T04:06:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.