HDTest: Differential Fuzz Testing of Brain-Inspired Hyperdimensional
Computing
- URL: http://arxiv.org/abs/2103.08668v1
- Date: Mon, 15 Mar 2021 19:23:45 GMT
- Title: HDTest: Differential Fuzz Testing of Brain-Inspired Hyperdimensional
Computing
- Authors: Dongning Ma, Jianmin Guo, Yu Jiang, Xun Jiao
- Abstract summary: Brain-inspired hyperdimensional computing (HDC) is an emerging computational paradigm that mimics brain cognition and leverages hyperdimensional vectors with fully distributed holographic representation and (pseudo)randomness.
In this paper, we design, implement, and evaluate HDTest to test HDC models by automatically exposing unexpected or incorrect behaviors under rare inputs.
We show that HDTest can generate thousands of adversarial inputs with negligible perturbations that can successfully fool HDC models.
- Score: 6.266573115746776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Brain-inspired hyperdimensional computing (HDC) is an emerging computational
paradigm that mimics brain cognition and leverages hyperdimensional vectors
with fully distributed holographic representation and (pseudo)randomness.
Compared to other machine learning (ML) methods such as deep neural networks
(DNNs), HDC offers several advantages including high energy efficiency, low
latency, and one-shot learning, making it a promising alternative for
a wide range of applications. However, the reliability and robustness of HDC
models have not been explored yet. In this paper, we design, implement, and
evaluate HDTest to test HDC models by automatically exposing unexpected or
incorrect behaviors under rare inputs. The core idea of HDTest is based on
guided differential fuzz testing. Guided by the distance between query
hypervector and reference hypervector in HDC, HDTest continuously mutates
original inputs to generate new inputs that can trigger incorrect behaviors of
HDC models. Compared to traditional ML testing methods, HDTest does not need to
manually label the original inputs. Using handwritten digit classification as an
example, we show that HDTest can generate thousands of adversarial inputs with
negligible perturbations that can successfully fool HDC models. On average,
HDTest can generate around 400 adversarial inputs within one minute running on
a commodity computer. Finally, by using the HDTest-generated inputs to retrain
HDC models, we can strengthen the robustness of HDC models. To the best of our
knowledge, this paper presents the first effort in systematically testing this
emerging brain-inspired computational model.
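The guided fuzzing idea in the abstract, which mutates inputs while using the distance between the query hypervector and the class reference hypervectors as guidance, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the record-based encoder, bipolar hypervectors, cosine similarity, and pixel-noise mutation operator are all hypothetical stand-ins.

```python
import numpy as np

# Dimensionality typical of HDC models (often 10,000 for MNIST-scale tasks).
D = 10_000
rng = np.random.default_rng(0)

# Hypothetical record-based encoder: one random bipolar hypervector per pixel
# position and per quantized intensity level.
N_PIXELS, N_LEVELS, N_CLASSES = 64, 4, 3
pos_hvs = rng.choice([-1, 1], size=(N_PIXELS, D))
lvl_hvs = rng.choice([-1, 1], size=(N_LEVELS, D))

def encode(img):
    """Bind each pixel's position HV with its level HV, then bundle (sum)."""
    levels = np.clip((img * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    return (pos_hvs * lvl_hvs[levels]).sum(axis=0)

def similarity(q, ref):
    """Cosine similarity between a query and a class reference hypervector."""
    return q @ ref / (np.linalg.norm(q) * np.linalg.norm(ref) + 1e-12)

# Toy class references: bundle encodings of a few random "training" images.
train = rng.random((N_CLASSES, 5, N_PIXELS))
refs = np.array([sum(encode(x) for x in cls) for cls in train])

def classify(img):
    return int(np.argmax([similarity(encode(img), r) for r in refs]))

def fuzz(img, steps=200, eps=0.05):
    """Distance-guided fuzzing: keep mutations that shrink the margin between
    the top-1 and top-2 class similarities until the predicted label flips."""
    orig_label = classify(img)
    x = img.copy()

    def margin(z):
        sims = sorted(similarity(encode(z), r) for r in refs)
        return sims[-1] - sims[-2]

    for _ in range(steps):
        cand = np.clip(x + rng.uniform(-eps, eps, N_PIXELS), 0.0, 1.0)
        if classify(cand) != orig_label:
            return cand  # adversarial input found: label flipped
        if margin(cand) < margin(x):
            x = cand  # mutation moved the query toward the decision boundary
    return None  # no adversarial input within the step budget
```

Note that no manual label is needed: the original model's own prediction serves as the reference, and any found inputs could be fed back for retraining, as the abstract describes.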
Related papers
- An Encoding Framework for Binarized Images using HyperDimensional
Computing [0.0]
This article proposes a novel lightweight approach to encode binarized images that preserves the similarity of patterns at nearby locations.
The method reaches an accuracy of 97.35% on the test set for the MNIST data set and 84.12% for the Fashion-MNIST data set.
arXiv Detail & Related papers (2023-12-01T09:34:28Z)
- AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation [64.9230895853942]
Domain generalization can be arbitrarily hard without exploiting target domain information.
Test-time adaptation (TTA) methods have been proposed to address this issue.
In this work, we adopt a Non-Parametric Classifier to perform test-time Adaptation (AdaNPC).
arXiv Detail & Related papers (2023-04-25T04:23:13Z)
- Boosted Dynamic Neural Networks [53.559833501288146]
A typical early-exit dynamic neural network (EDNN) has multiple prediction heads at different layers of the network backbone.
To optimize the model, these prediction heads together with the network backbone are trained on every batch of training data.
Treating training and testing inputs differently at the two phases causes a mismatch between the training and testing data distributions.
We formulate an EDNN as an additive model inspired by gradient boosting, and propose multiple training techniques to optimize the model effectively.
arXiv Detail & Related papers (2022-11-30T04:23:12Z)
- HDTorch: Accelerating Hyperdimensional Computing with GP-GPUs for Design Space Exploration [4.783565770657063]
We introduce HDTorch, an open-source, PyTorch-based HDC library with extensions for hypervector operations.
We analyze four HDC benchmark datasets in terms of accuracy, runtime, and memory consumption.
We perform the first-ever HD training and inference analysis of the entirety of the CHB-MIT EEG epilepsy database.
arXiv Detail & Related papers (2022-06-09T19:46:08Z)
- TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z)
- GraphHD: Efficient graph classification using hyperdimensional computing [58.720142291102135]
We present a baseline approach for graph classification with HDC.
We evaluate GraphHD on real-world graph classification problems.
Our results show that, when compared to state-of-the-art Graph Neural Networks (GNNs), the proposed model achieves comparable accuracy.
arXiv Detail & Related papers (2022-05-16T17:32:58Z)
- EnHDC: Ensemble Learning for Brain-Inspired Hyperdimensional Computing [2.7462881838152913]
This paper presents the first effort in exploring ensemble learning in the context of hyperdimensional computing.
We propose the first ensemble HDC model referred to as EnHDC.
We show that EnHDC can achieve on average 3.2% accuracy improvement over a single HDC classifier.
arXiv Detail & Related papers (2022-03-25T09:54:00Z)
- Black-Box Diagnosis and Calibration on GAN Intra-Mode Collapse: A Pilot Study [116.05514467222544]
Generative adversarial networks (GANs) nowadays are capable of producing images of incredible realism.
One concern raised is whether the state-of-the-art GAN's learned distribution still suffers from mode collapse.
This paper explores to diagnose GAN intra-mode collapse and calibrate that, in a novel black-box setting.
arXiv Detail & Related papers (2021-07-23T06:03:55Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- HDXplore: Automated Blackbox Testing of Brain-Inspired Hyperdimensional Computing [2.3549478726261883]
HDC is an emerging computing scheme based on the working mechanism of the brain, computing with deep and abstract patterns of neural activity instead of actual numbers.
Compared with traditional ML algorithms such as DNNs, HDC is more memory-centric, granting it advantages such as smaller model size, lower cost, and one-shot learning.
We develop HDXplore, a blackbox differential testing-based framework to expose the unexpected or incorrect behaviors of HDC models.
arXiv Detail & Related papers (2021-05-26T18:08:52Z)
- Adversarial Attacks on Brain-Inspired Hyperdimensional Computing-Based Classifiers [15.813045384664441]
Hyperdimensional computing (HDC) mimics brain cognition and leverages random hypervectors to represent features and perform classification tasks.
It has been recognized as an appealing alternative to, or even a replacement for, traditional deep neural networks (DNNs) for local on-device classification.
However, state-of-the-art designs for HDC classifiers are mostly security-oblivious, casting doubt on their safety and immunity to adversarial inputs.
arXiv Detail & Related papers (2020-06-10T01:09:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.