Adversarial Attacks on Brain-Inspired Hyperdimensional Computing-Based
Classifiers
- URL: http://arxiv.org/abs/2006.05594v1
- Date: Wed, 10 Jun 2020 01:09:30 GMT
- Title: Adversarial Attacks on Brain-Inspired Hyperdimensional Computing-Based
Classifiers
- Authors: Fangfang Yang and Shaolei Ren
- Abstract summary: Hyperdimensional computing (HDC) mimics brain cognition and leverages random hypervectors to represent features and perform classification tasks.
They have been recognized as an appealing alternative to, or even replacement for, traditional deep neural networks (DNNs) for local on-device classification.
However, state-of-the-art designs for HDC classifiers are mostly security-oblivious, casting doubt on their safety and immunity to adversarial inputs.
- Score: 15.813045384664441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Being an emerging class of in-memory computing architecture, brain-inspired
hyperdimensional computing (HDC) mimics brain cognition and leverages random
hypervectors (i.e., vectors with a dimensionality of thousands or even more) to
represent features and to perform classification tasks. The unique hypervector
representation enables HDC classifiers to exhibit high energy efficiency, low
inference latency and strong robustness against hardware-induced bit errors.
Consequently, they have been increasingly recognized as an appealing
alternative to or even replacement of traditional deep neural networks (DNNs)
for local on device classification, especially on low-power Internet of Things
devices. Nonetheless, unlike their DNN counterparts, state-of-the-art designs
for HDC classifiers are mostly security-oblivious, casting doubt on their
safety and immunity to adversarial inputs. In this paper, we study for the
first time adversarial attacks on HDC classifiers and highlight that HDC
classifiers can be vulnerable to even minimally-perturbed adversarial samples.
Concretely, using handwritten digit classification as an example, we construct
an HDC classifier and formulate a grey-box attack problem, where an attacker's
goal is to mislead the target HDC classifier to produce erroneous prediction
labels while keeping the amount of added perturbation noise as small as
possible. Then, we propose a modified genetic algorithm to generate adversarial
samples within a reasonably small number of queries. Our results show that
adversarial images generated by our algorithm can successfully mislead the HDC
classifier to produce wrong prediction labels with a high probability (i.e.,
78% when the HDC classifier uses a fixed majority rule for decision). Finally,
we also present two defense strategies -- adversarial training and retraining --
to strengthen the security of HDC classifiers.
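The paper itself does not include source code. As a rough illustration of the two ingredients described above -- an HDC classifier that encodes images into bipolar hypervectors and decides by a majority (sign) rule, and a query-limited genetic search that flips a small number of pixels until the predicted label changes -- the following Python/NumPy sketch may help. All dimensions, hyperparameters, and operators here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 10_000            # hypervector dimensionality (illustrative)
N_PIXELS = 28 * 28    # flattened MNIST-sized images
N_LEVELS = 2          # binarized pixel intensities for simplicity

# Random item memory: one bipolar hypervector per pixel position and per intensity level.
position_hvs = rng.choice([-1, 1], size=(N_PIXELS, D))
level_hvs = rng.choice([-1, 1], size=(N_LEVELS, D))

def encode(image_flat):
    """Bind each pixel's level hypervector with its position hypervector, then bundle."""
    levels = (image_flat > 0.5).astype(int)        # binarize pixel intensities
    bound = position_hvs * level_hvs[levels]       # elementwise binding
    return np.sign(bound.sum(axis=0))              # majority-rule bundling

def train(images, labels, n_classes=10):
    """Class hypervectors: bundle the encodings of all training samples of each class."""
    class_hvs = np.zeros((n_classes, D))
    for x, y in zip(images, labels):
        class_hvs[y] += encode(x)
    return np.sign(class_hvs)

def predict(class_hvs, image_flat):
    """Nearest class hypervector by dot product (equivalent to Hamming for bipolar HVs)."""
    return int(np.argmax(class_hvs @ encode(image_flat)))

def genetic_attack(class_hvs, image_flat, true_label,
                   pop=20, generations=200, n_flip=8):
    """Query-based grey-box attack: evolve a small set of pixel flips until the label
    changes.  Generic genetic-algorithm sketch, not the authors' exact operators."""
    def perturb(mask):
        x = image_flat.copy()
        x[mask] = 1.0 - x[mask]                    # flip the selected pixels
        return x

    population = [rng.choice(N_PIXELS, n_flip, replace=False) for _ in range(pop)]
    for _ in range(generations):
        scored = []
        for mask in population:
            x_adv = perturb(mask)
            scores = class_hvs @ encode(x_adv)     # one query to the target classifier
            if int(np.argmax(scores)) != true_label:
                return x_adv, mask                 # misclassified with few changed pixels
            # Fitness: gap between the true-class score and the best wrong-class score.
            margin = scores[true_label] - np.max(np.delete(scores, true_label))
            scored.append((margin, mask))
        scored.sort(key=lambda t: t[0])            # smaller margin = closer to flipping
        parents = [m for _, m in scored[: pop // 2]]
        # Mutation: move one of the flipped pixels in each parent to form an offspring.
        children = []
        for m in parents:
            child = m.copy()
            child[rng.integers(n_flip)] = rng.integers(N_PIXELS)
            children.append(child)
        population = parents + children
    return None, None                              # attack failed within the query budget
```

Given flattened training images in [0, 1], `class_hvs = train(images, labels)` builds the classifier and `genetic_attack(class_hvs, x.ravel(), y)` interacts with it only through encode-and-score queries, mirroring the grey-box, query-limited setting in the abstract; the paper's actual fitness function, selection, and mutation operators may differ.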
Related papers
- Undermining Image and Text Classification Algorithms Using Adversarial Attacks [0.0]
Our study addresses this gap by training various machine learning models and using GANs and SMOTE to generate additional data points aimed at attacking text classification models.
Our experiments reveal a significant vulnerability in classification models. Specifically, we observe a 20% decrease in accuracy for the top-performing text classification models post-attack, along with a 30% decrease in facial recognition accuracy.
arXiv Detail & Related papers (2024-11-03T18:44:28Z) - Towards Robust Domain Generation Algorithm Classification [1.4542411354617986]
We implement 32 white-box attacks, 19 of which are very effective and induce a false-negative rate (FNR) of $\approx$ 100% on unhardened classifiers.
We propose a novel training scheme that leverages adversarial latent space vectors and discretized adversarial domains to significantly improve robustness.
arXiv Detail & Related papers (2024-04-09T11:56:29Z) - HEAL: Brain-inspired Hyperdimensional Efficient Active Learning [13.648600396116539]
We introduce Hyperdimensional Efficient Active Learning (HEAL), a novel Active Learning framework tailored for HDC classification.
HEAL proactively annotates unlabeled data points via uncertainty- and diversity-guided acquisition, leading to more efficient dataset annotation and lower labor costs.
Our evaluation shows that HEAL surpasses a diverse set of baselines in AL quality and achieves notably faster acquisition than many BNN-powered or diversity-guided AL methods.
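For readers unfamiliar with active learning, the acquisition step described in the HEAL entry above can be pictured with the generic uncertainty-plus-diversity sketch below; the scoring functions and weighting are assumptions for illustration, not HEAL's actual acquisition rule.

```python
import numpy as np

def acquire(class_scores, embeddings, labeled_idx, budget, alpha=0.5):
    """Pick `budget` unlabeled points by combining uncertainty (small gap between the
    top-2 class scores) with diversity (distance to the nearest already-selected point).
    In practice both terms would be normalized to a common scale; omitted for brevity."""
    sorted_scores = np.sort(class_scores, axis=1)
    uncertainty = -(sorted_scores[:, -1] - sorted_scores[:, -2])   # small margin = uncertain
    selected, picks = list(labeled_idx), []
    for _ in range(budget):
        if selected:
            d = np.linalg.norm(embeddings[:, None] - embeddings[selected][None], axis=2)
            diversity = d.min(axis=1)              # distance to nearest selected point
        else:
            diversity = np.ones(len(embeddings))
        score = alpha * uncertainty + (1 - alpha) * diversity
        score[selected] = -np.inf                  # never re-pick a point
        i = int(np.argmax(score))
        picks.append(i)
        selected.append(i)
    return picks
```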
arXiv Detail & Related papers (2024-02-17T08:41:37Z) - Problem-Dependent Power of Quantum Neural Networks on Multi-Class
Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of quantum classifiers (QCs) on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z) - Robust-by-Design Classification via Unitary-Gradient Neural Networks [66.17379946402859]
The use of neural networks in safety-critical systems requires safe and robust models, due to the existence of adversarial attacks.
Knowing the minimal adversarial perturbation of any input x, or, equivalently, the distance of x from the classification boundary, allows evaluating the classification robustness, providing certifiable predictions.
A novel network architecture named Unitary-Gradient Neural Network is presented.
Experimental results show that the proposed architecture approximates a signed distance, hence allowing an online certifiable classification of x at the cost of a single inference.
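The intuition behind the single-inference certificate in the entry above can be seen in the linear case: if the classifier's gradient has unit norm, its output value is already the signed distance to the decision boundary. A tiny worked example (not the authors' architecture, just the underlying geometric fact):

```python
import numpy as np

# For a linear binary classifier f(x) = w @ x + b, the signed distance from x to the
# decision boundary {f = 0} is f(x) / ||w||.  A unitary-gradient network enforces
# ||grad f(x)|| = 1 everywhere, so f(x) itself reads out that distance in one forward pass.
w, b = np.array([3.0, 4.0]), -1.0
w_unit, b_unit = w / np.linalg.norm(w), b / np.linalg.norm(w)   # gradient now has norm 1
x = np.array([2.0, 1.0])
signed_distance = w_unit @ x + b_unit    # (w @ x + b) / ||w|| = 9 / 5 = 1.8
```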
arXiv Detail & Related papers (2022-09-09T13:34:51Z) - EnHDC: Ensemble Learning for Brain-Inspired Hyperdimensional Computing [2.7462881838152913]
This paper presents the first effort in exploring ensemble learning in the context of hyperdimensional computing.
We propose the first ensemble HDC model referred to as EnHDC.
We show that EnHDC can achieve on average 3.2% accuracy improvement over a single HDC classifier.
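A minimal sketch of the ensemble idea in the EnHDC entry above (a majority vote over several independently encoded HDC base classifiers); the base-model construction and voting rule are assumptions for illustration:

```python
import numpy as np

class HDCEnsemble:
    """Illustrative ensemble of HDC classifiers: each base model carries its own random
    item memory, and the ensemble prediction is a majority vote over base predictions."""
    def __init__(self, base_models):
        self.base_models = base_models                # each exposes .predict(x) -> int label

    def predict(self, x):
        votes = [m.predict(x) for m in self.base_models]
        return int(np.bincount(votes).argmax())       # majority vote
```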
arXiv Detail & Related papers (2022-03-25T09:54:00Z) - Efficient and Robust Classification for Sparse Attacks [34.48667992227529]
We consider perturbations bounded by the $\ell_0$-norm, which have been shown to be effective attacks in the domains of image recognition, natural language processing, and malware detection.
We propose a novel defense method that consists of "truncation" and "adversarial training".
Motivated by the insights we obtain, we extend these components to neural network classifiers.
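One plausible reading of the "truncation" component mentioned above is a truncated inner product: before summing a linear score, discard the few largest-magnitude per-coordinate contributions so that a sparse ($\ell_0$-bounded) perturbation cannot dominate the decision. The sketch below illustrates that reading only; the paper's exact truncation rule may differ.

```python
import numpy as np

def truncated_score(w, x, k):
    """Linear score with the k largest-magnitude per-coordinate contributions removed,
    limiting the influence of an attacker who can corrupt at most a few coordinates."""
    contrib = w * x
    if k <= 0:
        return contrib.sum()
    keep = np.argsort(np.abs(contrib))[:-k]   # drop the k largest |w_i * x_i| terms
    return contrib[keep].sum()
```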
arXiv Detail & Related papers (2022-01-23T21:18:17Z) - Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
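Prediction with class prototypes is essentially nearest-prototype classification in the embedding space, which is what keeps predictions balanced under class imbalance. A minimal sketch, assuming normalized per-class mean embeddings as prototypes (the authors' exact prototype estimation may differ):

```python
import numpy as np

def class_prototypes(embeddings, labels, n_classes):
    """One prototype per class: the normalized mean embedding of that class's samples."""
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in range(n_classes)])
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def predict(protos, z):
    """Assign the class whose prototype is most similar to the (normalized) embedding z."""
    z = z / np.linalg.norm(z)
    return int(np.argmax(protos @ z))
```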
arXiv Detail & Related papers (2021-10-22T01:55:01Z) - Adversarially Robust One-class Novelty Detection [83.1570537254877]
We show that existing novelty detectors are susceptible to adversarial examples.
We propose a defense strategy that manipulates the latent space of novelty detectors to improve the robustness against adversarial examples.
arXiv Detail & Related papers (2021-08-25T10:41:29Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
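The certificate in the entry above is derived analytically, but the underlying smoothing mechanism can be pictured as a Monte Carlo vote over randomly label-noised copies of the training set. The sketch below is only that conceptual picture, under the assumption of a generic `train_fn` base learner, and is not the paper's construction.

```python
import numpy as np

def smoothed_predict(train_fn, X_train, y_train, x_test, n_classes,
                     flip_prob=0.1, n_samples=100, seed=0):
    """Conceptual smoothing over training labels: each round independently replaces every
    label with a uniformly random class with probability `flip_prob`, retrains a base
    learner, and records its vote on x_test; the majority vote is the smoothed prediction."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        y_noisy = y_train.copy()
        flip = rng.random(len(y_noisy)) < flip_prob
        y_noisy[flip] = rng.integers(n_classes, size=int(flip.sum()))
        model = train_fn(X_train, y_noisy)            # any base learner, e.g. sklearn-style
        votes[int(model.predict([x_test])[0])] += 1
    return int(votes.argmax()), votes
```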
arXiv Detail & Related papers (2020-02-07T21:28:30Z)