RES-HD: Resilient Intelligent Fault Diagnosis Against Adversarial
Attacks Using Hyper-Dimensional Computing
- URL: http://arxiv.org/abs/2203.08148v1
- Date: Mon, 14 Mar 2022 17:59:17 GMT
- Title: RES-HD: Resilient Intelligent Fault Diagnosis Against Adversarial
Attacks Using Hyper-Dimensional Computing
- Authors: Onat Gungor, Tajana Rosing, Baris Aksanli
- Abstract summary: Hyper-dimensional computing (HDC) is a brain-inspired machine learning method.
In this work, we use HDC for intelligent fault diagnosis against different adversarial attacks.
Our experiments show that HDC leads to a more resilient and lightweight learning solution than the state-of-the-art deep learning methods.
- Score: 8.697883716452385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Industrial Internet of Things (I-IoT) enables fully automated production
systems by continuously monitoring devices and analyzing collected data.
Machine learning methods are commonly utilized for data analytics in such
systems. Cyber-attacks are a grave threat to I-IoT as they can manipulate
legitimate inputs, corrupting ML predictions and causing disruptions in the
production systems. Hyper-dimensional computing (HDC) is a brain-inspired
machine learning method that has been shown to be sufficiently accurate while
being extremely robust, fast, and energy-efficient. In this work, we use HDC
for intelligent fault diagnosis against different adversarial attacks. Our
black-box adversarial attacks first train a substitute model and create
perturbed test instances using this trained model. These examples are then
transferred to the target models. The change in the classification accuracy is
measured as the difference before and after the attacks. This change measures
the resiliency of a learning method. Our experiments show that HDC leads to a
more resilient and lightweight learning solution than the state-of-the-art deep
learning methods. HDC has up to 67.5% higher resiliency compared to the
state-of-the-art methods while being up to 25.1% faster to train.
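The attack-and-measure procedure described in the abstract (train a substitute model, craft perturbed test instances on it, transfer them to the target, and report the accuracy drop) can be illustrated with a short self-contained sketch. Everything below is an illustrative assumption rather than the paper's exact setup: the data are synthetic, the substitute model is a small MLP attacked with FGSM, and the target is a generic random-projection HDC classifier with bundled class prototypes.

```python
# Illustrative sketch only: synthetic data, an assumed MLP substitute model,
# FGSM perturbations, and a generic random-projection HDC classifier as the
# target. None of these choices are taken from the paper itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for fault-diagnosis sensor windows: 64 features, 2 fault classes.
n, d, n_classes = 1000, 64, 2
X = torch.randn(n, d)
y = (X[:, :8].sum(dim=1) > 0).long()
X_tr, y_tr, X_te, y_te = X[:800], y[:800], X[800:], y[800:]

# 1) The attacker trains a substitute model on data it can observe.
substitute = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, n_classes))
opt = torch.optim.Adam(substitute.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(substitute(X_tr), y_tr).backward()
    opt.step()

# 2) Perturbed test instances are crafted on the substitute model (FGSM here).
def fgsm(model, x, y, eps=0.1):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

X_adv = fgsm(substitute, X_te, y_te)

# 3) Target model: a minimal HDC classifier (random-projection encoding into
#    bipolar hypervectors, class prototypes formed by bundling, cosine scoring).
D = 10_000                                   # hypervector dimensionality
projection = torch.randn(d, D)

def encode(x):
    return torch.sign(x @ projection)        # (batch, D) bipolar hypervectors

prototypes = torch.stack(
    [encode(X_tr[y_tr == c]).sum(dim=0) for c in range(n_classes)])

def hdc_predict(x):
    sims = F.normalize(encode(x), dim=1) @ F.normalize(prototypes, dim=1).t()
    return sims.argmax(dim=1)

# 4) Resiliency = change in classification accuracy before vs. after the attack.
def accuracy(pred, target):
    return (pred == target).float().mean().item()

clean_acc = accuracy(hdc_predict(X_te), y_te)
attacked_acc = accuracy(hdc_predict(X_adv), y_te)
print(f"clean: {clean_acc:.3f}  attacked: {attacked_acc:.3f}  "
      f"accuracy drop: {clean_acc - attacked_acc:.3f}")
```

The printed accuracy drop is the resiliency measure the abstract refers to: the smaller the drop under transferred perturbations, the more resilient the learner.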
Related papers
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outclasses the state-of-the-art for resilient fault prediction benchmarking, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Performance evaluation of Machine learning algorithms for Intrusion Detection System [0.40964539027092917]
This paper focuses on intrusion detection systems (IDSs) analysis using Machine Learning (ML) techniques.
We analyze the KDD CUP-'99' intrusion detection dataset used for training and validating ML models.
arXiv Detail & Related papers (2023-10-01T06:35:37Z)
- Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening [51.34904967046097]
Selective Synaptic Dampening (SSD) is fast, performant, and does not require long-term storage of the training data.
We present a novel two-step, post hoc, retrain-free approach to machine unlearning which is fast, performant, and does not require long-term storage of the training data.
arXiv Detail & Related papers (2023-08-15T11:30:45Z)
- DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards Secure Industrial Internet of Things Analytics [8.697883716452385]
We propose a double defense mechanism to detect and mitigate adversarial attacks in I-IoT environments.
We first detect if there is an adversarial attack on a given sample using novelty detection algorithms.
If there is an attack, adversarial retraining provides a more robust model, while we apply standard training for regular samples (see the sketch after this list).
arXiv Detail & Related papers (2023-01-23T22:10:40Z)
- A White-Box Adversarial Attack Against a Digital Twin [0.0]
This paper explores the susceptibility of a Digital Twin (DT) to adversarial attacks.
We first formulate a DT of a vehicular system using a deep neural network architecture and then utilize it to launch an adversarial attack.
We attack the DT model by perturbing the input to the trained model and show how easily the model can be broken with white-box attacks.
arXiv Detail & Related papers (2022-10-25T13:41:02Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas, but such models can be very difficult to debug when they fail.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Learning to Learn Transferable Attack [77.67399621530052]
A transfer adversarial attack is a non-trivial black-box attack that crafts adversarial perturbations on a surrogate model and then applies them to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on the widely-used dataset demonstrate the effectiveness of our attack method with a 12.85% higher success rate of transfer attack compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
- Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication [6.961253535504979]
We use a deep convolutional generative adversarial network (DC-GAN) to create adversarial samples.
We show that our deep learning model is surprisingly robust to such an attack scenario.
arXiv Detail & Related papers (2021-10-03T00:15:50Z)
- Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks [7.150136251781658]
Poisoning attacks are a category of adversarial machine learning threats.
In this paper, we propose CAE, a Classification Auto-Encoder based detector against poisoned data.
We show that an enhanced version of CAE (called CAE+) does not have to employ a clean data set to train the defense model.
arXiv Detail & Related papers (2021-08-09T17:46:52Z)
- Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
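The DODEM entry above describes a two-step response: flag a given sample with a novelty detector, then rely on an adversarially retrained model when an attack is suspected, while regular samples are served by the standard model. The sketch below is one minimal, assumed reading of that flow; the IsolationForest detector, the FGSM-based adversarial retraining, and the synthetic data are stand-ins for that paper's components, not its actual method.

```python
# Assumed illustration of a detect-then-respond defense: IsolationForest stands
# in for the "novelty detection algorithms", and FGSM-augmented retraining
# stands in for "adversarial retraining". Not the DODEM paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.ensemble import IsolationForest

torch.manual_seed(0)
d, n_classes = 64, 2
X_tr = torch.randn(800, d)
y_tr = (X_tr[:, :8].sum(dim=1) > 0).long()   # synthetic labels

def make_model():
    return nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, n_classes))

def train(model, X, y, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(X), y).backward()
        opt.step()
    return model

def fgsm(model, x, y, eps=0.1):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Standard training, plus a second model retrained on adversarial examples.
standard = train(make_model(), X_tr, y_tr)
X_adv_tr = fgsm(standard, X_tr, y_tr)
robust = train(make_model(), torch.cat([X_tr, X_adv_tr]), torch.cat([y_tr, y_tr]))

# Novelty detector fitted on clean training data; -1 marks suspicious samples.
detector = IsolationForest(contamination=0.05, random_state=0).fit(X_tr.numpy())

def predict(x):
    flagged = torch.from_numpy(detector.predict(x.numpy()) == -1)
    out = standard(x).argmax(dim=1)          # regular samples: standard model
    if flagged.any():
        out[flagged] = robust(x[flagged]).argmax(dim=1)   # flagged: robust model
    return out

# Usage: a mixed batch of clean and attacked test samples.
X_te = torch.randn(10, d)
y_te = (X_te[:, :8].sum(dim=1) > 0).long()
print(predict(torch.cat([X_te, fgsm(standard, X_te, y_te)])))
```

In a deployment, this routing decision would sit in front of the I-IoT analytics model; here it is only meant to make the entry's two-step description concrete.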
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.