Crafting Adversarial Examples for Deep Learning Based Prognostics
(Extended Version)
- URL: http://arxiv.org/abs/2009.10149v2
- Date: Mon, 28 Sep 2020 15:26:35 GMT
- Title: Crafting Adversarial Examples for Deep Learning Based Prognostics
(Extended Version)
- Authors: Gautam Raj Mode, Khaza Anuarul Hoque
- Abstract summary: State-of-the-art Prognostics and Health Management (PHM) systems incorporate Deep Learning (DL) algorithms and Internet of Things (IoT) devices.
In this paper, we adopt the adversarial example crafting techniques from the computer vision domain and apply them to the PHM domain.
We evaluate the impact of adversarial attacks using NASA's turbofan engine dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In manufacturing, unexpected failures are considered a primary operational
risk, as they can hinder productivity and incur huge losses. State-of-the-art
Prognostics and Health Management (PHM) systems incorporate Deep Learning (DL)
algorithms and Internet of Things (IoT) devices to ascertain the health status
of equipment, and thus reduce downtime and maintenance costs and increase
productivity. Unfortunately, both IoT sensors and DL algorithms are vulnerable
to cyber attacks, and hence pose a significant threat to PHM systems. In this
paper, we adopt adversarial example crafting techniques from the computer
vision domain and apply them to the PHM domain. Specifically, we craft
adversarial examples using the Fast Gradient Sign Method (FGSM) and the Basic
Iterative Method (BIM) and apply them to Long Short-Term Memory (LSTM), Gated
Recurrent Unit (GRU), and Convolutional Neural Network (CNN) based PHM models.
We evaluate the impact of adversarial attacks using NASA's turbofan engine
dataset. The obtained results show that all the evaluated PHM models are
vulnerable to adversarial attacks, which can cause serious errors in the
remaining useful life (RUL) estimation. The results also show that the crafted
adversarial examples are highly transferable and may cause significant damage
to PHM systems.
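The attack methods named in the abstract (FGSM and BIM) follow the standard formulations from the computer vision literature, adapted to regression-style RUL models. Below is a minimal sketch of how such perturbations could be crafted against a time-series RUL predictor; it assumes PyTorch, an MSE training loss, and a hypothetical rul_model mapping a window of sensor readings to a RUL estimate, and is not the authors' released code.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y_true, epsilon):
    """FGSM: a single gradient-sign step that increases the RUL regression loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y_true)
    loss.backward()
    # Shift every sensor reading by +/- epsilon along the sign of the loss gradient.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def bim_attack(model, x, y_true, epsilon, alpha=0.01, num_iter=10):
    """BIM: iterated FGSM steps of size alpha, projected back into an
    epsilon-ball around the clean input after every step."""
    x_adv = x.clone().detach()
    for _ in range(num_iter):
        x_adv = fgsm_attack(model, x_adv, y_true, alpha)
        # Keep the accumulated perturbation within [-epsilon, +epsilon] of the clean input.
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
    return x_adv

# Hypothetical usage on C-MAPSS-style data:
#   x: (batch, window_len, num_sensors) tensor of normalized sensor windows
#   y_true: matching tensor of RUL targets, rul_model: an LSTM/GRU/CNN regressor
# x_adv = bim_attack(rul_model, x, y_true, epsilon=0.1)
```

In the white-box setting described in the abstract, gradients are taken through the victim model itself; transferability can then be probed by crafting x_adv on one architecture (e.g. the LSTM) and evaluating the resulting RUL error on another (e.g. the GRU or CNN).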
Related papers
- Enhancing robustness of data-driven SHM models: adversarial training with circle loss [4.619717316983647]
Structural health monitoring (SHM) is critical to safeguarding the safety and reliability of aerospace, civil, and mechanical infrastructure.
Machine learning-based data-driven approaches have gained popularity in SHM due to advancements in sensors and computational power.
In this paper, we propose an adversarial training method for defense, which uses circle loss during training to optimize the distance between features and keep examples away from the decision boundary.
arXiv Detail & Related papers (2024-06-20T11:55:39Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks [2.389598109913753]
Adversarial attacks have proven to be a significant threat in domains such as computer vision and speech recognition.
We investigate the Fast Gradient Sign Method (FGSM) to perturb the input sequences fed into two commonly employed CNN-based NILM baselines.
Our findings provide compelling evidence for the vulnerability of these models, particularly the S2P model, which exhibits an average decline of 20% in the F1-score.
arXiv Detail & Related papers (2023-07-14T13:10:01Z)
- RobustPdM: Designing Robust Predictive Maintenance against Adversarial Attacks [0.0]
We show that adversarial attacks can cause a severe defect (up to 11X) in the RUL prediction, exceeding the effectiveness of the state-of-the-art PdM attacks by 3X.
We also present a novel approximate adversarial training method to defend against adversarial attacks (a generic adversarial-training sketch follows this list).
arXiv Detail & Related papers (2023-01-25T20:49:12Z)
- DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards Secure Industrial Internet of Things Analytics [8.697883716452385]
We propose a double defense mechanism to detect and mitigate adversarial attacks in I-IoT environments.
We first detect if there is an adversarial attack on a given sample using novelty detection algorithms.
If there is an attack, adversarial retraining provides a more robust model, while we apply standard training for regular samples.
arXiv Detail & Related papers (2023-01-23T22:10:40Z)
- Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution to defend the network against these malicious attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained between robustness and standard accuracy.
arXiv Detail & Related papers (2023-01-07T12:09:50Z)
- Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- On Adversarial Vulnerability of PHM algorithms: An Initial Study [2.2559617939136505]
We investigate the strategies of attacking PHM algorithms by considering several unique characteristics associated with time-series sensor measurements data.
We use two real-world PHM applications as examples to validate our attack strategies and to demonstrate that PHM algorithms indeed are vulnerable to adversarial attacks.
arXiv Detail & Related papers (2021-10-14T15:35:41Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions in up to 86% of cases.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
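Several of the defenses listed above (circle-loss adversarial training, approximate adversarial training, adversarial retraining after detection) build on the same basic idea of training on perturbed inputs. As a point of reference only, a minimal, generic FGSM-based adversarial training step for a regression model is sketched below; it reuses the hypothetical fgsm_attack helper from the earlier sketch and is not the specific method of any paper in this list.

```python
def adversarial_training_step(model, optimizer, x, y_true, epsilon):
    """One generic adversarial-training step: fit the model on a 50/50 mix of
    clean and FGSM-perturbed sensor windows (illustrative only)."""
    model.train()
    x_adv = fgsm_attack(model, x, y_true, epsilon)  # craft against the current weights
    optimizer.zero_grad()                           # drop gradients left over from the attack
    loss = 0.5 * (nn.functional.mse_loss(model(x), y_true)
                  + nn.functional.mse_loss(model(x_adv), y_true))
    loss.backward()
    optimizer.step()
    return loss.item()
```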
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.