RobustPdM: Designing Robust Predictive Maintenance against Adversarial
Attacks
- URL: http://arxiv.org/abs/2301.10822v2
- Date: Thu, 10 Aug 2023 17:34:58 GMT
- Title: RobustPdM: Designing Robust Predictive Maintenance against Adversarial
Attacks
- Authors: Ayesha Siddique, Ripan Kumar Kundu, Gautam Raj Mode, Khaza Anuarul
Hoque
- Abstract summary: We show that adversarial attacks can severely degrade remaining useful life (RUL) prediction (by up to 11X), exceeding the effectiveness of state-of-the-art PdM attacks by 3X.
We also present a novel approximate adversarial training method to defend against adversarial attacks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The state-of-the-art predictive maintenance (PdM) techniques have shown great
success in reducing maintenance costs and downtime of complicated machines
while increasing overall productivity through extensive utilization of
Internet-of-Things (IoT) and Deep Learning (DL). Unfortunately, IoT sensors and
DL algorithms are both prone to cyber-attacks. For instance, DL algorithms are
known for their susceptibility to adversarial examples. Such adversarial
attacks are vastly under-explored in the PdM domain. This is because the
adversarial attacks in the computer vision domain for classification tasks
cannot be directly applied to the PdM domain for multivariate time series (MTS)
regression tasks. In this work, we propose an end-to-end methodology to design
adversarially robust PdM systems by extensively analyzing the effect of
different types of adversarial attacks and proposing a novel adversarial
defense technique for DL-enabled PdM models. First, we propose novel MTS
Projected Gradient Descent (PGD) and MTS PGD with random restarts (PGD_r)
attacks. Then, we evaluate the impact of MTS PGD and PGD_r along with MTS Fast
Gradient Sign Method (FGSM) and MTS Basic Iterative Method (BIM) on Long
Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Neural
Network (CNN), and Bi-directional LSTM based PdM system. Our results using
NASA's turbofan engine dataset show that adversarial attacks can severely
degrade RUL prediction (by up to 11X), exceeding the effectiveness of the
state-of-the-art PdM attacks by 3X. Furthermore, we present a novel
approximate adversarial training method to defend against adversarial attacks.
We observe that approximate adversarial training can significantly improve the
robustness of PdM models (up to 54X) and outperform the state-of-the-art PdM
defense methods by offering 3X more robustness.
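As a concrete illustration of the attack family described above, the following is a minimal sketch (not the authors' released code) of an MTS PGD attack on a DL-based RUL regressor. It assumes a PyTorch model mapping sensor windows of shape (batch, time_steps, n_sensors) to RUL estimates; the epsilon, step size, and iteration counts are illustrative placeholders rather than the paper's settings, and setting random_start=True gives the PGD_r variant.

```python
# Minimal illustrative sketch of an MTS PGD attack on an RUL regressor;
# tensor shapes and hyperparameters are assumptions, not the paper's settings.
import torch
import torch.nn as nn

def mts_pgd_attack(model, x, y, eps=0.1, alpha=0.01, n_iter=40, random_start=False):
    """Craft an L-infinity-bounded adversarial sensor window that maximizes MSE loss.

    model        : nn.Module mapping (batch, time_steps, n_sensors) -> (batch, 1) RUL
    x, y         : clean windows and their true RUL values
    eps          : L-infinity perturbation budget per (normalized) sensor reading
    alpha        : step size per iteration
    random_start : False -> MTS PGD; True -> MTS PGD_r (random restart in the eps-ball).
                   n_iter=1 with alpha=eps reduces to a single-step FGSM-style attack.
    """
    x = x.detach()
    loss_fn = nn.MSELoss()
    x_adv = x.clone()
    if random_start:
        x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)

    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)                # maximize regression error
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # signed gradient ascent step
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project back into the eps-ball
    return x_adv.detach()
```

A defense in the spirit of adversarial training would feed the crafted windows back into the training loss (e.g., minimizing MSE on mts_pgd_attack outputs each batch); the paper's approximate adversarial training variant is not reproduced in this sketch.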
Related papers
- Robust Image Classification: Defensive Strategies against FGSM and PGD Adversarial Attacks [0.0]
Adversarial attacks pose significant threats to the robustness of deep learning models in image classification.
This paper explores and refines defense mechanisms against these attacks to enhance the resilience of neural networks.
arXiv Detail & Related papers (2024-08-20T02:00:02Z)
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain both high adversarial robustness against potential attacks and reliable uncertainty quantification in decision-making.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks.
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL faces vulnerabilities such as poisoning attacks, which undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Towards Adversarial Realism and Robust Learning for IoT Intrusion Detection and Classification [0.0]
The Internet of Things (IoT) faces tremendous security challenges.
The increasing threat posed by adversarial attacks restates the need for reliable defense strategies.
This work describes the types of constraints required for an adversarial cyber-attack example to be realistic.
arXiv Detail & Related papers (2023-01-30T18:00:28Z)
- DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards Secure Industrial Internet of Things Analytics [8.697883716452385]
We propose a double defense mechanism to detect and mitigate adversarial attacks in I-IoT environments.
We first detect if there is an adversarial attack on a given sample using novelty detection algorithms.
If there is an attack, adversarial retraining provides a more robust model, while we apply standard training for regular samples.
arXiv Detail & Related papers (2023-01-23T22:10:40Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Crafting Adversarial Examples for Deep Learning Based Prognostics (Extended Version) [0.0]
State-of-the-art Prognostics and Health Management (PHM) systems incorporate Deep Learning (DL) algorithms and Internet of Things (IoT) devices.
In this paper, we adopt the adversarial example crafting techniques from the computer vision domain and apply them to the PHM domain.
We evaluate the impact of adversarial attacks using NASA's turbofan engine dataset.
arXiv Detail & Related papers (2020-09-21T19:43:38Z)