On Adversarial Vulnerability of PHM algorithms: An Initial Study
- URL: http://arxiv.org/abs/2110.07462v1
- Date: Thu, 14 Oct 2021 15:35:41 GMT
- Title: On Adversarial Vulnerability of PHM algorithms: An Initial Study
- Authors: Weizhong Yan, Zhaoyuan Yang, Jianwei Qiu
- Abstract summary: We investigate strategies for attacking PHM algorithms by considering several unique characteristics of time-series sensor measurement data.
We use two real-world PHM applications as examples to validate our attack strategies and to demonstrate that PHM algorithms are indeed vulnerable to adversarial attacks.
- Score: 2.2559617939136505
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the proliferation of deep learning (DL) applications in diverse domains, the vulnerability of DL models to adversarial attacks has become an increasingly interesting research topic in the domains of Computer Vision (CV) and Natural Language Processing (NLP). DL has also been widely adopted in diverse Prognostics and Health Management (PHM) applications, where the data are primarily time-series sensor measurements. While these advanced DL algorithms/models have improved the performance of PHM algorithms, the vulnerability of those PHM algorithms to adversarial attacks has not drawn much attention in the PHM community. In this paper we explore the vulnerability of PHM algorithms. More specifically, we investigate strategies for attacking PHM algorithms by considering several unique characteristics of time-series sensor measurement data. We use two real-world PHM applications as examples to validate our attack strategies and to demonstrate that PHM algorithms are indeed vulnerable to adversarial attacks.
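To make the attack setting concrete, the sketch below shows a minimal FGSM-style perturbation of a time-series input to a regression model in PyTorch. This is an illustrative example only, not the attack strategy proposed in the paper; the model architecture, window length, sensor count, and epsilon are assumptions chosen for the sketch.

```python
# Illustrative sketch only: an FGSM-style perturbation of a time-series input.
# Model, window length, and epsilon are assumptions for the example,
# not the attack strategy described in the paper.
import torch
import torch.nn as nn

class TinyRULRegressor(nn.Module):
    """A small GRU regressor standing in for a PHM model (hypothetical)."""
    def __init__(self, n_sensors: int = 14, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, time, n_sensors)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])  # predict remaining useful life

def fgsm_attack(model, x, y, epsilon=0.05):
    """Craft an additive perturbation bounded by epsilon (L-infinity)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the regression loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    model = TinyRULRegressor()
    x = torch.randn(8, 50, 14)           # 8 windows, 50 time steps, 14 sensors
    y = torch.rand(8, 1) * 100.0         # synthetic RUL targets
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

In practice a perturbation would also have to respect the characteristics of sensor data that the paper highlights (e.g., physical ranges and temporal smoothness), which a plain FGSM step does not enforce.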
Related papers
- Enhancing robustness of data-driven SHM models: adversarial training with circle loss [4.619717316983647]
Structural health monitoring (SHM) is critical to safeguarding the safety and reliability of aerospace, civil, and mechanical infrastructure.
Machine learning-based data-driven approaches have gained popularity in SHM due to advancements in sensors and computational power.
In this paper, we propose an adversarial training method for defense, which uses circle loss to optimize the distance between features in training to keep examples away from the decision boundary. (See the generic adversarial-training sketch after this list.)
arXiv Detail & Related papers (2024-06-20T11:55:39Z)
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
- Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks [72.03945355787776]
We advocate MDP, a lightweight, pluggable, and effective defense for PLMs as few-shot learners.
We show analytically that MDP creates an interesting dilemma for the attacker to choose between attack effectiveness and detection evasiveness.
arXiv Detail & Related papers (2023-09-23T04:41:55Z)
- RobustPdM: Designing Robust Predictive Maintenance against Adversarial Attacks [0.0]
We show that adversarial attacks can cause a severe defect (up to 11X) in the Remaining Useful Life (RUL) prediction, outperforming the effectiveness of the state-of-the-art PdM attacks by 3X.
We also present a novel approximate adversarial training method to defend against adversarial attacks.
arXiv Detail & Related papers (2023-01-25T20:49:12Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- A Compact Deep Learning Model for Face Spoofing Detection [4.250231861415827]
Presentation attack detection (PAD) has received significant attention from research communities.
We address the problem by fusing both wide and deep features in a unified neural architecture.
The procedure is evaluated on different spoofing datasets such as ROSE-Youtu, SiW, and NUAA Imposter.
arXiv Detail & Related papers (2021-01-12T21:20:09Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Adversarial Examples in Deep Learning for Multivariate Time Series Regression [0.0]
This work explores the vulnerability of deep learning (DL) regression models to adversarial time series examples.
We craft adversarial time series examples for CNN, Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) models.
The obtained results show that all the evaluated DL regression models are vulnerable to adversarial attacks, that the attacks are transferable, and that they can thus lead to catastrophic consequences.
arXiv Detail & Related papers (2020-09-24T19:09:37Z)
- Crafting Adversarial Examples for Deep Learning Based Prognostics (Extended Version) [0.0]
State-of-the-art Prognostics and Health Management (PHM) systems incorporate Deep Learning (DL) algorithms and Internet of Things (IoT) devices.
In this paper, we adopt adversarial example crafting techniques from the computer vision domain and apply them to the PHM domain.
We evaluate the impact of adversarial attacks using NASA's turbofan engine dataset.
arXiv Detail & Related papers (2020-09-21T19:43:38Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
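Several of the related papers above (the circle-loss SHM work, RobustPdM, and the jet-tagging study) defend models through adversarial training. For reference, here is a minimal, generic adversarial-training loop in PyTorch. It is a sketch under assumed model, data shapes, and hyper-parameters, and it is not the specific method of any listed paper; in particular, circle loss and approximate adversarial training are not implemented here.

```python
# Generic adversarial-training sketch (not the circle-loss or approximate
# adversarial-training methods of the papers above). The model, synthetic
# data, and hyper-parameters are assumptions made for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(                    # stand-in classifier on flattened windows
    nn.Flatten(), nn.Linear(50 * 14, 64), nn.ReLU(), nn.Linear(64, 2)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, epsilon=0.05):
    """One-step L-infinity perturbation that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

for step in range(100):                   # toy loop on synthetic data
    x = torch.randn(32, 50, 14)           # 32 windows, 50 time steps, 14 sensors
    y = torch.randint(0, 2, (32,))        # healthy / faulty labels
    x_adv = fgsm(x, y)
    opt.zero_grad()
    # Train on both clean and adversarial examples so the decision boundary
    # moves away from easily perturbable inputs.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()
```

The circle-loss method described in the first related paper additionally shapes distances between learned features to push examples away from the decision boundary; that refinement is omitted from this sketch.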