Universal Adversarial Attack on Deep Learning Based Prognostics
- URL: http://arxiv.org/abs/2109.07142v1
- Date: Wed, 15 Sep 2021 08:05:16 GMT
- Title: Universal Adversarial Attack on Deep Learning Based Prognostics
- Authors: Arghya Basak, Pradeep Rathore, Sri Harsha Nistala, Sagar Srinivas,
Venkataramana Runkana
- Abstract summary: We present the concept of universal adversarial perturbation, a special imperceptible noise to fool regression-based RUL prediction models.
We show that adding the universal adversarial perturbation to any instance of the input data increases the error in the model's predicted output.
We further demonstrate the effect of varying the perturbation strength on RUL prediction models and find that model accuracy decreases as the perturbation strength increases.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning-based time series models are extensively used in
engineering and manufacturing industries for process control and optimization,
asset monitoring, and diagnostic and predictive maintenance. These models have
greatly improved the prediction of the remaining useful life (RUL) of
industrial equipment, but they are inherently vulnerable to adversarial
attacks. These vulnerabilities are easy to exploit and can lead to catastrophic
failure of critical industrial equipment. In general, a different adversarial
perturbation is computed for each instance of the input data. This is,
however, difficult for an attacker to achieve in real time because of the
higher computational requirements and the lack of uninterrupted access to the
input data. Hence, we present the concept of a universal adversarial
perturbation, a special imperceptible noise that fools regression-based RUL
prediction models. Attackers can easily use universal adversarial perturbations
for real-time attacks, since continuous access to the input data and repeated
computation of adversarial perturbations are not prerequisites. We evaluate the
effect of universal adversarial attacks using the NASA turbofan engine dataset.
We show that adding the universal adversarial perturbation to any instance of
the input data increases the error in the model's predicted output. To the best
of our knowledge, we are the first to study the effect of universal adversarial
perturbations on time series regression models. We further demonstrate the
effect of varying the perturbation strength on RUL prediction models and find
that model accuracy decreases as the strength of the universal adversarial
perturbation increases. We also show that universal adversarial perturbations
can be transferred across different models.
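The core idea, a single input-agnostic perturbation that degrades a regression model's RUL predictions, can be illustrated with a short sketch. The following is a minimal PyTorch example and not the authors' implementation: it learns one shared perturbation by gradient ascent on the model's MSE loss over a training set, clipped to an L-infinity budget so the noise stays small. The model, data loader, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): crafting a universal adversarial
# perturbation (UAP) for a time-series RUL regression model in PyTorch.
import torch

def craft_universal_perturbation(model, data_loader, epsilon=0.1, alpha=0.01, epochs=10):
    """Learn a single perturbation delta, bounded by epsilon in L-infinity norm,
    that increases the regression error when added to any input window."""
    model.eval()
    # One perturbation shared by every instance, shaped like a single input window.
    sample_x, _ = next(iter(data_loader))
    delta = torch.zeros_like(sample_x[0], requires_grad=True)

    for _ in range(epochs):
        for x, y in data_loader:
            pred = model(x + delta)                    # same delta added to every instance
            loss = torch.nn.functional.mse_loss(pred, y)
            loss.backward()
            with torch.no_grad():
                # Gradient *ascent* on the loss: push predictions away from the true RUL.
                delta += alpha * delta.grad.sign()
                delta.clamp_(-epsilon, epsilon)        # keep the noise imperceptible
            delta.grad.zero_()
    return delta.detach()
```

Once computed offline, the returned delta can be added to any incoming test window without further computation, which is what makes the attack practical in real time; applying the same delta to a second RUL model would give a rough check of the cross-model transferability the paper reports.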
Related papers
- Tackling Data Heterogeneity in Federated Time Series Forecasting [61.021413959988216]
Time series forecasting plays a critical role in various real-world applications, including energy consumption prediction, disease transmission monitoring, and weather forecasting.
Most existing methods rely on a centralized training paradigm, where large amounts of data are collected from distributed devices to a central cloud server.
We propose a novel framework, Fed-TREND, to address data heterogeneity by generating informative synthetic data as auxiliary knowledge carriers.
arXiv Detail & Related papers (2024-11-24T04:56:45Z)
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the segment anything model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Adversarial Attacks and Defenses in Multivariate Time-Series Forecasting for Smart and Connected Infrastructures [0.9217021281095907]
We investigate the impact of adversarial attacks on time-series forecasting.
We employ untargeted white-box attacks to poison the inputs to the training process, effectively misleading the model.
Having demonstrated the feasibility of these attacks, we develop robust models through adversarial training and model hardening.
arXiv Detail & Related papers (2024-08-27T08:44:31Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed MESAS defense is the first to remain robust against strong adaptive adversaries, is effective in real-world data scenarios, and incurs an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Consistent Valid Physically-Realizable Adversarial Attack against Crowd-flow Prediction Models [4.286570387250455]
Deep learning (DL) models can effectively learn city-wide crowd-flow patterns.
However, DL models are known to perform poorly under inconspicuous adversarial perturbations.
arXiv Detail & Related papers (2023-03-05T13:30:25Z)
- Targeted Attacks on Timeseries Forecasting [0.6719751155411076]
We propose a novel formulation of Directional, Amplitudinal, and Temporal targeted adversarial attacks on time series forecasting models.
These targeted attacks create a specific impact on the amplitude and direction of the output prediction.
Our experimental results show how targeted attacks on time series models are viable and are more powerful in terms of statistical similarity.
arXiv Detail & Related papers (2023-01-27T06:09:42Z)
- Physical Passive Patch Adversarial Attacks on Visual Odometry Systems [6.391337032993737]
We study patch adversarial attacks on visual odometry-based autonomous navigation systems.
We show for the first time that the error margin of a visual odometry model can be significantly increased by deploying patch adversarial attacks in the scene.
arXiv Detail & Related papers (2022-07-11T14:41:06Z)
- Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on the widely-used dataset demonstrate the effectiveness of our attack method with a 12.85% higher success rate of transfer attack compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models [37.85664398110855]
Modern machine learning models are susceptible to adversarial attacks that make human-imperceptible changes to the data but result in serious and potentially dangerous prediction errors.
To address this issue, practitioners often use adversarial training to learn models that are robust against such attacks at the cost of higher generalization error on unperturbed test sets.
We study the training of robust classifiers for both Gaussian and Bernoulli models under $\ell_\infty$ attacks, and we prove that more data may actually increase this gap.
arXiv Detail & Related papers (2020-02-11T23:01:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.