Universal Adversarial Attack on Deep Learning Based Prognostics
- URL: http://arxiv.org/abs/2109.07142v1
- Date: Wed, 15 Sep 2021 08:05:16 GMT
- Title: Universal Adversarial Attack on Deep Learning Based Prognostics
- Authors: Arghya Basak, Pradeep Rathore, Sri Harsha Nistala, Sagar Srinivas,
Venkataramana Runkana
- Abstract summary: We present the concept of a universal adversarial perturbation, a special imperceptible noise that fools regression-based RUL prediction models.
We show that adding the universal adversarial perturbation to any instance of the input data increases the error in the model's predicted output.
We further vary the strength of the perturbation and find that model accuracy decreases as the perturbation strength increases.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning-based time series models are extensively used in
engineering and manufacturing industries for process control and optimization,
asset monitoring, and diagnostic and predictive maintenance. These models have
substantially improved the prediction of the remaining useful life (RUL) of
industrial equipment but are inherently vulnerable to adversarial attacks.
These attacks can be easily exploited and can lead to catastrophic failure of
critical industrial equipment. In general, a different adversarial perturbation
is computed for each instance of the input data. This is, however, difficult
for an attacker to achieve in real time because of the higher computational
requirements and the lack of uninterrupted access to the input data. Hence, we
present the concept of a universal adversarial perturbation, a special
imperceptible noise that fools regression-based RUL prediction models.
Attackers can easily deploy universal adversarial perturbations in real time,
since neither continuous access to the input data nor repeated computation of
adversarial perturbations is required. We evaluate the effect of universal
adversarial attacks on the NASA turbofan engine dataset. We show that adding
the universal adversarial perturbation to any instance of the input data
increases the error in the model's predicted output. To the best of our
knowledge, we are the first to study the effect of universal adversarial
perturbations on time series regression models. We further vary the strength
of the perturbation and find that model accuracy decreases as the strength of
the universal adversarial attack increases. We also show that universal
adversarial perturbations transfer across different models.
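The attack the abstract describes can be sketched on a toy regression model: a single perturbation vector is fit offline over many training instances and then added unchanged to every test input. Everything below (the two-layer network standing in for a trained RUL model, the step size, and the sign-gradient accumulation rule) is a hypothetical stand-in; the paper's exact algorithm may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 10

# Toy two-layer "RUL" regressor (stand-in for a trained deep model).
W1 = rng.normal(size=(n_features, 16)) / np.sqrt(n_features)
w2 = rng.normal(size=16) / 4.0

def predict(x):
    # Works for a single sample (n_features,) or a batch (n, n_features).
    return np.maximum(x @ W1, 0.0) @ w2

def grad_loss_wrt_input(x, y_true):
    # Gradient of the squared error w.r.t. the input, via the chain rule
    # through the ReLU: d/dx (f(x) - y)^2 = 2 (f(x) - y) * W1 @ (mask * w2).
    mask = (x @ W1 > 0).astype(float)
    return 2.0 * (predict(x) - y_true) * (W1 @ (mask * w2))

# Data the attacker can sample once, offline; clean targets for reference.
X = rng.normal(size=(200, n_features))
y = predict(X)

eps = 0.1                                      # L_inf budget of the UAP
delta = rng.uniform(-0.01, 0.01, n_features)   # small nonzero start

# Accumulate sign-gradient ascent steps over many instances, projecting back
# onto the L_inf ball after each update (a hypothetical universal-FGSM-style
# loop, not necessarily the paper's procedure).
for _ in range(5):
    for x_i, y_i in zip(X, y):
        g = grad_loss_wrt_input(x_i + delta, y_i)
        delta = np.clip(delta + 0.01 * np.sign(g), -eps, eps)

# At attack time the SAME fixed delta is added to every input.
clean_err = np.mean((predict(X) - y) ** 2)
adv_err = np.mean((predict(X + delta) - y) ** 2)
print(clean_err, adv_err)  # adv_err should exceed clean_err
```

The key property mirrored here is the one the abstract emphasizes: once `delta` is computed, the attacker needs neither continuous access to the input stream nor any per-instance optimization.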
Related papers
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
We propose MESAS, the first defense robust against strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may stem from biases in data acquisition rather than the underlying task.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - Consistent Valid Physically-Realizable Adversarial Attack against
Crowd-flow Prediction Models [4.286570387250455]
Deep learning (DL) models can effectively learn city-wide crowd-flow patterns.
DL models, however, are known to perform poorly under inconspicuous adversarial perturbations.
arXiv Detail & Related papers (2023-03-05T13:30:25Z) - Targeted Attacks on Timeseries Forecasting [0.6719751155411076]
We propose a novel formulation of Directional, Amplitudinal, and Temporal targeted adversarial attacks on time series forecasting models.
These targeted attacks create a specific impact on the amplitude and direction of the output prediction.
Our experimental results show that targeted attacks on time series models are viable and are more powerful in terms of statistical similarity to the clean data.
arXiv Detail & Related papers (2023-01-27T06:09:42Z) - Physical Passive Patch Adversarial Attacks on Visual Odometry Systems [6.391337032993737]
We study patch adversarial attacks on visual odometry-based autonomous navigation systems.
We show for the first time that the error margin of a visual odometry model can be significantly increased by deploying patch adversarial attacks in the scene.
arXiv Detail & Related papers (2022-07-11T14:41:06Z) - Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on the widely-used dataset demonstrate the effectiveness of our attack method with a 12.85% higher success rate of transfer attack compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z) - Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd
Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - Jacobian Regularization for Mitigating Universal Adversarial
Perturbations [2.9465623430708905]
Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data.
We derive upper bounds for the effectiveness of UAPs based on norms of data-dependent Jacobians.
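The flavor of such a bound can be sketched with a first-order Taylor argument (our reconstruction under standard smoothness assumptions, not necessarily the paper's exact statement): for a model $f$ and a perturbation $v$,

```latex
\|f(x + v) - f(x)\| \;\approx\; \|J_f(x)\, v\| \;\le\; \|J_f(x)\|_{\mathrm{op}} \,\|v\|,
```

so penalizing the norm of the data-dependent Jacobian $J_f(x)$ during training shrinks the worst-case output shift that any fixed perturbation, universal ones included, can induce.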
arXiv Detail & Related papers (2021-04-21T11:00:21Z) - More Data Can Expand the Generalization Gap Between Adversarially Robust
and Standard Models [37.85664398110855]
Modern machine learning models are susceptible to adversarial attacks that make human-imperceptible changes to the data but cause serious and potentially dangerous prediction errors.
To address this issue, practitioners often use adversarial training to learn models that are robust against such attacks at the cost of higher generalization error on unperturbed test sets.
We study the training of robust classifiers for both Gaussian and Bernoulli models under $\ell_\infty$ attacks, and we prove that more data may actually increase this gap.
arXiv Detail & Related papers (2020-02-11T23:01:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.