Untargeted, Targeted and Universal Adversarial Attacks and Defenses on
Time Series
- URL: http://arxiv.org/abs/2101.05639v1
- Date: Wed, 13 Jan 2021 13:00:51 GMT
- Title: Untargeted, Targeted and Universal Adversarial Attacks and Defenses on
Time Series
- Authors: Pradeep Rathore, Arghya Basak, Sri Harsha Nistala, Venkataramana
Runkana
- Abstract summary: We have performed untargeted, targeted and universal adversarial attacks on UCR time series datasets.
Our results show that deep learning based time series classification models are vulnerable to these attacks.
We also show that universal adversarial attacks generalize well, as they need only a fraction of the training data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning based models are vulnerable to adversarial attacks. These
attacks can be much more harmful in the case of targeted attacks, where an
attacker tries not only to fool the deep learning model, but also to misguide
it into predicting a specific class. Such targeted and untargeted attacks are
tailored to an individual sample and require the addition of imperceptible
noise to that sample. In contrast, a universal adversarial attack computes a
single imperceptible noise that can be added to any sample of the given
dataset so that the deep learning model is forced to predict a wrong class. To
the best of our knowledge, such targeted and universal attacks on time series
data have not been studied in any previous work. In this work, we have
performed untargeted, targeted and universal adversarial attacks on UCR time
series datasets. Our results show that deep learning based time series
classification models are vulnerable to these attacks. We also show that
universal adversarial attacks generalize well, as they require only a fraction
of the training data. We have also performed adversarial training as a
defense. Our results show that models trained adversarially using the Fast
Gradient Sign Method (FGSM), a single-step attack, are able to defend against
FGSM as well as the Basic Iterative Method (BIM), a popular iterative attack.
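To make these attacks and the defense concrete, below is a minimal PyTorch-style sketch. The paper does not provide code, so the model interface, the epsilon and step-size values, and the crude gradient-ascent universal attack shown here are illustrative assumptions rather than the authors' exact procedures.

```python
# Illustrative sketch only: assumes a PyTorch classifier over time series
# shaped (batch, length); epsilons and step sizes are placeholders.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps, targeted=False, target=None):
    """Single-step FGSM. Untargeted: ascend the loss of the true label.
    Targeted: descend the loss of the chosen target label."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target if targeted else y)
    loss.backward()
    step = -eps if targeted else eps
    return (x_adv + step * x_adv.grad.sign()).detach()

def bim(model, x, y, eps, alpha, steps, targeted=False, target=None):
    """Basic Iterative Method: repeated small FGSM steps, projected back
    into an eps-ball around the original series after every step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = fgsm(model, x_adv, y, alpha, targeted, target)
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
    return x_adv

def universal_perturbation(model, loader, eps, alpha, epochs=1):
    """Crude universal attack: learn one noise vector, clipped to +/- eps,
    that raises the true-class loss when added to any sample."""
    delta = None
    for _ in range(epochs):
        for x, y in loader:
            if delta is None:
                delta = torch.zeros_like(x[:1])          # broadcasts over the batch
            d = delta.clone().detach().requires_grad_(True)
            F.cross_entropy(model(x + d), y).backward()
            delta = torch.clamp(delta + alpha * d.grad.sign(), -eps, eps)
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y, eps):
    """FGSM-based adversarial training: one optimizer step on a mix of
    clean and single-step adversarial examples."""
    x_adv = fgsm(model, x, y, eps)                       # craft before zeroing grads
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, the perturbation from universal_perturbation can be fit on a small subset of the training loader and then added to unseen samples, which corresponds to the generalization property noted above; the defense result corresponds to training with adversarial_training_step and then evaluating both fgsm and bim against the hardened model.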
Related papers
- A Review of Adversarial Attacks in Computer Vision [16.619382559756087]
Adversarial attacks can be invisible to the human eye, but can cause deep learning models to misclassify.
Adversarial attacks can be divided into white-box attacks, in which the attacker knows the parameters and gradients of the model, and black-box attacks, in which the attacker can only obtain the inputs and outputs of the model.
arXiv Detail & Related papers (2023-08-15T09:43:10Z) - Transferable Attack for Semantic Segmentation [59.17710830038692]
We analyze adversarial attacks and observe that the adversarial examples generated from a source model fail to attack the target models.
We propose an ensemble attack for semantic segmentation to achieve more effective attacks with higher transferability.
arXiv Detail & Related papers (2023-07-31T11:05:55Z) - Targeted Attacks on Timeseries Forecasting [0.6719751155411076]
We propose a novel formulation of Directional, Amplitudinal, and Temporal targeted adversarial attacks on time series forecasting models.
These targeted attacks create a specific impact on the amplitude and direction of the output prediction.
Our experimental results show how targeted attacks on time series models are viable and are more powerful in terms of statistical similarity.
arXiv Detail & Related papers (2023-01-27T06:09:42Z) - Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks for object detection include both targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z) - Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z) - Manipulating SGD with Data Ordering Attacks [23.639512087220137]
We present a class of training-time attacks that require no changes to the underlying dataset or model architecture.
In particular, an attacker can disrupt the integrity and availability of a model by simply reordering training batches.
Attacks have a long-term impact in that they decrease model performance hundreds of epochs after the attack took place.
arXiv Detail & Related papers (2021-04-19T22:17:27Z) - Lagrangian Objective Function Leads to Improved Unforeseen Attack
Generalization in Adversarial Training [0.0]
Adversarial training (AT) has been shown to be effective at producing models robust to the attack used during training.
We propose a simple modification to AT that mitigates its poor generalization to unseen attacks.
We show that our attack is faster than other attack schemes that are designed for unseen attack generalization.
arXiv Detail & Related papers (2021-03-29T07:23:46Z) - Learning to Attack: Towards Textual Adversarial Attacking in Real-world
Situations [81.82518920087175]
Adversarial attacks aim to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z) - Adversarial examples are useful too! [47.64219291655723]
I propose a new method to tell whether a model has been subject to a backdoor attack.
The idea is to generate adversarial examples, targeted or untargeted, using conventional attacks such as FGSM.
It is possible to visually locate the perturbed regions and unveil the attack.
arXiv Detail & Related papers (2020-05-13T01:38:56Z) - Adversarial Imitation Attack [63.76805962712481]
A practical adversarial attack should require as little knowledge of the attacked model as possible.
Current substitute attacks need pre-trained models to generate adversarial examples.
In this study, we propose a novel adversarial imitation attack.
arXiv Detail & Related papers (2020-03-28T10:02:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.