Benchmarking adversarial attacks and defenses for time-series data
- URL: http://arxiv.org/abs/2008.13261v1
- Date: Sun, 30 Aug 2020 20:03:35 GMT
- Title: Benchmarking adversarial attacks and defenses for time-series data
- Authors: Shoaib Ahmed Siddiqui, Andreas Dengel, Sheraz Ahmed
- Abstract summary: We perform detailed benchmarking of well-proven adversarial defense methodologies on time-series data.
Our analysis shows that the explored adversarial defenses offer robustness against both strong white-box and black-box attacks.
- Score: 5.8154704910587665
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The adversarial vulnerability of deep networks has spurred the interest of
researchers worldwide. Unsurprisingly, adversarial examples translate from
images to time-series data, since they exploit an inherent weakness of the
model itself rather than of the input modality. Several attempts have been made to defend
against these adversarial attacks, particularly for the visual modality. In
this paper, we perform detailed benchmarking of well-proven adversarial defense
methodologies on time-series data. We restrict ourselves to the $L_{\infty}$
threat model. We also explore the smoothness versus clean-accuracy trade-off
of regularization-based defenses to better understand what they offer. Our
analysis shows that the explored adversarial defenses offer robustness against
both strong white-box and black-box attacks.
This paves the way for future research in the direction of adversarial attacks
and defenses, particularly for time-series data.
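To make the $L_{\infty}$ threat model concrete: the attacker may shift every time step of the series by at most $\epsilon$. Below is a minimal PGD sketch in PyTorch under this constraint; the function name, classifier interface, and hyperparameters are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.1, alpha=0.01, steps=40):
    """Hypothetical PGD attack under the L-infinity threat model.

    Each element of the perturbation is kept inside [-eps, eps], so every
    time step moves at most eps away from its clean value.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            # Ascend the loss along the gradient sign, then project back
            # onto the L-infinity ball of radius eps.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()
```

A white-box attacker runs this with full gradient access; black-box variants replace the gradient with estimates obtained from queries or a surrogate model.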
Related papers
- On the Difficulty of Defending Contrastive Learning against Backdoor Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z)
- Adversarial Attacks Neutralization via Data Set Randomization [3.655021726150369]
Adversarial attacks on deep learning models pose a serious threat to their reliability and security.
We propose a new defense mechanism that is rooted in hyperspace projection.
We show that our solution increases the robustness of deep learning models against adversarial attacks.
arXiv Detail & Related papers (2023-06-21T10:17:55Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL is vulnerable to poisoning attacks that undermine model integrity through both untargeted performance degradation and targeted backdoors.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our defense, MESAS, is the first that is robust against strong adaptive adversaries and effective on real-world data, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose BASAR, the first black-box adversarial attack on skeleton-based HAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z)
- Adversarial Vulnerability of Temporal Feature Networks for Object Detection [5.525433572437716]
We study whether temporal feature networks for object detection are vulnerable to universal adversarial attacks.
We evaluate attacks of two types: imperceptible noise for the whole image and locally-bound adversarial patch.
Our experiments on the KITTI and nuScenes datasets demonstrate that a model robustified via K-PGD is able to withstand the studied attacks.
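Assuming K-PGD denotes Madry-style adversarial training with a K-step PGD inner loop, one training step could look as follows. This reuses the hypothetical pgd_linf helper sketched after the abstract above; all names and hyperparameters are assumptions, not the authors' implementation.

```python
import torch.nn.functional as F

def k_pgd_training_step(model, optimizer, x, y, eps=0.03, alpha=0.007, k=7):
    """One step of (assumed) K-PGD adversarial training: craft a K-step
    PGD example for the clean batch, then update the model on the
    adversarial batch only."""
    model.eval()  # keep batch-norm statistics fixed while attacking
    x_adv = pgd_linf(model, x, y, eps=eps, alpha=alpha, steps=k)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```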
arXiv Detail & Related papers (2022-08-23T07:08:54Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for automatic speaker verification (ASV) without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
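The exact detection pipeline is not detailed in this summary; the sketch below only illustrates the general purify-then-compare idea, assuming some purification transform (e.g. a self-supervised reconstruction model) is available. The function, threshold, and interface are hypothetical.

```python
import torch

def detect_adversarial(model, purify, x, threshold=0.5):
    """Flag inputs whose predictions shift sharply after purification.

    A large divergence between the scores on the raw and purified input
    suggests an adversarial perturbation; the threshold would be tuned
    on held-out clean data.
    """
    with torch.no_grad():
        p_raw = torch.softmax(model(x), dim=-1)
        p_pur = torch.softmax(model(purify(x)), dim=-1)
        shift = (p_raw - p_pur).abs().sum(dim=-1)  # per-sample L1 distance
    return shift > threshold
```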
arXiv Detail & Related papers (2021-06-01T07:10:54Z)
- Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series [0.0]
We have performed untargeted, targeted and universal adversarial attacks on UCR time series datasets.
Our results show that deep learning based time series classification models are vulnerable to these attacks.
We also show that universal adversarial attacks generalize well, as they need only a fraction of the training data.
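A universal attack optimizes a single shared perturbation over many series at once, which is why a small subset of the training data suffices. The sketch below shows this generic idea, not the paper's specific algorithm; names and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=0.1, alpha=0.005, epochs=5):
    """Craft one L-infinity-bounded perturbation that degrades the model
    on every batch yielded by `loader` (a small data subset)."""
    delta = None
    for _ in range(epochs):
        for x, y in loader:
            if delta is None:
                delta = torch.zeros_like(x[:1])  # one perturbation, broadcast over the batch
            d = delta.clone().requires_grad_(True)
            loss = F.cross_entropy(model(x + d), y)
            loss.backward()
            with torch.no_grad():
                # Untargeted update: increase the loss, then project back
                # onto the L-infinity ball.
                delta = (delta + alpha * d.grad.sign()).clamp(-eps, eps)
    return delta
```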
arXiv Detail & Related papers (2021-01-13T13:00:51Z)
- Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
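Operationally, such an ensemble takes the per-sample worst case: an input counts as robust only if every attack in the pool fails on it. A minimal sketch of that evaluation logic, with the attack callables assumed:

```python
import torch

def ensemble_robust_accuracy(model, attacks, x, y):
    """Robust accuracy under an ensemble of attacks: a sample is robust
    only if it survives every attack (per-sample worst case).
    `attacks` is a list of callables mapping (model, x, y) -> x_adv."""
    robust = torch.ones_like(y, dtype=torch.bool)
    for attack in attacks:
        x_adv = attack(model, x, y)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=-1)
        robust &= pred.eq(y)  # still correct under this attack too
    return robust.float().mean().item()
```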
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.