SWAP: Exploiting Second-Ranked Logits for Adversarial Attacks on Time
Series
- URL: http://arxiv.org/abs/2309.02752v1
- Date: Wed, 6 Sep 2023 06:17:35 GMT
- Title: SWAP: Exploiting Second-Ranked Logits for Adversarial Attacks on Time
Series
- Authors: Chang George Dong, Liangwei Nathan Zheng, Weitong Chen, Wei Emma
Zhang, Lin Yue
- Abstract summary: Time series classification (TSC) has emerged as a critical task in various domains.
Deep neural models have shown superior performance in TSC tasks.
TSC models are vulnerable to adversarial attacks.
We propose SWAP, a novel attacking method for TSC models.
- Score: 11.356275885051442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series classification (TSC) has emerged as a critical task in various
domains, and deep neural models have shown superior performance in TSC tasks.
However, these models are vulnerable to adversarial attacks, where subtle
perturbations can significantly impact the prediction results. Existing
adversarial methods often suffer from over-parameterization or random logit
perturbation, hindering their effectiveness. Additionally, increasing the
attack success rate (ASR) typically involves generating more noise, making the
attack more easily detectable. To address these limitations, we propose SWAP, a
novel attacking method for TSC models. SWAP focuses on enhancing the confidence
of the second-ranked logits while minimizing the manipulation of other logits.
This is achieved by minimizing the Kullback-Leibler divergence between the
target logit distribution and the predictive logit distribution. Experimental
results demonstrate that SWAP achieves state-of-the-art performance, with an
ASR exceeding 50% and an 18% increase compared to existing methods.
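The core objective is easy to illustrate. The sketch below is a minimal, hypothetical PyTorch rendering of a SWAP-style attack, assuming `model` is a differentiable TSC classifier returning logits and `x` is a batch of clean time series; it builds a target distribution by swapping the top-1 and top-2 logits and then optimizes an additive perturbation to minimize the KL divergence between that target and the model's predictive distribution. This is not the authors' code; the function names, step counts, and l_infty clamp are illustrative choices only.

```python
import torch
import torch.nn.functional as F

def swap_style_attack(model, x, eps=0.1, steps=100, lr=0.01):
    """Illustrative sketch of a SWAP-style attack (not the authors' implementation).

    Builds a target distribution in which the second-ranked class becomes the
    most confident one, then minimizes the KL divergence between that target
    and the model's predictive distribution w.r.t. an additive perturbation.
    """
    model.eval()
    with torch.no_grad():
        logits = model(x)                      # (batch, num_classes)
        target_logits = logits.clone()
        top2 = logits.topk(2, dim=1).indices   # top-1 and top-2 class indices
        rows = torch.arange(x.size(0))
        # Swap the top-1 and top-2 logits so the second-ranked class dominates,
        # leaving all other logits untouched.
        target_logits[rows, top2[:, 0]] = logits[rows, top2[:, 1]]
        target_logits[rows, top2[:, 1]] = logits[rows, top2[:, 0]]
        target_dist = F.softmax(target_logits, dim=1)

    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        pred_log_dist = F.log_softmax(model(x + delta), dim=1)
        # KL(target || prediction): pushes the prediction toward the swapped target.
        loss = F.kl_div(pred_log_dist, target_dist, reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)            # keep the perturbation subtle
    return (x + delta).detach()
```

Because only the top two logits are exchanged while the rest are left untouched, the perturbation needed to flip the prediction to the runner-up class tends to stay small, consistent with the abstract's emphasis on minimizing manipulation of the other logits.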
Related papers
- Correlation Analysis of Adversarial Attack in Time Series Classification [6.117704456424016]
This study investigates the vulnerability of time series classification models to adversarial attacks.
Regularization techniques and noise introduction are shown to enhance the effectiveness of attacks.
Models designed to prioritize global information are revealed to possess greater resistance to adversarial manipulations.
arXiv Detail & Related papers (2024-08-21T01:11:32Z)
- DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z)
- Enhancing Adversarial Robustness via Score-Based Optimization [22.87882885963586]
Adversarial attacks have the potential to mislead deep neural network classifiers by introducing slight perturbations.
We introduce a novel adversarial defense scheme named ScoreOpt, which optimizes adversarial samples at test time.
Our experimental results demonstrate that our approach outperforms existing adversarial defenses in terms of both robustness performance and inference speed.
arXiv Detail & Related papers (2023-07-10T03:59:42Z)
- A Practical Upper Bound for the Worst-Case Attribution Deviations [21.341303776931532]
Model attribution is a critical component of deep neural networks (DNNs), providing interpretability for complex models.
Recent studies draw attention to the security of attribution methods, which are vulnerable to attribution attacks that generate similar images with dramatically different attributions.
Existing works have investigated empirically improving the robustness of DNNs against those attacks; however, none of them explicitly quantifies the actual deviations of attributions.
In this work, for the first time, a constrained optimization problem is formulated to derive an upper bound that measures the largest dissimilarity of attributions after the samples are perturbed by any noise within a certain region.
arXiv Detail & Related papers (2023-03-01T09:07:27Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard method in adversarial robustness assumes a framework for defending against samples crafted by minimally perturbing a clean sample.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves defense against both invariance and sensitivity attacks.
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- CARBEN: Composite Adversarial Robustness Benchmark [70.05004034081377]
This paper demonstrates how composite adversarial attack (CAA) affects the resulting image.
It provides real-time inferences of different models, which will facilitate users' configuration of the parameters of the attack level.
A leaderboard to benchmark adversarial robustness against CAA is also introduced.
arXiv Detail & Related papers (2022-07-16T01:08:44Z)
- Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine sparsity with an additional l_infty constraint on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- Selective and Features based Adversarial Example Detection [12.443388374869745]
Security-sensitive applications that rely on Deep Neural Networks (DNNs) are vulnerable to small perturbations crafted to generate Adversarial Examples (AEs).
We propose a novel unsupervised detection mechanism that uses the selective prediction, processing model layers outputs, and knowledge transfer concepts in a multi-task learning setting.
Experimental results show that the proposed approach achieves results comparable to state-of-the-art methods against the tested attacks in the white-box scenario and better results in the black-box and gray-box scenarios.
arXiv Detail & Related papers (2021-03-09T11:06:15Z)