Towards Imperceptible Adversarial Attacks for Time Series Classification with Local Perturbations and Frequency Analysis
- URL: http://arxiv.org/abs/2503.19519v1
- Date: Tue, 25 Mar 2025 10:16:51 GMT
- Title: Towards Imperceptible Adversarial Attacks for Time Series Classification with Local Perturbations and Frequency Analysis
- Authors: Wenwei Gu, Renyi Zhong, Jianping Zhang, Michael R. Lyu
- Abstract summary: Imperceptibility is crucial, as adversarial examples detected by the human vision system (HVS) can render attacks ineffective. This paper aims to improve the imperceptibility of adversarial attacks on TSC models by addressing frequency components and time series locality.
- Score: 28.85076539497076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attacks in time series classification (TSC) models have recently gained attention due to their potential to compromise model robustness. Imperceptibility is crucial, as adversarial examples detected by the human vision system (HVS) can render attacks ineffective. Many existing methods fail to produce high-quality imperceptible examples, often generating perturbations with more perceptible low-frequency components, like square waves, and global perturbations that reduce stealthiness. This paper aims to improve the imperceptibility of adversarial attacks on TSC models by addressing frequency components and time series locality. We propose the Shapelet-based Frequency-domain Attack (SFAttack), which uses local perturbations focused on time series shapelets to enhance discriminative information and stealthiness. Additionally, we introduce a low-frequency constraint to confine perturbations to high-frequency components, enhancing imperceptibility.
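The low-frequency constraint described in the abstract can be illustrated with a minimal sketch (hypothetical function and parameter names, not the authors' implementation): a candidate perturbation is projected onto high-frequency components by zeroing its lowest rFFT bins, so the perceptible low-frequency content, such as square-wave-like patterns, is suppressed.

```python
import numpy as np

def high_frequency_only(delta, num_low_bins):
    """Zero out the lowest `num_low_bins` rFFT bins of a 1-D
    perturbation so only high-frequency content remains."""
    spectrum = np.fft.rfft(delta)
    spectrum[:num_low_bins] = 0.0  # suppress perceptible low-frequency content
    return np.fft.irfft(spectrum, n=len(delta))

# Toy perturbation: a slow (2-cycle) sine plus a small fast (40-cycle) sine.
t = np.linspace(0.0, 1.0, 128, endpoint=False)
delta = np.sin(2 * np.pi * 2 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
constrained = high_frequency_only(delta, num_low_bins=10)
# The slow component is removed; only the fast component survives.
```

In an attack loop, such a projection would be applied after each gradient step so the accumulated perturbation stays within the high-frequency band; the cutoff `num_low_bins` is an assumed tuning knob here.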
Related papers
- DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain [23.678658814438855]
Adversarial training (AT) has been developed to protect deep neural networks (DNNs) from adversarial attacks.
Recent studies show that adversarial attacks disproportionately impact the patterns within the phase of the sample's frequency spectrum.
We propose an optimized Adversarial Amplitude Generator (AAG) to achieve a better tradeoff between improving the model's robustness and retaining phase patterns.
arXiv Detail & Related papers (2024-10-16T07:18:36Z)
- Correlation Analysis of Adversarial Attack in Time Series Classification [6.117704456424016]
This study investigates the vulnerability of time series classification models to adversarial attacks.
Regularization techniques and noise introduction are shown to enhance the effectiveness of attacks.
Models designed to prioritize global information are revealed to possess greater resistance to adversarial manipulations.
arXiv Detail & Related papers (2024-08-21T01:11:32Z)
- Mitigating Low-Frequency Bias: Feature Recalibration and Frequency Attention Regularization for Adversarial Robustness [23.77988226456179]
Adversarial training (AT) has emerged as a promising defense strategy.
AT-trained models exhibit a bias toward low-frequency features while neglecting high-frequency components.
We propose High-Frequency Feature Disentanglement and Recalibration (HFDR), a novel module that strategically separates and recalibrates frequency-specific features.
arXiv Detail & Related papers (2024-07-04T15:46:01Z)
- Towards a Novel Perspective on Adversarial Examples Driven by Frequency [7.846634028066389]
We propose a black-box adversarial attack algorithm based on combining different frequency bands.
Experiments conducted on multiple datasets and models demonstrate that combining low-frequency bands and high-frequency components of low-frequency bands can significantly enhance attack efficiency.
arXiv Detail & Related papers (2024-04-16T00:58:46Z)
- Extreme Miscalibration and the Illusion of Adversarial Robustness [66.29268991629085]
Adversarial Training is often used to increase model robustness.
We show that this observed gain in robustness is an illusion of robustness (IOR).
We urge the NLP community to incorporate test-time temperature scaling into their robustness evaluations.
arXiv Detail & Related papers (2024-02-27T13:49:12Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation [111.61781272232646]
It is imperative to ensure the robustness of deep learning models in critical applications such as healthcare.
We present a 3D frequency domain adversarial attack for volumetric medical image segmentation models.
arXiv Detail & Related papers (2023-07-14T10:50:43Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- A Frequency Perspective of Adversarial Robustness [72.48178241090149]
We present a frequency-based understanding of adversarial examples, supported by theoretical and empirical findings.
Our analysis shows that adversarial examples are neither in high-frequency nor in low-frequency components, but are simply dataset dependent.
We propose a frequency-based explanation for the commonly observed accuracy vs. robustness trade-off.
arXiv Detail & Related papers (2021-10-26T19:12:34Z)
- Frequency-based Automated Modulation Classification in the Presence of Adversaries [17.930854969511046]
We present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference.
In this work, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-11-02T17:12:22Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.