Backdoor Attacks on Time Series: A Generative Approach
- URL: http://arxiv.org/abs/2211.07915v1
- Date: Tue, 15 Nov 2022 06:00:28 GMT
- Title: Backdoor Attacks on Time Series: A Generative Approach
- Authors: Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, James Bailey
- Abstract summary: We present a novel generative approach for time series backdoor attacks against deep learning based time series classifiers.
Backdoor attacks have two main goals: high stealthiness and high attack success rate.
We show that our proposed attack is resistant to potential backdoor defenses.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Backdoor attacks have emerged as one of the major security threats to deep
learning models, as they can easily control a model's test-time predictions by
injecting a backdoor trigger into the model at training time. While
backdoor attacks have been extensively studied on images, few works have
investigated the threat of backdoor attacks on time series data. To fill this
gap, in this paper we present a novel generative approach for time series
backdoor attacks against deep learning based time series classifiers. Backdoor
attacks have two main goals: high stealthiness and high attack success rate. We
find that, compared to images, it can be more challenging to achieve the two
goals on time series. This is because time series have fewer input dimensions
and lower degrees of freedom, making it hard to achieve a high attack success
rate without compromising stealthiness. Our generative approach addresses this
challenge by generating trigger patterns that are as realistic as real
time-series patterns while achieving a high attack success rate without causing a
significant drop in clean accuracy. We also show that our proposed attack is
resistant to potential backdoor defenses. Furthermore, we propose a novel
universal generator that can poison any type of time series with a single
model, enabling universal attacks without the need to fine-tune the
generative model for new time series datasets.
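The trigger generator in this paper is a trained model; the sketch below only shows the poisoning loop such a generator would plug into, assuming a NumPy dataset of univariate series. The `generate_trigger` placeholder (random noise smoothed by a moving average) merely stands in for the learned generator and is not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_trigger(x, strength=0.1):
    """Placeholder for the paper's trained generator, which should emit a
    trigger as realistic as real time-series patterns. Here: random noise
    smoothed by a moving average, matched to the length of x."""
    noise = rng.normal(0.0, strength, size=x.shape)
    kernel = np.ones(5) / 5.0
    return np.convolve(noise, kernel, mode="same")

def poison_dataset(X, y, target_class, poison_rate=0.05):
    """Add generator-made triggers to a random subset of training series
    and relabel those samples with the attacker's target class."""
    X_p, y_p = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(len(X) * poison_rate), replace=False)
    for i in idx:
        X_p[i] += generate_trigger(X_p[i])
        y_p[i] = target_class
    return X_p, y_p

# Toy usage: 100 univariate series of length 128 with binary labels.
X = rng.normal(size=(100, 128))
y = rng.integers(0, 2, size=100)
X_poisoned, y_poisoned = poison_dataset(X, y, target_class=1)
```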
Related papers
- A4O: All Trigger for One sample
We show that backdoor defenses often rely on the assumption that triggers appear in a unified form.
In this paper, we show that this naive assumption creates a loophole that more sophisticated backdoor attacks can exploit.
We design a novel backdoor attack mechanism that incorporates multiple types of backdoor triggers, focusing on stealthiness and effectiveness (a toy sketch of the multi-trigger idea follows this entry).
arXiv Detail & Related papers (2025-01-13T10:38:58Z)
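The summary does not spell out A4O's construction; the sketch below only illustrates the general multi-trigger idea by stacking two standard trigger types (a BadNets-style patch and a low-opacity blend) in a single sample. All names and parameter values here are illustrative, not from the paper.

```python
import numpy as np

def add_patch_trigger(img, size=4, value=1.0):
    """Classic BadNets-style patch stamped in the bottom-right corner."""
    out = img.copy()
    out[-size:, -size:] = value
    return out

def add_blend_trigger(img, pattern, alpha=0.08):
    """Low-opacity pattern blended over the whole image."""
    return (1 - alpha) * img + alpha * pattern

rng = np.random.default_rng(0)
img = rng.uniform(size=(32, 32))       # toy grayscale image in [0, 1]
pattern = rng.uniform(size=(32, 32))   # fixed blend pattern

# Stack *different* trigger types in one poisoned sample, so a defense
# that assumes a single, unified trigger shape can miss the attack.
poisoned = add_blend_trigger(add_patch_trigger(img), pattern)
```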
- Rethinking Backdoor Attacks
In a backdoor attack, an adversary inserts maliciously constructed backdoor examples into a training set to make the resulting model vulnerable to manipulation.
Defending against such attacks typically involves viewing these inserted examples as outliers in the training set and using techniques from robust statistics to detect and remove them (a minimal sketch of this defense follows the entry).
We show that without structural information about the training data distribution, backdoor attacks are indistinguishable from naturally-occurring features in the data.
arXiv Detail & Related papers (2023-07-19T17:44:54Z)
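For concreteness, here is a minimal sketch of the generic defense this paper critiques: treat poisoned examples as outliers, score training features with robust statistics (median/MAD), and drop the most anomalous fraction. The feature matrix, the shift injected to mimic poisons, and the drop rate are all assumptions made for the demo.

```python
import numpy as np

def filter_outliers(features, drop_frac=0.05):
    """Robust z-score per dimension (median/MAD), aggregated by L2 norm;
    keep everything except the most anomalous drop_frac of samples."""
    med = np.median(features, axis=0)
    mad = np.median(np.abs(features - med), axis=0) + 1e-8
    scores = np.linalg.norm((features - med) / mad, axis=1)
    keep = np.argsort(scores)[: int(len(features) * (1 - drop_frac))]
    return np.sort(keep)

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 64))   # e.g. penultimate-layer activations
feats[:20] += 3.0                     # crude stand-in for poisoned samples
kept = filter_outliers(feats)
print(f"kept {len(kept)} of {len(feats)} samples")
```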
- Backdoor Attack with Sparse and Invisible Trigger
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA); a toy version of such a trigger is sketched after this entry.
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
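SIBA learns both where to perturb and by how much; the placeholder below only encodes the two constraints in the title: an L0 budget (exactly k pixels touched) and an L-infinity bound (each change at most eps). The support and values are random here, not learned.

```python
import numpy as np

def sparse_invisible_trigger(shape, k=10, eps=4 / 255, seed=0):
    """Random placeholder trigger: perturb exactly k pixels (L0 sparsity),
    each by at most eps (invisibility via an L-infinity bound)."""
    rng = np.random.default_rng(seed)
    delta = np.zeros(shape)
    idx = rng.choice(delta.size, size=k, replace=False)
    delta.ravel()[idx] = rng.uniform(-eps, eps, size=k)
    return delta

img = np.random.default_rng(1).uniform(size=(32, 32))
delta = sparse_invisible_trigger(img.shape)
poisoned = np.clip(img + delta, 0.0, 1.0)
print("pixels changed:", np.count_nonzero(delta))
print("max |change|:  ", float(np.abs(delta).max()))
```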
- Targeted Attacks on Timeseries Forecasting
We propose a novel formulation of directional, amplitudinal, and temporal targeted adversarial attacks on time series forecasting models.
These targeted attacks create a specific impact on the amplitude and direction of the output prediction (the directional case is sketched after this entry).
Our experimental results show that targeted attacks on time series models are viable and more powerful in terms of statistical similarity.
arXiv Detail & Related papers (2023-01-27T06:09:42Z)
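The paper's formulation covers directional, amplitudinal, and temporal targets on deep forecasters; the sketch below shows only the directional case on a toy linear autoregressive model, where the input gradient is available in closed form. The model and budget are assumptions for the demo.

```python
import numpy as np

# A toy linear autoregressive forecaster stands in for the deep model:
# one-step-ahead prediction y_hat = w @ x over the last L observations.
rng = np.random.default_rng(0)
L = 64
w = rng.normal(scale=0.1, size=L)
x = rng.normal(size=L)

def directional_attack(x, w, direction=+1.0, eps=0.05):
    """One FGSM-style step pushing the forecast up (direction=+1) or down
    (direction=-1) under an L-infinity budget eps; for a linear model the
    gradient of y_hat with respect to x is simply w."""
    return x + eps * np.sign(direction * w)

x_adv = directional_attack(x, w)
print("clean forecast:   ", float(w @ x))
print("attacked forecast:", float(w @ x_adv))
```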
- Untargeted Backdoor Attack against Object Detection
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
In recent years, machine learning models have been shown to be vulnerable to backdoor attacks.
This paper introduces a novel backdoor attack with a much more powerful payload, denoted as Marksman.
We show empirically that the proposed framework achieves high attack performance while preserving the clean-data performance in several benchmark datasets.
arXiv Detail & Related papers (2022-10-17T15:46:57Z)
- Backdoor Attacks on Crowd Counting
Crowd counting is a regression task that estimates the number of people in a scene image.
In this paper, we investigate the vulnerability of deep learning based crowd counting models to backdoor attacks.
arXiv Detail & Related papers (2022-07-12T16:17:01Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks (a rough FFT-domain illustration follows this entry).
We also show two possible defenses that succeed against frequency-based backdoor attacks, and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
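As a rough illustration (not the paper's construction), the snippet below plants a trigger by boosting a few low/mid-frequency FFT coefficients and transforming back, so the perturbation is spread over the whole image instead of forming a localized patch. The frequency bins and magnitude are arbitrary choices.

```python
import numpy as np

def freq_domain_trigger(img, bins=((3, 5), (7, 2)), magnitude=2.0):
    """Boost a few FFT coefficients (and their mirror bins, to keep the
    inverse transform real), then go back to the pixel domain."""
    spec = np.fft.fft2(img)
    for u, v in bins:
        spec[u, v] += magnitude
        spec[-u, -v] += magnitude
    return np.real(np.fft.ifft2(spec))

rng = np.random.default_rng(0)
img = rng.uniform(size=(32, 32))
poisoned = np.clip(freq_domain_trigger(img), 0.0, 1.0)
print("max pixel change:", float(np.abs(poisoned - img).max()))
```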
- Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective
This paper revisits existing backdoor triggers from a frequency perspective and performs a comprehensive analysis.
We show that many current backdoor attacks exhibit severe high-frequency artifacts, which persist across different datasets and resolutions (a minimal spectral check is sketched after this entry).
We propose a practical way to create smooth backdoor triggers without high-frequency artifacts and study their detectability.
arXiv Detail & Related papers (2021-04-07T22:05:28Z)
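A minimal version of the spectral check this analysis motivates: compare the fraction of high-frequency energy in clean versus patch-triggered images. The smooth synthetic "clean" images and the cutoff are assumptions made for the demo.

```python
import numpy as np

def high_freq_energy(imgs, cutoff=8):
    """Average fraction of spectral energy outside a central low-frequency
    square (after fftshift); sharp patch triggers tend to raise it."""
    spec = np.abs(np.fft.fft2(imgs))
    spec = np.fft.fftshift(spec, axes=(-2, -1))
    h, w = spec.shape[-2:]
    total = spec.sum(axis=(-2, -1))
    low = spec[..., h // 2 - cutoff:h // 2 + cutoff,
               w // 2 - cutoff:w // 2 + cutoff].sum(axis=(-2, -1))
    return float(np.mean(1.0 - low / total))

rng = np.random.default_rng(0)
# Smooth synthetic "clean" images: a low-frequency pattern plus mild noise.
t = np.linspace(0, 1, 32)
base = np.sin(2 * np.pi * 2 * t)[:, None] * np.cos(2 * np.pi * 3 * t)[None, :]
clean = np.stack([base + 0.02 * rng.normal(size=(32, 32)) for _ in range(64)])
triggered = clean.copy()
triggered[:, -4:, -4:] = 1.0          # hard patch trigger = sharp edges
print("clean HF energy:    ", high_freq_energy(clean))
print("triggered HF energy:", high_freq_energy(triggered))
```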
- Untargeted, Targeted and Universal Adversarial Attacks and Defenses on Time Series
We perform untargeted, targeted and universal adversarial attacks on the UCR time series datasets.
Our results show that deep learning based time series classification models are vulnerable to these attacks.
We also show that universal adversarial attacks generalize well, as they need only a fraction of the training data (a toy version is sketched after this entry).
arXiv Detail & Related papers (2021-01-13T13:00:51Z)
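As a sketch of that universal-attack observation, assuming a toy linear classifier on univariate series: a single input-agnostic perturbation is fitted on a small fraction of the data (a simplified UAP-style loop) and then evaluated on everything. This is not the paper's method, just the general recipe.

```python
import numpy as np

# Toy linear classifier on univariate series: predict class 1 iff w @ x > 0.
rng = np.random.default_rng(0)
n, L = 200, 128
w = rng.normal(scale=0.1, size=L)
X = rng.normal(size=(n, L))
y = (X @ w > 0).astype(int)      # labels this toy model gets right by design

def universal_perturbation(X, y, w, eps=0.3, frac=0.1):
    """Simplified UAP-style loop: for each series in a small data fraction
    that is still classified correctly under delta, add the minimal step
    flipping its logit, then project onto the L-infinity ball of radius eps."""
    m = max(1, int(len(X) * frac))
    delta = np.zeros(X.shape[1])
    for xi, yi in zip(X[:m], y[:m]):
        s = w @ (xi + delta)
        if (s > 0) == bool(yi):               # still correct -> try to flip
            step = -(s / (w @ w)) * w * 1.05  # minimal flip plus 5% margin
            delta = np.clip(delta + step, -eps, eps)
    return delta

delta = universal_perturbation(X, y, w)
fooled = np.mean(((X + delta) @ w > 0).astype(int) != y)
print(f"fooling rate across all {n} series: {fooled:.2%}")
```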