Targeted Attacks on Timeseries Forecasting
- URL: http://arxiv.org/abs/2301.11544v1
- Date: Fri, 27 Jan 2023 06:09:42 GMT
- Title: Targeted Attacks on Timeseries Forecasting
- Authors: Yuvaraj Govindarajulu, Avinash Amballa, Pavan Kulkarni, and Manojkumar
Parmar
- Abstract summary: We propose a novel formulation of Directional, Amplitudinal, and Temporal targeted adversarial attacks on time series forecasting models.
These targeted attacks create a specific impact on the amplitude and direction of the output prediction.
Our experimental results show that targeted attacks on time series models are viable and preserve greater statistical similarity, making them harder to detect.
- Score: 0.6719751155411076
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Real-world deep learning models developed for Time Series Forecasting are
used in several critical applications ranging from medical devices to the
security domain. Many previous works have shown how deep learning models are
prone to adversarial attacks and studied their vulnerabilities. However, the
vulnerabilities of time series models for forecasting due to adversarial inputs
are not extensively explored. While an attack on a forecasting model might aim
simply to degrade its overall performance, the attack is more effective if it
produces a specific, intended impact on the model's output. In this paper, we
propose a novel formulation of Directional, Amplitudinal, and Temporal targeted
adversarial attacks on time series forecasting models. These targeted attacks
create a specific impact on the amplitude and direction of the output
prediction. We use the existing adversarial attack techniques from the computer
vision domain and adapt them for time series. Additionally, we propose a
modified version of the Auto Projected Gradient Descent attack for targeted
attacks. We examine the impact of the proposed targeted attacks versus
untargeted attacks. We use KS-Tests to statistically demonstrate the impact of
the attack. Our experimental results show that targeted attacks on time series
models are viable and preserve greater statistical similarity to the clean
inputs, making them harder to detect with statistical methods. We believe that this
work opens a new paradigm in the time series forecasting domain and represents
an important consideration for developing better defenses.
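The abstract describes adapting gradient-based attacks (including a modified Auto Projected Gradient Descent) so that the forecaster's output is pushed in a chosen direction and amplitude, and using KS-tests to measure how statistically detectable the perturbed inputs are. As a rough illustration only, the sketch below shows a targeted PGD-style attack and a KS-test check in PyTorch; `forecaster`, the tensors, and all hyperparameters are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch (not the authors' implementation): a targeted PGD-style
# attack that drives a forecaster's output toward an attacker-chosen series,
# followed by a KS-test comparing clean and perturbed inputs.
import torch
from scipy.stats import ks_2samp

def targeted_pgd(model, x, y_target, eps=0.05, alpha=0.01, steps=40):
    """Minimize the MSE between the forecast and the attacker-chosen target,
    keeping the perturbation inside an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.mse_loss(model(x_adv), y_target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Targeted: descend on the loss (an untargeted attack would ascend).
            x_adv = x_adv - alpha * grad.sign()
            # Project back into the allowed perturbation budget around x.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
        x_adv = x_adv.detach()
    return x_adv

# Usage (all names below are illustrative assumptions):
# x_clean = ...                                   # (batch, lookback, features)
# y_target = 1.10 * forecaster(x_clean).detach()  # amplitudinal target: +10% forecast
# x_adv = targeted_pgd(forecaster, x_clean, y_target)
# stat, p = ks_2samp(x_clean.flatten().numpy(), x_adv.flatten().numpy())
# print(f"KS statistic={stat:.4f}, p-value={p:.4f}")  # high p-value: hard to flag statistically
```

The paper's modified Auto-PGD would additionally use adaptive step sizes and momentum rather than the fixed `alpha` above; this sketch only illustrates the general targeted-attack formulation.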