Adversarial Examples in Deep Learning for Multivariate Time Series
Regression
- URL: http://arxiv.org/abs/2009.11911v1
- Date: Thu, 24 Sep 2020 19:09:37 GMT
- Title: Adversarial Examples in Deep Learning for Multivariate Time Series
Regression
- Authors: Gautam Raj Mode, Khaza Anuarul Hoque
- Abstract summary: This work explores the vulnerability of deep learning (DL) regression models to adversarial time series examples.
We craft adversarial time series examples for Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) models.
The obtained results show that all the evaluated DL regression models are vulnerable to adversarial attacks, that the attacks transfer across models, and that they can lead to catastrophic consequences.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multivariate time series (MTS) regression tasks are common in many real-world
data mining applications including finance, cybersecurity, energy, healthcare,
prognostics, and many others. Due to the tremendous success of deep learning
(DL) algorithms in various domains including image recognition and computer
vision, researchers started adopting these techniques for solving MTS data
mining problems, many of which are targeted for safety-critical and
cost-critical applications. Unfortunately, DL algorithms are known for their
susceptibility to adversarial examples, which makes DL regression models for
MTS forecasting vulnerable to those attacks as well. To the best of our
knowledge, no previous work has explored the vulnerability of DL MTS regression
models to adversarial time series examples, which is an important step,
specifically when the forecasting from such models is used in safety-critical
and cost-critical applications. In this work, we leverage existing adversarial
attack generation techniques from the image classification domain and craft
adversarial multivariate time series examples for three state-of-the-art deep
learning regression models, specifically Convolutional Neural Network (CNN),
Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). We evaluate our
study using the Google stock and household power consumption datasets. The
obtained results show that all the evaluated DL regression models are
vulnerable to adversarial attacks, that the crafted attacks transfer across
models, and that they can thus lead to catastrophic consequences in
safety-critical and cost-critical domains, such as energy and finance.
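To make the attack concrete, below is a minimal, hypothetical sketch of the kind of image-domain technique the abstract alludes to, applied to a regression setting: a single FGSM-style signed-gradient perturbation of a multivariate time series window against a toy GRU regressor. The model, tensor shapes, and epsilon value are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

def fgsm_attack_regression(model, x, y_true, epsilon=0.05):
    # Perturb a multivariate time series window x (batch x timesteps x features)
    # one signed-gradient step in the direction that increases the regression loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.MSELoss()(model(x_adv), y_true)  # regression loss replaces cross-entropy
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

class GRURegressor(nn.Module):
    # Toy GRU regressor standing in for the paper's CNN/LSTM/GRU models.
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.gru(x)
        return self.head(out[:, -1])  # predict the next value from the last hidden state

model = GRURegressor(n_features=6)
x = torch.randn(8, 30, 6)  # 8 windows, 30 timesteps, 6 sensor channels (synthetic)
y = torch.randn(8, 1)      # synthetic regression targets
x_adv = fgsm_attack_regression(model, x, y)
print((x_adv - x).abs().max())  # per-element perturbation is bounded by epsilon

Because each element of the input is perturbed by at most epsilon, the adversarial series can stay close to the original measurements while still shifting the forecast, which is what makes such attacks a concern in the energy and finance settings discussed above.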
Related papers
- Self-Supervised Learning for Time Series: A Review & Critique of FITS [0.0]
The recently proposed FITS model claims competitive performance with significantly reduced parameter counts.
By training a one-layer neural network in the complex frequency domain, we are able to replicate these results.
Our experiments reveal that FITS especially excels at capturing periodic and seasonal patterns, but struggles with trending, non-periodic, or random-resembling behavior.
arXiv Detail & Related papers (2024-10-23T23:03:09Z) - SIaM: Self-Improving Code-Assisted Mathematical Reasoning of Large Language Models [54.78329741186446]
We propose a novel paradigm that uses a code-based critic model to guide steps including question-code data construction, quality control, and complementary evaluation.
Experiments across both in-domain and out-of-domain benchmarks in English and Chinese demonstrate the effectiveness of the proposed paradigm.
arXiv Detail & Related papers (2024-08-28T06:33:03Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - A Training Rate and Survival Heuristic for Inference and Robustness Evaluation (TRASHFIRE) [1.622320874892682]
This work addresses the problem of understanding and predicting how particular model hyper- parameters influence the performance of a model in the presence of an adversary.
The proposed approach uses survival models, worst-case examples, and a cost-aware analysis to precisely and accurately reject a particular model change.
Using the proposed methodology, we show that ResNet is hopeless against even the simplest of white-box attacks.
arXiv Detail & Related papers (2024-01-24T19:12:37Z) - Temporal Knowledge Distillation for Time-Sensitive Financial Services
Applications [7.1795069620810805]
Anomaly detection is frequently used in key compliance and risk functions such as financial crime detection, fraud, and cybersecurity.
Retraining the models on the latest data to keep up with rapid changes creates pressure to balance historical and current patterns.
The proposed approach provides advantages in retraining times while improving the model performance.
arXiv Detail & Related papers (2023-12-28T03:04:30Z) - Robustness and Generalization Performance of Deep Learning Models on
Cyber-Physical Systems: A Comparative Study [71.84852429039881]
The investigation focuses on the models' ability to handle a range of perturbations, such as sensor faults and noise.
We test the generalization and transfer learning capabilities of these models by exposing them to out-of-distribution (OOD) samples.
arXiv Detail & Related papers (2023-06-13T12:43:59Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Poisoning Attacks and Defenses on Artificial Intelligence: A Survey [3.706481388415728]
Data poisoning attacks represent a type of attack that consists of tampering with the data samples fed to the model during the training phase, leading to a degradation in the model's accuracy during the inference phase.
This work compiles the most relevant insights and findings found in the latest existing literature addressing this type of attack.
A thorough assessment is performed on the reviewed works, comparing the effects of data poisoning on a wide range of ML models in real-world conditions.
arXiv Detail & Related papers (2022-02-21T14:43:38Z) - On the Security Risks of AutoML [38.03918108363182]
Neural Architecture Search (NAS) is an emerging machine learning paradigm that automatically searches for models tailored to given tasks.
We show that compared with their manually designed counterparts, NAS-generated models tend to suffer greater vulnerability to various malicious attacks.
We discuss potential remedies to mitigate such drawbacks, including increasing cell depth and suppressing skip connects.
arXiv Detail & Related papers (2021-10-12T14:04:15Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine
Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on a modular re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.