Adversarial attacks against Bayesian forecasting dynamic models
- URL: http://arxiv.org/abs/2110.10783v1
- Date: Wed, 20 Oct 2021 21:23:45 GMT
- Title: Adversarial attacks against Bayesian forecasting dynamic models
- Authors: Roi Naveiro
- Abstract summary: AML studies how to manipulate data to fool inference engines.
In this paper, we propose a decision-analysis-based attack strategy against Bayesian forecasting dynamic models.
- Score: 1.8275108630751844
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The last decade has seen the rise of Adversarial Machine Learning (AML). This
discipline studies how to manipulate data to fool inference engines, and how to
protect those systems against such manipulation attacks. Extensive work on
attacks against regression and classification systems is available, while
little attention has been paid to attacks against time series forecasting
systems. In this paper, we propose a decision-analysis-based attack strategy
that can be used against Bayesian forecasting dynamic models.
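The paper's strategy is decision-analytic; as a rough, self-contained illustration of the underlying threat model only, the sketch below perturbs the recent observations fed to a local-level dynamic linear model (a Kalman filter) to shift its one-step-ahead forecast. The `kalman_forecast` helper, the budget `eps`, and all numbers are assumptions for the example, not the paper's method.

```python
import numpy as np

def kalman_forecast(y, obs_var=1.0, state_var=0.1):
    """One-step-ahead forecast mean of a local-level DLM via the Kalman filter."""
    m, C = 0.0, 1e6  # diffuse prior on the latent level
    for t in range(len(y)):
        R = C + state_var       # prior variance of the level at time t
        Q = R + obs_var         # one-step forecast variance
        K = R / Q               # Kalman gain
        m = m + K * (y[t] - m)  # filtered level mean
        C = (1 - K) * R         # filtered level variance
    return m  # for the local-level model this is the forecast mean for y[t+1]

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 0.3, 100)) + rng.normal(0, 1.0, 100)

eps = 0.5  # attacker's per-observation tampering budget (L-infinity)
clean = kalman_forecast(y)

# The forecast mean is increasing in every observation (all gains are
# positive), so the budget-constrained attack that maximises the forecast
# simply shifts each controlled observation by +eps -- no optimiser needed.
y_attacked = y.copy()
y_attacked[-10:] += eps  # attacker tampers with the 10 most recent points

print(f"clean forecast:    {clean:.3f}")
print(f"attacked forecast: {kalman_forecast(y_attacked):.3f}")
```

Because the forecast mean of this model is linear in the observations, the worst case under an L-infinity budget sits at the boundary of the feasible set, which is why the sketch needs no search.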
Related papers
- Model Stealing Attack against Recommender System [85.1927483219819]
Some adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z)
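The entry above follows the general model-stealing recipe: query the target, then distil its responses into a surrogate. Here is a minimal sketch of that recipe; the matrix-factorisation target, `query_target`, the query budget, and all hyperparameters are assumptions, not the paper's constrained-data setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, dim = 50, 30, 8

# Hypothetical black-box target recommender (stand-in for the victim
# system): the attacker can only observe its scores through queries.
U_t = rng.normal(size=(n_users, dim))
V_t = rng.normal(size=(n_items, dim))

def query_target(u, i):
    return U_t[u] @ V_t[i]

# Limited query budget: score a random subset of (user, item) pairs.
pairs = [(rng.integers(n_users), rng.integers(n_items)) for _ in range(600)]
labels = [query_target(u, i) for u, i in pairs]

# Surrogate: fit a small factorisation to the stolen scores by SGD.
U_s = 0.1 * rng.normal(size=(n_users, dim))
V_s = 0.1 * rng.normal(size=(n_items, dim))
lr = 0.02
for _ in range(200):
    for (u, i), y in zip(pairs, labels):
        err = U_s[u] @ V_s[i] - y
        gu = err * V_s[i]   # cache both gradients before updating
        gv = err * U_s[u]
        U_s[u] -= lr * gu
        V_s[i] -= lr * gv

test = [(rng.integers(n_users), rng.integers(n_items)) for _ in range(200)]
mse = np.mean([(U_s[u] @ V_s[i] - query_target(u, i)) ** 2 for u, i in test])
print(f"surrogate MSE on held-out queries: {mse:.3f}")
```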
- Targeted Attacks on Timeseries Forecasting [0.6719751155411076]
We propose a novel formulation of Directional, Amplitudinal, and Temporal targeted adversarial attacks on time series forecasting models.
These targeted attacks create a specific impact on the amplitude and direction of the output prediction.
Our experimental results show that targeted attacks on time series models are viable and more powerful in terms of statistical similarity.
arXiv Detail & Related papers (2023-01-27T06:09:42Z)
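The Directional/Amplitudinal/Temporal formulation is the paper's own; as a hedged stand-in, the sketch below runs a one-step targeted attack on a linear autoregressive forecaster, where the gradient of the forecast with respect to the input window is just the coefficient vector. The forecaster, `eps`, and the +2.0 amplitude target are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical fitted forecaster: an AR(p) model, y_hat = w @ window + b.
p = 8
w = rng.normal(0, 0.3, p)
b = 0.1

def forecast(window):
    return w @ window + b

window = rng.normal(0, 1.0, p)   # last p observations seen by the model
target = forecast(window) + 2.0  # direction: up, amplitude: +2.0

# For a linear model the forecast gradient w.r.t. the input is exactly w,
# so a budgeted L-infinity attack moves each input by eps in the direction
# sign(w) (times the sign of the desired shift). The budget caps how far
# the forecast can actually be pushed toward the target.
eps = 0.4
delta = eps * np.sign(w) * np.sign(target - forecast(window))
adv = window + delta

print(f"clean forecast: {forecast(window):+.3f}")
print(f"target:         {target:+.3f}")
print(f"adv forecast:   {forecast(adv):+.3f}")
```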
- AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide with other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z)
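AdvDO's optimiser additionally enforces dynamical feasibility; the toy sketch below keeps only the core idea, a bounded perturbation of an observed history chosen to maximise how far the predicted future deviates. A constant-velocity predictor and random search stand in for the learned model and the gradient-based optimiser; every number here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical predictor: constant-velocity extrapolation from the observed
# history (stand-in for a learned trajectory-prediction network).
def predict(history, horizon=10):
    v = history[-1] - history[-2]
    return history[-1] + v * np.arange(1, horizon + 1)[:, None]

history = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])  # (x, y)
clean_pred = predict(history)

# Optimisation-based attack: tweak the history within a small bound so the
# predicted future deviates as far as possible laterally.
eps = 0.2       # plausibility bound on each waypoint (metres)
best_dev = 0.0
for _ in range(2000):  # random search standing in for a gradient optimiser
    delta = rng.uniform(-eps, eps, history.shape)
    dev = np.abs(predict(history + delta)[:, 1]).max()  # lateral deviation
    best_dev = max(best_dev, dev)

print(f"max lateral deviation, clean:    {np.abs(clean_pred[:, 1]).max():.2f} m")
print(f"max lateral deviation, attacked: {best_dev:.2f} m")
```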
- The Space of Adversarial Strategies [6.295859509997257]
Adversarial examples, inputs designed to induce worst-case behavior in machine learning models, have been extensively studied over the past decade.
We propose a systematic approach to characterize worst-case (i.e., optimal) adversaries.
arXiv Detail & Related papers (2022-09-09T20:53:11Z)
- Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms [17.75675910162935]
We present a new attack pattern that negatively impacts the forecasting of a target time series.
We develop two defense strategies to mitigate the impact of such attacks.
Experiments on real-world datasets confirm that our attack schemes are powerful.
arXiv Detail & Related papers (2022-07-19T22:00:41Z)
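The paper's two defenses are its own; as a generic illustration of the multivariate threat only, this sketch shows an indirect attack routed through correlated series and a simple input-sanitisation defence (clipping incoming observations to the historical range), which is an assumption here rather than one of the paper's mechanisms.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical forecaster over 3 correlated series: the target series' next
# value is a linear read-out of every series' latest observation, so an
# attacker can hurt the target indirectly by perturbing the other series.
W = np.array([0.6, 0.3, 0.2])

def forecast(x_last):
    return W @ x_last

history = rng.normal(0, 1.0, (200, 3))  # clean history of all three series
x = np.array([0.4, -0.2, 0.1])          # genuine latest observations
adv = x + 4.0 * np.sign(W)              # crude large perturbation everywhere

# Sanitisation defence: clip incoming observations to each series'
# historical range before forecasting, bounding how far any one-shot
# perturbation can move the forecast.
lo, hi = history.min(axis=0), history.max(axis=0)

def defended_forecast(x_last):
    return forecast(np.clip(x_last, lo, hi))

print(f"clean:            {forecast(x):+.3f}")
print(f"attacked:         {forecast(adv):+.3f}")
print(f"attacked+defence: {defended_forecast(adv):+.3f}")
```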
- I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences [0.1031296820074812]
We study model stealing attacks, assessing their performance and exploring corresponding defence techniques in different settings.
We propose a taxonomy for attack and defence approaches, and provide guidelines on how to select the right attack or defence based on the goal and available resources.
arXiv Detail & Related papers (2022-06-16T21:16:41Z)
- Attack Prediction using Hidden Markov Model [2.2559617939136505]
We propose the use of a Hidden Markov Model (HMM) to predict the family of related attacks.
We have built an HMM-based prediction model and implemented our approach using the Viterbi algorithm.
As a proof of concept, and to demonstrate the model's performance, we have conducted a case study on predicting a family of attacks called Action Spoofing.
arXiv Detail & Related papers (2021-06-03T17:32:06Z)
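The entry above maps naturally to code: below is a standard Viterbi decoder over a toy attack-stage HMM. The state set, alert vocabulary, and all probabilities are made-up illustrations, not the paper's fitted model.

```python
import numpy as np

# Toy HMM for attack-stage prediction: hidden states are attack stages,
# observations are alert types (all numbers illustrative).
states = ["recon", "exploit", "spoof"]
obs_symbols = ["scan_alert", "auth_fail", "odd_ui_event"]
start = np.array([0.7, 0.2, 0.1])
trans = np.array([[0.5, 0.4, 0.1],   # row i: P(next stage | stage i)
                  [0.1, 0.5, 0.4],
                  [0.1, 0.2, 0.7]])
emit = np.array([[0.8, 0.15, 0.05],  # row j: P(alert | stage j)
                 [0.2, 0.6, 0.2],
                 [0.1, 0.2, 0.7]])

def viterbi(obs):
    """Most likely hidden attack-stage sequence for an observed alert stream."""
    n, k = len(obs), len(states)
    logp = np.full((n, k), -np.inf)       # best log-prob ending in each state
    back = np.zeros((n, k), dtype=int)    # backpointers for path recovery
    logp[0] = np.log(start) + np.log(emit[:, obs[0]])
    for t in range(1, n):
        for j in range(k):
            scores = logp[t - 1] + np.log(trans[:, j])
            back[t, j] = np.argmax(scores)
            logp[t, j] = scores[back[t, j]] + np.log(emit[j, obs[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

alerts = [0, 1, 1, 2, 2]  # indices into obs_symbols
print(viterbi(alerts))     # e.g. recon -> exploit -> ... -> spoof
```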
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
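Of the four attack families ML-Doctor covers, membership inference is the easiest to sketch. The simulation below shows the core signal it exploits: an overfit model is systematically more confident on training members than on non-members. The Beta distributions and the threshold are assumptions standing in for a real target model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated target-model confidences: overfit models tend to be more
# confident on training points (members) than on unseen ones.
def simulated_confidence(member):
    return rng.beta(4, 2) if member else rng.beta(2, 2)

members = np.array([simulated_confidence(True) for _ in range(1000)])
non_members = np.array([simulated_confidence(False) for _ in range(1000)])

# Confidence-threshold membership inference: guess "member" above a cutoff.
threshold = 0.6
tpr = (members > threshold).mean()      # members correctly flagged
fpr = (non_members > threshold).mean()  # non-members wrongly flagged
print(f"attack TPR {tpr:.2f} vs FPR {fpr:.2f} (the gap is the leakage)")
```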
- Adversarial Attack and Defense of Structured Prediction Models [58.49290114755019]
In this paper, we investigate attacks and defenses for structured prediction tasks in NLP.
The structured outputs of such models are sensitive to small perturbations in the input.
We propose a novel and unified framework that learns to attack a structured prediction model using a sequence-to-sequence model.
arXiv Detail & Related papers (2020-10-04T15:54:03Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)
- Adversarial Attacks on Machine Learning Systems for High-Frequency Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
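The cost-minimising size constraints are the paper's contribution; the sketch below only captures the flavour, using a linear valuation model and a fractional-knapsack greedy attack that flips the model's signal at minimum cost per unit of feature movement. The weights, costs, caps, and feature values are all invented for the example.

```python
import numpy as np

# Hypothetical linear valuation model over order-book features (a stand-in
# for a learned trading model): positive score -> the model signals "buy".
w = np.array([0.8, -0.5, 0.3, -0.2])   # model sensitivities
x = np.array([0.5, 0.1, 0.2, -0.1])    # current feature values
cost = np.array([5.0, 1.0, 2.0, 0.5])  # attacker's cost per unit of movement
cap = 1.0                               # maximum movement per feature
score = w @ x                           # clean signal (positive here)

# Size/cost-constrained attack: to flip the signal as cheaply as possible,
# spend on the feature with the best |sensitivity| / cost ratio first
# (fractional-knapsack greedy, optimal for this linear objective).
target_shift = -score - 0.05            # push the score just below zero
order = np.argsort(-np.abs(w) / cost)
delta = np.zeros_like(x)
remaining = target_shift
for j in order:
    if abs(remaining) < 1e-12:
        break
    move = np.clip(remaining / w[j], -cap, cap)
    delta[j] = move
    remaining -= w[j] * move

print(f"clean score: {score:+.3f} -> attacked: {w @ (x + delta):+.3f}")
print(f"attack cost: {cost @ np.abs(delta):.2f}")
```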
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.