Adversarial Attacks on Machine Learning Systems for High-Frequency
Trading
- URL: http://arxiv.org/abs/2002.09565v4
- Date: Fri, 29 Oct 2021 20:06:54 GMT
- Title: Adversarial Attacks on Machine Learning Systems for High-Frequency
Trading
- Authors: Micah Goldblum, Avi Schwarzschild, Ankit B. Patel, Tom Goldstein
- Abstract summary: We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
- Score: 55.30403936506338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic trading systems are often completely automated, and deep learning
is increasingly receiving attention in this domain. Nonetheless, little is
known about the robustness properties of these models. We study valuation
models for algorithmic trading from the perspective of adversarial machine
learning. We introduce new attacks specific to this domain with size
constraints that minimize attack costs. We further discuss how these attacks
can be used as an analysis tool to study and evaluate the robustness properties
of financial models. Finally, we investigate the feasibility of realistic
adversarial attacks in which an adversarial trader fools automated trading
systems into making inaccurate predictions.
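The abstract above describes size-constrained attacks, but the listing carries no code. The sketch below is a minimal, hypothetical illustration of that idea, assuming a differentiable valuation model that maps a recent price window to a predicted value; the model, the data, and the L1 budget are placeholders, not the authors' implementation.
```python
import torch
import torch.nn as nn

class ToyValuationModel(nn.Module):
    """Hypothetical stand-in for a learned valuation model: maps a window of
    recent prices to a predicted fair value for the next tick."""
    def __init__(self, window: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(window, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

def size_constrained_attack(model, prices, budget=0.05, steps=50, lr=0.01):
    """Gradient-based perturbation of the input price window that pushes the
    predicted valuation upward while keeping the total (L1) perturbation
    within `budget` -- a rough proxy for size constraints that cap attack cost."""
    delta = torch.zeros_like(prices, requires_grad=True)
    for _ in range(steps):
        loss = -model(prices + delta).sum()   # maximize the predicted valuation
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()
            # Rescale so the total perturbation stays inside the L1 budget
            # (a simple cost cap, not an exact projection).
            total = delta.abs().sum()
            if total > budget:
                delta *= budget / total
        delta.grad.zero_()
    return (prices + delta).detach()

model = ToyValuationModel()
window = torch.randn(1, 32)                   # placeholder price window
perturbed = size_constrained_attack(model, window)
print(model(window).item(), model(perturbed).item())
```
In a realistic setting the loss would encode the attacker's trading objective (for example, pushing the valuation past a buy threshold), and the budget would be denominated in actual order-placement cost rather than an abstract norm.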
Related papers
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- MISLEAD: Manipulating Importance of Selected features for Learning Epsilon in Evasion Attack Deception [0.35998666903987897]
Evasion attacks manipulate models by introducing precise perturbations to input data, causing erroneous predictions.
Our approach begins with SHAP-based analysis to understand model vulnerabilities, crucial for devising targeted evasion strategies.
The Optimal Epsilon technique, employing a binary search algorithm, efficiently determines the minimum epsilon needed for successful evasion (a minimal sketch of this search appears after this list).
arXiv Detail & Related papers (2024-04-24T05:22:38Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given deep reinforcement learning (DRL) system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Designing an attack-defense game: how to increase robustness of financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z)
- Support Vector Machines under Adversarial Label Contamination [13.299257835329868]
We evaluate the security of Support Vector Machines (SVMs) against well-crafted, adversarial label noise attacks.
In particular, we consider an attacker that aims to maximize the SVM's classification error by flipping a number of labels in the training data.
We argue that our approach can also provide useful insights for developing more secure SVM learning algorithms.
arXiv Detail & Related papers (2022-06-01T09:38:07Z)
- A Tutorial on Adversarial Learning Attacks and Countermeasures [0.0]
A machine learning model is capable of making highly accurate predictions without being explicitly programmed to do so.
Adversarial learning attacks pose a serious security threat that greatly undermines the further adoption of such systems.
This paper provides a detailed tutorial on the principles of adversarial learning, explains the different attack scenarios, and gives in-depth insight into state-of-the-art defense mechanisms against this rising threat.
arXiv Detail & Related papers (2022-02-21T17:14:45Z)
- Adversarial attacks against Bayesian forecasting dynamic models [1.8275108630751844]
Adversarial machine learning (AML) studies how to manipulate data to fool inference engines.
In this paper, we propose a decision analysis based attacking strategy against Bayesian forecasting dynamic models.
arXiv Detail & Related papers (2021-10-20T21:23:45Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on ML-Doctor, a modular, reusable software tool that enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- Taking Over the Stock Market: Adversarial Perturbations Against Algorithmic Traders [47.32228513808444]
We present a realistic scenario in which an attacker influences algorithmic trading systems by using adversarial learning techniques.
We show that when added to the input stream, our perturbation can fool the trading algorithms at future unseen data points.
arXiv Detail & Related papers (2020-10-19T06:28:05Z)
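The MISLEAD summary above mentions a binary search for the smallest epsilon that still yields a successful evasion. The sketch below illustrates only that search, under the usual assumption that evasion success is monotone in epsilon, and with a placeholder FGSM-based success check standing in for the authors' SHAP-guided procedure.
```python
import torch
import torch.nn.functional as F

def fgsm_evades(model, x, y, eps):
    """Placeholder success check: one FGSM step of size eps; returns True if
    the perturbed input is no longer classified as y."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return model(x_adv).argmax(dim=1).item() != y.item()

def minimal_epsilon(model, x, y, eps_max=1.0, tol=1e-3):
    """Binary search for the smallest eps in (0, eps_max] at which the
    evasion check succeeds; returns None if even eps_max fails."""
    if not fgsm_evades(model, x, y, eps_max):
        return None
    lo, hi = 0.0, eps_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fgsm_evades(model, x, y, mid):
            hi = mid        # evasion still works; try a smaller budget
        else:
            lo = mid        # budget too small; increase it
    return hi
```
Here `model` is any classifier returning logits, `x` a single input with a batch dimension, and `y` its integer label; the names and the FGSM check are illustrative and not taken from the paper.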