Adversarial Attacks on Deep Algorithmic Trading Policies
- URL: http://arxiv.org/abs/2010.11388v1
- Date: Thu, 22 Oct 2020 02:26:29 GMT
- Title: Adversarial Attacks on Deep Algorithmic Trading Policies
- Authors: Yaser Faghan, Nancirose Piazza, Vahid Behzadan, Ali Fathi
- Abstract summary: Deep Reinforcement Learning (DRL) has become an appealing solution to algorithmic trading tasks such as high-frequency trading of stocks and cryptocurrencies.
It follows that algorithmic trading DRL agents may also be compromised by such adversarial techniques, leading to policy manipulation.
We develop a threat model for deep trading policies, and propose two attack techniques for manipulating the performance of such policies at test-time.
- Score: 8.774604259603302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Reinforcement Learning (DRL) has become an appealing solution to
algorithmic trading tasks such as high-frequency trading of stocks and
cryptocurrencies. However, DRL agents have been shown to be susceptible to adversarial
attacks. It follows that algorithmic trading DRL agents may also be compromised
by such adversarial techniques, leading to policy manipulation. In this paper,
we develop a threat model for deep trading policies, and propose two attack
techniques for manipulating the performance of such policies at test-time.
Furthermore, we demonstrate the effectiveness of the proposed attacks against
benchmark and real-world DQN trading agents.
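The test-time attacks above work by perturbing the observations fed to a trained trading policy. As a minimal, hypothetical sketch (not the paper's method), the following shows an FGSM-style perturbation against a toy linear Q-function, where the gradient of the action margin is available in closed form:

```python
import numpy as np

# Toy linear Q-function Q(s) = W @ s, a hypothetical stand-in for a trained DQN:
# one row of weights per trading action, e.g. buy / hold / sell.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))        # 3 actions, 5 market-state features
s = rng.normal(size=5)             # clean observation at test time

q = W @ s
action = int(np.argmax(q))         # action the unperturbed agent would take
rival = int(np.argsort(q)[-2])     # runner-up action

# FGSM-style step: move the observation along the negative gradient sign of the
# margin Q(s)[action] - Q(s)[rival]; for a linear model that gradient is exact.
eps = 0.5
grad = W[action] - W[rival]
s_adv = s - eps * np.sign(grad)

q_adv = W @ s_adv
# The margin between the agent's choice and its rival shrinks by eps * ||grad||_1,
# so a large enough eps flips the agent's trading decision.
```

Against a real DQN the gradient would instead come from backpropagation through the network, and the perturbation would have to respect the attacker's threat model (e.g. which observation features can actually be modified).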
Related papers
- Baiting AI: Deceptive Adversary Against AI-Protected Industrial Infrastructures [13.191612618343099]
This paper explores a new cyber-attack vector targeting Industrial Control Systems (ICS), particularly focusing on water treatment facilities. Our research reveals the robustness of this attack strategy, shedding light on the potential for DRL models to be manipulated for adversarial purposes.
arXiv Detail & Related papers (2026-01-13T12:12:47Z)
- Can Artificial Intelligence Trade the Stock Market? [0.0]
The paper explores the use of Deep Reinforcement Learning (DRL) in stock market trading, focusing on two algorithms: Double Deep Q-Network (DDQN) and Proximal Policy Optimization (PPO). It evaluates these algorithms across three currency pairs, the S&P 500 index, and Bitcoin, on daily data over the period 2019-2023. The results demonstrate DRL's effectiveness in trading and its ability to manage risk by strategically avoiding trades in unfavorable conditions.
arXiv Detail & Related papers (2025-06-05T05:59:10Z)
- Risk-averse policies for natural gas futures trading using distributional reinforcement learning [0.0]
This paper studies the effectiveness of three distributional RL algorithms for natural gas futures trading.
To the best of our knowledge, these algorithms have never been applied in a trading context.
We show that training C51 and IQN to maximize CVaR produces risk-sensitive policies with adjustable risk aversion.
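The risk-sensitive objective mentioned above can be illustrated with a minimal empirical CVaR estimator, the mean of the worst alpha-fraction of sampled returns (the function name and interface here are hypothetical, not from the paper):

```python
import numpy as np

def cvar(returns, alpha=0.05):
    """Empirical Conditional Value-at-Risk: the mean of the worst
    alpha-fraction of the sampled returns (lower = heavier losses)."""
    r = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(r))))   # number of tail samples
    return float(r[:k].mean())

# A risk-averse trading policy maximizes the CVaR of its return distribution
# rather than its plain expectation, trading mean return for tail safety.
```

Distributional algorithms such as C51 and IQN make this practical because they learn the full return distribution, from which tail statistics like CVaR can be read off directly.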
arXiv Detail & Related papers (2025-01-08T11:11:25Z)
- Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents [0.8339831319589133]
A "gray-box" approach for attacking a Deep RL-based trading agent is possible by trading in the same stock market, with no extra access to the trading agent.
The adversary agent uses a hybrid deep neural network policy consisting of convolutional and fully-connected layers.
arXiv Detail & Related papers (2023-09-26T02:07:26Z)
- Attacking and Defending Deep Reinforcement Learning Policies [3.6985039575807246]
We study robustness of DRL policies to adversarial attacks from the perspective of robust optimization.
We propose a greedy attack algorithm, which tries to minimize the expected return of the policy without interacting with the environment, and a defense algorithm, which performs adversarial training in a max-min form.
arXiv Detail & Related papers (2022-05-16T12:47:54Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Robust Reinforcement Learning on State Observations with Learned Optimal Adversary [86.0846119254031]
We study the robustness of reinforcement learning with adversarially perturbed state observations.
With a fixed agent policy, we demonstrate that an optimal adversary to perturb state observations can be found.
For DRL settings, this leads to a novel empirical adversarial attack on RL agents via a learned adversary that is much stronger than previous ones.
arXiv Detail & Related papers (2021-01-21T05:38:52Z)
- Query-based Targeted Action-Space Adversarial Policies on Deep Reinforcement Learning Agents [23.580682320064714]
This work investigates targeted attacks in the action-space domain, also commonly known as actuation attacks in CPS literature.
We show that a query-based black-box attack model that generates optimal perturbations with respect to an adversarial goal can be formulated as another reinforcement learning problem.
Experimental results showed that adversarial policies that only observe the nominal policy's output generate stronger attacks than adversarial policies that observe the nominal policy's input and output.
arXiv Detail & Related papers (2020-11-13T20:25:48Z)
- Robust Deep Reinforcement Learning through Adversarial Loss [74.20501663956604]
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs.
We propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against adversarial attacks.
arXiv Detail & Related papers (2020-08-05T07:49:42Z)
- An Application of Deep Reinforcement Learning to Algorithmic Trading [4.523089386111081]
This scientific research paper presents an innovative approach based on deep reinforcement learning (DRL) to solve the algorithmic trading problem.
It proposes a novel DRL trading strategy so as to maximise the resulting Sharpe ratio performance indicator on a broad range of stock markets.
The training of the resulting reinforcement learning (RL) agent is entirely based on the generation of artificial trajectories from a limited set of stock market historical data.
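The trajectory-generation idea above can be sketched with a simple bootstrap of historical log-returns (an i.i.d. resampling illustration, not the paper's exact generator; the function name and parameters are hypothetical):

```python
import numpy as np

def synthetic_price_paths(prices, n_paths=4, length=50, seed=0):
    """Create artificial price trajectories by resampling the log-returns
    of a limited historical price series with replacement."""
    prices = np.asarray(prices, dtype=float)
    log_ret = np.diff(np.log(prices))          # historical log-returns
    rng = np.random.default_rng(seed)
    paths = np.empty((n_paths, length))
    for i in range(n_paths):
        sampled = rng.choice(log_ret, size=length, replace=True)
        # Each path restarts from the last observed price.
        paths[i] = prices[-1] * np.exp(np.cumsum(sampled))
    return paths
```

Resampling in this way turns one short historical series into many distinct training episodes, which is what lets the RL agent be trained entirely on artificial trajectories.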
arXiv Detail & Related papers (2020-04-07T14:57:23Z)
- Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations [88.94162416324505]
A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noises.
Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions.
We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, is ineffective for many RL tasks.
arXiv Detail & Related papers (2020-03-19T17:59:59Z)
- Adversarial Attacks on Machine Learning Systems for High-Frequency Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
- Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning [48.49658986576776]
Deep Reinforcement Learning (DRL) has numerous applications in the real world thanks to its outstanding ability in adapting to the surrounding environments.
Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications.
This paper presents emerging attacks in DRL-based systems and the potential countermeasures to defend against these attacks.
arXiv Detail & Related papers (2020-01-27T10:53:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.