Reinforcement Learning for Systematic FX Trading
- URL: http://arxiv.org/abs/2110.04745v1
- Date: Sun, 10 Oct 2021 09:44:29 GMT
- Title: Reinforcement Learning for Systematic FX Trading
- Authors: Gabriel Borrageiro and Nick Firoozye and Paolo Barucca
- Abstract summary: We conduct a detailed experiment on major cash FX pairs, accurately accounting for transaction and funding costs.
These sources of profit and loss, including the price trends that occur in the currency markets, are made available to our recurrent reinforcement learner.
Performance holds up despite forcing the model to trade at the close of the trading day (5pm EST), when trading costs are statistically the most expensive.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We conduct a detailed experiment on major cash FX pairs, accurately
accounting for transaction and funding costs. These sources of profit and loss,
including the price trends that occur in the currency markets, are made
available to our recurrent reinforcement learner via a quadratic utility, which
learns to target a position directly. We improve upon earlier work by casting
the problem of learning to target a risk position in an online learning
context. This online learning occurs sequentially in time, but also in the form
of transfer learning. We transfer the output of radial basis function hidden
processing units, whose means, covariances and overall size are determined by
Gaussian mixture models, to the recurrent reinforcement learner and baseline
momentum trader. Thus the intrinsic nature of the feature space is learnt and
made available to the upstream models. The recurrent reinforcement learning
trader achieves an annualised portfolio information ratio of 0.52 with compound
return of 9.3%, net of execution and funding cost, over a 7-year test set. This
is despite forcing the model to trade at the close of the trading day (5pm EST),
when trading costs are statistically the most expensive. These results are
comparable with the momentum baseline trader, reflecting the low interest
differential environment since the 2008 financial crisis, and the pronounced
currency trends since then. The recurrent reinforcement learner does
nevertheless maintain an important advantage, in that the model's weights can
be adapted to reflect the different sources of profit and loss variation. This
is demonstrated visually by a USDRUB trading agent, which learns to target
different positions that reflect trading in the absence or presence of cost.
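
To make the pipeline described in the abstract concrete, below is a minimal sketch of its two pieces: a radial basis function feature layer whose centres, covariances and number of units come from a Gaussian mixture model, and a recurrent reinforcement learner that targets a position directly and is updated online through a quadratic utility. This is not the authors' implementation; the class and parameter names (RBFFeatures, RecurrentRLTrader, risk_aversion, cost, the BIC-based choice of mixture size) are illustrative assumptions, and the update rule is the standard direct-position recurrent RL scheme rather than the paper's exact estimator.

```python
# Hypothetical sketch only; names and hyperparameters are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture


class RBFFeatures:
    """RBF hidden units whose centres, covariances and count come from a GMM."""

    def __init__(self, max_components=10, seed=0):
        self.max_components = max_components
        self.seed = seed

    def fit(self, X):
        # One plausible reading of "overall size ... determined by Gaussian
        # mixture models": pick the number of components by BIC.
        fits = [GaussianMixture(n_components=k, covariance_type="full",
                                random_state=self.seed).fit(X)
                for k in range(1, self.max_components + 1)]
        self.gmm = min(fits, key=lambda g: g.bic(X))
        self.precisions = [np.linalg.inv(S) for S in self.gmm.covariances_]
        return self

    def transform(self, x):
        # phi_j(x) = exp(-0.5 * (x - mu_j)^T Sigma_j^{-1} (x - mu_j))
        return np.array([np.exp(-0.5 * (x - m) @ P @ (x - m))
                         for m, P in zip(self.gmm.means_, self.precisions)])


class RecurrentRLTrader:
    """Targets a position directly; online ascent on a quadratic utility."""

    def __init__(self, n_features, lr=0.01, risk_aversion=0.1, cost=1e-4):
        self.w = np.zeros(n_features + 2)          # features, prev position, bias
        self.lr, self.lam, self.cost = lr, risk_aversion, cost
        self.prev_pos = 0.0
        self.prev_grad = np.zeros(n_features + 2)  # d(prev position)/dw

    def step(self, phi, price_return):
        """phi: RBF features for today; price_return: realised return today."""
        z = np.concatenate([phi, [self.prev_pos, 1.0]])
        pos = np.tanh(self.w @ z)                  # desired position in [-1, 1]
        # P&L of holding yesterday's position, net of the cost of rebalancing.
        pnl = self.prev_pos * price_return - self.cost * abs(pos - self.prev_pos)
        utility = pnl - 0.5 * self.lam * pnl ** 2  # quadratic utility of P&L
        # Recurrent gradients: the position depends on the previous position.
        dpos_dw = (1.0 - pos ** 2) * (z + self.w[-2] * self.prev_grad)
        dpnl_dw = (price_return * self.prev_grad
                   - self.cost * np.sign(pos - self.prev_pos)
                   * (dpos_dw - self.prev_grad))
        self.w += self.lr * (1.0 - self.lam * pnl) * dpnl_dw  # dU/dPnL * dPnL/dw
        self.prev_pos, self.prev_grad = pos, dpos_dw
        return pos, utility


# Toy usage: fit the feature layer on a burn-in window, then trade online.
rng = np.random.default_rng(0)
returns = 1e-3 * rng.standard_normal(500)           # stand-in for daily FX returns
features = np.column_stack([np.convolve(returns, np.ones(k) / k, mode="same")
                            for k in (5, 20, 60)])  # simple momentum features
rbf = RBFFeatures(max_components=5).fit(features[:250])
trader = RecurrentRLTrader(n_features=len(rbf.gmm.means_))
for t in range(250, 499):
    trader.step(rbf.transform(features[t]), returns[t])
```

The same GMM-derived features would feed both the recurrent learner and the baseline momentum trader, which is how the transfer-learning element of the abstract is meant to be read.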
Related papers
- When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
The StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test set leakage issue present in existing trading simulation systems based on AI Agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z)
- A Bargaining-based Approach for Feature Trading in Vertical Federated Learning [54.51890573369637]
We propose a bargaining-based feature trading approach in Vertical Federated Learning (VFL) to encourage economically efficient transactions.
Our model incorporates performance gain-based pricing, taking into account the revenue-based optimization objectives of both parties.
arXiv Detail & Related papers (2024-02-23T10:21:07Z)
- An Auction-based Marketplace for Model Trading in Federated Learning [54.79736037670377]
Federated learning (FL) is increasingly recognized for its efficacy in training models using locally distributed data.
We frame FL as a marketplace of models, where clients act as both buyers and sellers.
We propose an auction-based solution to ensure proper pricing based on performance gain.
arXiv Detail & Related papers (2024-02-02T07:25:53Z)
- Combining Deep Learning on Order Books with Reinforcement Learning for Profitable Trading [0.0]
This work focuses on forecasting returns across multiple horizons using order flow and training three temporal-difference imbalance learning models for five financial instruments.
The results show potential, but further modifications are required to fully handle retail trading costs, slippage, and spread fluctuation before trading is consistently profitable.
arXiv Detail & Related papers (2023-10-24T15:58:58Z)
- NoxTrader: LSTM-Based Stock Return Momentum Prediction for Quantitative Trading [0.0]
NoxTrader is a sophisticated system designed for portfolio construction and trading execution.
The underlying learning process of NoxTrader is rooted in the assimilation of valuable insights derived from historical trading data.
Our rigorous feature engineering and careful selection of prediction targets enable us to generate prediction data with an impressive correlation range between 0.65 and 0.75.
arXiv Detail & Related papers (2023-10-01T17:53:23Z)
- Data Cross-Segmentation for Improved Generalization in Reinforcement Learning Based Algorithmic Trading [5.75899596101548]
We propose a Reinforcement Learning (RL) algorithm that trades based on signals from a learned predictive model.
We test our algorithm on 20+ years of equity data from Bursa Malaysia.
arXiv Detail & Related papers (2023-07-18T16:00:02Z)
- MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning [62.065503126104126]
We study how a principal can efficiently and effectively intervene on the rewards of a previously unseen learning agent in order to induce desirable outcomes.
This is relevant to many real-world settings like auctions or taxation, where the principal may not know the learning behavior nor the rewards of real people.
We introduce MERMAIDE, a model-based meta-learning framework to train a principal that can quickly adapt to out-of-distribution agents.
arXiv Detail & Related papers (2023-04-10T15:44:50Z)
- Learning to Liquidate Forex: Optimal Stopping via Adaptive Top-K Regression [19.942711817396734]
We consider learning a trading agent acting on behalf of a firm earning revenue in a foreign currency (FC) and incurring expenses in the home currency (HC).
The goal of the agent is to maximize the expected HC at the end of the trading episode by deciding to hold or sell the FC at each time step in the trading episode.
We propose a novel supervised learning approach that learns to forecast the top-K future FX rates instead of forecasting all the future FX rates.
arXiv Detail & Related papers (2022-02-25T09:33:10Z)
- Deep Learning Statistical Arbitrage [0.0]
We propose a unifying conceptual framework for statistical arbitrage and develop a novel deep learning solution.
We construct arbitrage portfolios of similar assets as residual portfolios from conditional latent asset pricing factors (see the sketch after this list).
We extract the time series signals of these residual portfolios with one of the most powerful machine learning time-series solutions.
arXiv Detail & Related papers (2021-06-08T00:48:25Z)
- Taking Over the Stock Market: Adversarial Perturbations Against Algorithmic Traders [47.32228513808444]
We present a realistic scenario in which an attacker influences algorithmic trading systems by using adversarial learning techniques.
We show that when added to the input stream, our perturbation can fool the trading algorithms at future unseen data points.
arXiv Detail & Related papers (2020-10-19T06:28:05Z)
- Precise Tradeoffs in Adversarial Training for Linear Regression [55.764306209771405]
We provide a precise and comprehensive understanding of the role of adversarial training in the context of linear regression with Gaussian features.
We precisely characterize the standard/robust accuracy and the corresponding tradeoff achieved by a contemporary mini-max adversarial training approach.
Our theory for adversarial training algorithms also facilitates the rigorous study of how a variety of factors (size and quality of training data, model overparametrization etc.) affect the tradeoff between these two competing accuracies.
arXiv Detail & Related papers (2020-02-24T19:01:47Z)
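
The residual-portfolio construction mentioned in the Deep Learning Statistical Arbitrage summary above can be sketched as follows. Plain PCA stands in for the paper's conditional latent asset pricing factors, the function and variable names are illustrative assumptions, and the downstream time-series learner that trades the residual signals is omitted.

```python
# Hypothetical sketch of residual portfolios; not the paper's implementation.
import numpy as np


def residual_portfolios(returns, n_factors=5):
    """returns: (T, N) panel of asset returns. Returns (T, N) residual returns."""
    X = returns - returns.mean(axis=0, keepdims=True)
    # Latent factors via PCA on the return panel (a simplification of the
    # conditional latent asset pricing factors used in the paper).
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    loadings = vt[:n_factors].T            # (N, n_factors) factor loadings
    factors = X @ loadings                 # (T, n_factors) factor returns
    # Residual of each asset after projecting out the factors: one long-short
    # "residual portfolio" per asset, whose signal a time-series model would trade.
    return X - factors @ loadings.T


# usage: feed np.cumsum(residual_portfolios(R), axis=0) to a time-series model
```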