FinRL: Deep Reinforcement Learning Framework to Automate Trading in
Quantitative Finance
- URL: http://arxiv.org/abs/2111.09395v1
- Date: Sun, 7 Nov 2021 00:34:32 GMT
- Title: FinRL: Deep Reinforcement Learning Framework to Automate Trading in
Quantitative Finance
- Authors: Xiao-Yang Liu and Hongyang Yang and Jiechao Gao and Christina Dan Wang
- Abstract summary: Deep reinforcement learning (DRL) has been envisioned to have a competitive edge in quantitative finance.
In this paper, we present the first open-source framework \textit{FinRL} as a full pipeline to help quantitative traders overcome the steep learning curve.
- Score: 22.808509136431645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep reinforcement learning (DRL) has been envisioned to have a competitive
edge in quantitative finance. However, there is a steep development curve for
quantitative traders to obtain an agent that automatically positions to win in
the market, namely \textit{to decide where to trade, at what price} and
\textit{what quantity}, due to error-prone programming and arduous
debugging. In this paper, we present the first open-source framework
\textit{FinRL} as a full pipeline to help quantitative traders overcome the
steep learning curve. FinRL features simplicity, applicability, and
extensibility under the key principles of a \textit{full-stack framework,
customization, reproducibility}, and \textit{hands-on tutoring}.
Embodied as a three-layer architecture with modular structures, FinRL
implements fine-tuned state-of-the-art DRL algorithms and common reward
functions, while alleviating debugging workloads. Thus, we help users
pipeline strategy designs at a high turnover rate. At multiple levels of
time granularity, FinRL simulates various markets as training environments
using historical data and live trading APIs. Being highly extensible, FinRL
reserves a set of user-import interfaces and incorporates trading constraints
such as market friction, market liquidity and investor's risk-aversion.
Moreover, serving as stepping stones for practitioners, typical trading
tasks such as stock trading, portfolio allocation, and cryptocurrency
trading are provided as step-by-step tutorials.
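To make the layering concrete, below is a minimal sketch of the environment -> agent -> application workflow that such a pipeline automates, using stable-baselines3 (one of the DRL backends FinRL builds on) with a stand-in Gym environment in place of a market simulation. It illustrates the shape of the pipeline, not FinRL's own API.

```python
# Sketch of the three-layer shape FinRL automates. The environment is a
# generic Gym stand-in, NOT a FinRL market environment.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")          # layer 1: stand-in for a market env
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)    # layer 2: train a DRL agent

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)  # layer 3: act on new states
```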
Related papers
- Q-SFT: Q-Learning for Language Models via Supervised Fine-Tuning [62.984693936073974]
Value-based reinforcement learning can learn effective policies for a wide range of multi-turn problems.
However, current value-based RL methods have proven particularly challenging to scale to the setting of large language models.
We propose a novel offline RL algorithm that addresses these drawbacks, casting Q-learning as a modified supervised fine-tuning problem.
arXiv Detail & Related papers (2024-11-07T21:36:52Z)
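A conceptual sketch of the core idea as this summary describes it: keep the update a supervised fine-tuning step, with Bellman-derived quantities entering only as per-token weights. The weighting scheme below is an illustrative assumption, not the paper's exact objective.

```python
# Conceptual sketch: Q-learning recast as weighted supervised fine-tuning.
# Token log-probabilities stand in for Q-values; Bellman-style targets enter
# only as per-token weights (assumed form), so each update remains an
# ordinary weighted cross-entropy step.
import torch
import torch.nn.functional as F

def qsft_style_loss(logits, actions, weights):
    # logits: (B, T, V) language-model outputs
    # actions: (B, T) tokens actually taken
    # weights: (B, T) nonnegative values from a Bellman-style backup (assumed)
    logp = F.log_softmax(logits, dim=-1)
    chosen = logp.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    return -(weights * chosen).mean()

# toy check with random tensors
B, T, V = 2, 5, 11
logits = torch.randn(B, T, V, requires_grad=True)
loss = qsft_style_loss(logits, torch.randint(V, (B, T)), torch.rand(B, T))
loss.backward()
```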
- AlphaFin: Benchmarking Financial Analysis with Retrieval-Augmented Stock-Chain Framework [48.3060010653088]
We release the AlphaFin datasets, combining traditional research datasets, real-time financial data, and handwritten chain-of-thought (CoT) data.
We then use the AlphaFin datasets to benchmark a state-of-the-art method, called Stock-Chain, for effectively tackling the financial analysis task.
arXiv Detail & Related papers (2024-03-19T09:45:33Z)
- Privacy-preserving design of graph neural networks with applications to vertical federated learning [56.74455367682945]
We present an end-to-end graph representation learning framework called VESPER.
VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
arXiv Detail & Related papers (2023-10-31T15:34:59Z)
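The VESPER summary above does not spell out its mechanism, so for background, here is the generic Gaussian-mechanism building block that differentially private training typically spends its privacy budget on. Treat this as context, not VESPER's actual design.

```python
# Generic (epsilon, delta)-DP Gaussian mechanism: clip a vector's norm, then
# add calibrated Gaussian noise. Background only; not VESPER's design.
import numpy as np

def gaussian_mechanism(v, clip_norm, epsilon, delta):
    n = np.linalg.norm(v)
    v = v * min(1.0, clip_norm / (n + 1e-12))      # bound sensitivity
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return v + np.random.normal(0.0, sigma, size=v.shape)

# usage: privatize a gradient or embedding before sharing it
noisy = gaussian_mechanism(np.ones(4), clip_norm=1.0, epsilon=1.0, delta=1e-5)
```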
- Combining Deep Learning on Order Books with Reinforcement Learning for Profitable Trading [0.0]
This work focuses on forecasting returns across multiple horizons using order flow imbalance and on training three temporal-difference learning models for five financial instruments.
The results show potential but require further minimal modifications for consistently profitable trading that fully handles retail trading costs, slippage, and spread fluctuation.
arXiv Detail & Related papers (2023-10-24T15:58:58Z)
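For context on the feature family the order-book paper above builds on, here is a standard best-level order flow imbalance (in the style of Cont et al.); the paper's exact feature set may differ.

```python
# Best-level order flow imbalance: net buying pressure inferred from changes
# in the best bid/ask prices and sizes. Illustrative background feature.
import numpy as np

def order_flow_imbalance(bid_px, bid_sz, ask_px, ask_sz):
    e = np.zeros(len(bid_px))
    for t in range(1, len(bid_px)):
        # bid side: improving price adds depth, retreating price removes it
        if bid_px[t] > bid_px[t - 1]:    d_bid = bid_sz[t]
        elif bid_px[t] == bid_px[t - 1]: d_bid = bid_sz[t] - bid_sz[t - 1]
        else:                            d_bid = -bid_sz[t - 1]
        # ask side: signs mirrored
        if ask_px[t] < ask_px[t - 1]:    d_ask = ask_sz[t]
        elif ask_px[t] == ask_px[t - 1]: d_ask = ask_sz[t] - ask_sz[t - 1]
        else:                            d_ask = -ask_sz[t - 1]
        e[t] = d_bid - d_ask
    return e
```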
- Dynamic Datasets and Market Environments for Financial Reinforcement Learning [68.11692837240756]
FinRL-Meta is a library that processes dynamic datasets from real-world markets into gym-style market environments.
We provide examples and reproduce popular research papers as stepping stones for users to design new trading strategies.
We also deploy the library on cloud platforms so that users can visualize their own results and assess the relative performance.
arXiv Detail & Related papers (2023-04-25T22:17:31Z)
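A toy illustration of what "gym-style market environment" means in the FinRL-Meta entry above: historical prices wrapped in the standard reset/step interface (gymnasium assumed). Real environments add indicators, costs, and many assets; this class is illustrative only.

```python
# Minimal gym-style market environment: observe (price, position), choose
# sell/hold/buy, earn the position-weighted price change. Illustrative only.
import gymnasium as gym
import numpy as np

class MinimalMarketEnv(gym.Env):
    def __init__(self, prices):
        self.prices = np.asarray(prices, dtype=np.float32)
        self.action_space = gym.spaces.Discrete(3)   # 0 sell, 1 hold, 2 buy
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(2,))
        self.t, self.position = 0, 0.0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.position = 0, 0.0
        return np.array([self.prices[0], self.position], np.float32), {}

    def step(self, action):
        self.position = float(action) - 1.0          # map to -1, 0, +1
        self.t += 1
        reward = self.position * (self.prices[self.t] - self.prices[self.t - 1])
        done = self.t >= len(self.prices) - 1
        obs = np.array([self.prices[self.t], self.position], np.float32)
        return obs, float(reward), done, False, {}
```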
- Asynchronous Deep Double Duelling Q-Learning for Trading-Signal Execution in Limit Order Book Markets [5.202524136984542]
We employ deep reinforcement learning to train an agent to translate a high-frequency trading signal into a trading strategy that places individual limit orders.
We build an OpenAI Gym environment for reinforcement learning based on the ABIDES limit order book simulator.
We find that the RL agent learns an effective trading strategy for inventory management and order placing that outperforms a benchmark trading strategy having access to the same signal.
arXiv Detail & Related papers (2023-01-20T17:19:18Z)
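For the duelling architecture named in the entry above, a minimal sketch: Q(s, a) decomposed into a state value plus a mean-centred advantage. The paper's asynchronous training and trading specifics are omitted.

```python
# Duelling Q-network head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
import torch
import torch.nn as nn

class DuellingQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)      # state value V(s)
        self.adv = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, x):
        h = self.body(x)
        a = self.adv(h)
        return self.value(h) + a - a.mean(dim=-1, keepdim=True)

# In double Q-learning, the online net picks argmax actions and a separate
# target net scores them: y = r + gamma * Q_target(s', argmax_a Q_online(s', a))
```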
- Factor Investing with a Deep Multi-Factor Model [123.52358449455231]
We develop a novel deep multi-factor model that adopts industry neutralization and market neutralization modules with clear financial insights.
Tests on real-world stock market data demonstrate the effectiveness of our deep multi-factor model.
arXiv Detail & Related papers (2022-10-22T14:47:11Z)
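One common reading of the industry- and market-neutralization mentioned above, sketched cross-sectionally: demean the factor within each industry, then project out market beta. The paper's actual deep modules may differ.

```python
# Cross-sectional neutralization sketch: industry demeaning, then removing
# the component explained by market beta. Illustrative, not the paper's code.
import pandas as pd

def neutralize(factor: pd.Series, industry: pd.Series,
               market_beta: pd.Series) -> pd.Series:
    f = factor - factor.groupby(industry).transform("mean")  # industry-neutral
    b = market_beta - market_beta.mean()
    f = f - b * (f @ b) / (b @ b + 1e-12)                    # market-neutral
    return f

# usage: neutralize(factor_scores, industry_codes, betas), all indexed by stock
```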
- Deep Reinforcement Learning Approach for Trading Automation in The Stock Market [0.0]
This paper presents a model to generate profitable trades in the stock market using Deep Reinforcement Learning (DRL) algorithms.
We formulate the trading problem as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market.
We then solve the formulated POMDP using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, reporting a Sharpe ratio of 2.68 on an unseen data set.
arXiv Detail & Related papers (2022-07-05T11:34:29Z)
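A generic sketch of the TD3 critic target the entry above relies on: target-policy smoothing plus the minimum of two target critics, which curbs Q-value overestimation. Trading-specific state and reward design are omitted.

```python
# TD3 critic target: y = r + gamma * (1 - done) * min(Q1', Q2')(s', a' + eps),
# with clipped noise eps for target-policy smoothing. Generic sketch.
import torch

def td3_target(r, s2, done, actor_t, q1_t, q2_t,
               gamma=0.99, noise=0.2, clip=0.5, a_max=1.0):
    # r, done: (B,) float tensors (done is 0/1); actor_t, q1_t, q2_t: target nets
    with torch.no_grad():
        a2 = actor_t(s2)
        eps = (torch.randn_like(a2) * noise).clamp(-clip, clip)
        a2 = (a2 + eps).clamp(-a_max, a_max)     # smoothed target action
        q = torch.min(q1_t(s2, a2), q2_t(s2, a2))  # pessimistic twin critics
        return r + gamma * (1.0 - done) * q
```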
- FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven Deep Reinforcement Learning in Quantitative Finance [58.77314662664463]
FinRL-Meta builds a universe of market environments for data-driven financial reinforcement learning.
First, FinRL-Meta separates financial data processing from the design pipeline of DRL-based strategies.
Second, FinRL-Meta provides hundreds of market environments for various trading tasks.
arXiv Detail & Related papers (2021-12-13T16:03:37Z)
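To illustrate the separation the FinRL-Meta summary above emphasizes, here is a hypothetical data layer that hands the strategy layer plain arrays; the function names are illustrative, not FinRL-Meta's API.

```python
# Hypothetical data-layer helpers in the spirit of FinRL-Meta's split between
# data processing and strategy design. Expects columns: date, open, high,
# low, close, volume.
import numpy as np
import pandas as pd

def clean(raw: pd.DataFrame) -> pd.DataFrame:
    """Data layer: sort by time, forward-fill gaps, drop unusable rows."""
    return raw.sort_values("date").ffill().dropna()

def to_arrays(df: pd.DataFrame) -> np.ndarray:
    """Hand the strategy layer plain arrays so DRL code never touches
    vendor-specific dataframes."""
    return df[["open", "high", "low", "close", "volume"]].to_numpy(np.float32)

def train_test_split(arr: np.ndarray, frac: float = 0.8):
    """Chronological split: backtests must never shuffle time."""
    k = int(len(arr) * frac)
    return arr[:k], arr[k:]
```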
- Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review [0.0]
Deep reinforcement learning agents have proved to be a force to be reckoned with in many games, such as Chess and Go.
This paper reviews the progress made so far with deep reinforcement learning in the subdomain of AI in finance.
We conclude that DRL in stock trading has shown huge potential, rivalling professional traders under strong assumptions.
arXiv Detail & Related papers (2021-05-31T22:26:43Z)
- FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance [20.43261517036651]
We introduce a DRL library, FinRL, that helps beginners explore quantitative finance.
FinRL simulates trading environments across various stock markets, including NASDAQ-100, DJIA, S&P 500, HSI, SSE 50, and CSI 300.
It incorporates important trading constraints such as transaction cost, market liquidity and the investor's degree of risk-aversion.
arXiv Detail & Related papers (2020-11-19T01:35:05Z)
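As an illustration of how trading constraints like transaction cost and risk aversion (mentioned in the FinRL entry above) can enter a DRL reward, a hedged sketch follows; FinRL's environments may shape rewards differently.

```python
# One possible reward shaping: charge proportional transaction costs and
# penalize risk via a volatility term scaled by risk aversion. Illustrative
# assumption, not FinRL's exact reward.
def shaped_reward(pnl, traded_value, volatility,
                  cost_rate=0.001, risk_aversion=0.1):
    return pnl - cost_rate * abs(traded_value) - risk_aversion * volatility

# usage: 120 profit, 10k traded, volatility 2.5 -> 120 - 10 - 0.25 = 109.75
r = shaped_reward(pnl=120.0, traded_value=10_000.0, volatility=2.5)
```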