FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading
in Quantitative Finance
- URL: http://arxiv.org/abs/2011.09607v2
- Date: Wed, 2 Mar 2022 14:28:11 GMT
- Title: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading
in Quantitative Finance
- Authors: Xiao-Yang Liu, Hongyang Yang, Qian Chen, Runjia Zhang, Liuqing Yang,
Bowen Xiao, Christina Dan Wang
- Abstract summary: We introduce FinRL, a DRL library that helps beginners get started with quantitative finance.
FinRL simulates trading environments across various stock markets, including NASDAQ-100, DJIA, S&P 500, HSI, SSE 50, and CSI 300.
It incorporates important trading constraints such as transaction cost, market liquidity and the investor's degree of risk-aversion.
- Score: 20.43261517036651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As deep reinforcement learning (DRL) has been recognized as an effective
approach in quantitative finance, getting hands-on experience is attractive to
beginners. However, training a practical DRL trading agent that decides where
to trade, at what price, and in what quantity involves error-prone and arduous
development and debugging. In this paper, we introduce FinRL, a DRL library that
helps beginners get started with quantitative finance and develop their own
stock trading strategies. Along with easily reproducible tutorials, the FinRL
library allows users to streamline their own development and to compare easily
with existing schemes. Within FinRL, virtual environments are configured with
stock market datasets, trading agents are trained with neural networks, and
trading performance is analyzed via extensive backtesting. Moreover, it
incorporates important trading constraints such as transaction cost, market
liquidity, and the investor's degree of risk aversion. FinRL features
completeness, hands-on tutorials, and reproducibility that favor beginners:
(i) at multiple levels of time granularity, FinRL simulates trading
environments across various stock markets, including NASDAQ-100, DJIA, S&P 500,
HSI, SSE 50, and CSI 300; (ii) organized in a layered architecture with a
modular structure, FinRL provides fine-tuned state-of-the-art DRL algorithms
(DQN, DDPG, PPO, SAC, A2C, TD3, etc.), commonly used reward functions, and
standard evaluation baselines that alleviate debugging workloads and promote
reproducibility; and (iii) being highly extendable, FinRL reserves a complete
set of user-import interfaces. Furthermore, we include three application
demonstrations, namely single stock trading, multiple stock trading, and
portfolio allocation. The FinRL library is available on GitHub at
https://github.com/AI4Finance-LLC/FinRL-Library.
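The workflow the abstract describes (environments configured with market data, an agent trading under transaction costs, performance measured as change in portfolio value) can be illustrated with a minimal gym-style sketch. This is a toy single-stock environment with hypothetical names and a made-up price series, not FinRL's actual API:

```python
import numpy as np

class ToyStockTradingEnv:
    """Toy single-stock environment: state = (cash, shares, price)."""

    def __init__(self, prices, initial_cash=10_000.0, cost_rate=0.001):
        self.prices = np.asarray(prices, dtype=float)  # daily closing prices
        self.initial_cash = initial_cash
        self.cost_rate = cost_rate                     # proportional transaction cost
        self.reset()

    def reset(self):
        self.t, self.cash, self.shares = 0, self.initial_cash, 0
        return self._state()

    def _state(self):
        return np.array([self.cash, self.shares, self.prices[self.t]])

    def _value(self):
        return self.cash + self.shares * self.prices[self.t]

    def step(self, action):
        """action: signed share count, >0 buys, <0 sells."""
        before = self._value()
        price = self.prices[self.t]
        # Clip to what the agent can afford to buy (incl. cost) or holds to sell.
        max_buy = int(self.cash // (price * (1 + self.cost_rate)))
        action = int(np.clip(action, -self.shares, max_buy))
        cost = abs(action) * price * self.cost_rate
        self.cash -= action * price + cost
        self.shares += action
        self.t += 1
        done = self.t == len(self.prices) - 1
        # Reward = change in portfolio value, net of transaction costs.
        reward = self._value() - before
        return self._state(), reward, done, {}

# Rollout with a naive fixed policy: buy 10 shares every day.
env = ToyStockTradingEnv(prices=[100, 101, 99, 102, 103])
state, total, done = env.reset(), 0.0, False
while not done:
    state, reward, done, _ = env.step(10)
    total += reward
```

A DRL agent would replace the fixed 10-share action with a learned policy; FinRL's actual environments extend this pattern to multiple stocks, technical indicators in the state, and risk-aversion terms in the reward.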
Related papers
- AlphaFin: Benchmarking Financial Analysis with Retrieval-Augmented Stock-Chain Framework [48.3060010653088]
We release AlphaFin datasets, combining traditional research datasets, real-time financial data, and handwritten chain-of-thought (CoT) data.
We then use AlphaFin datasets to benchmark a state-of-the-art method, called Stock-Chain, for effectively tackling the financial analysis task.
arXiv Detail & Related papers (2024-03-19T09:45:33Z)
- How Can LLM Guide RL? A Value-Based Approach [68.55316627400683]
Reinforcement learning (RL) has become the de facto standard practice for sequential decision-making problems by improving future acting policies with feedback.
Recent developments in large language models (LLMs) have showcased impressive capabilities in language understanding and generation, yet they fall short in exploration and self-improvement capabilities.
We develop an algorithm named LINVIT that incorporates LLM guidance as a regularization factor in value-based RL, leading to significant reductions in the amount of data needed for learning.
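One concrete, simplified way to read "LLM guidance as a regularization factor in value-based RL" is a KL-regularized (soft) Bellman backup toward a prior policy. The sketch below applies that standard regularized-RL construction to a toy two-state MDP where the "LLM prior" is just a fixed action distribution; it is illustrative only, not the LINVIT algorithm itself:

```python
import numpy as np

# Toy 2-state, 2-action MDP; all numbers are made up for illustration.
R = np.array([[0.0, 1.0],      # R[s, a]: reward for action a in state s
              [1.0, 0.0]])
P = np.array([[0, 1],          # P[s, a]: deterministic next state
              [1, 0]])
prior = np.array([[0.5, 0.5],  # stand-in "LLM" action prior per state
                  [0.9, 0.1]])
gamma, lam = 0.9, 1.0          # discount and regularization strength

# KL-regularized (soft) value iteration: the backup is pulled toward
# the prior policy instead of a hard max over actions.
V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * V[P]                              # Q[s, a]
    V = lam * np.log((prior * np.exp(Q / lam)).sum(axis=1))

# The regularized-optimal policy reweights the prior by exp(Q / lam).
policy = prior * np.exp(Q / lam)
policy /= policy.sum(axis=1, keepdims=True)
```

As `lam` grows, the policy stays closer to the prior, so a trustworthy prior reduces how much the values must be learned from data, which is the intuition behind using guidance as regularization.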
arXiv Detail & Related papers (2024-02-25T20:07:13Z)
- FinBen: A Holistic Financial Benchmark for Large Language Models [75.09474986283394]
FinBen is the first extensive open-source evaluation benchmark, including 36 datasets spanning 24 financial tasks.
FinBen offers several key innovations: a broader range of tasks and datasets, the first evaluation of stock trading, novel agent and Retrieval-Augmented Generation (RAG) evaluation, and three novel open-source evaluation datasets for text summarization, question answering, and stock trading.
arXiv Detail & Related papers (2024-02-20T02:16:16Z)
- Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning [54.682106515794864]
Offline reinforcement learning (RL) aims to find a near-optimal policy using pre-collected datasets.
This paper introduces Language Models for Motion Control (LaMo), a general framework based on Decision Transformers that uses pre-trained Language Models (LMs) for offline RL.
Empirical results indicate LaMo achieves state-of-the-art performance in sparse-reward tasks.
arXiv Detail & Related papers (2023-10-31T16:24:17Z)
- Combining Deep Learning on Order Books with Reinforcement Learning for Profitable Trading [0.0]
This work focuses on forecasting returns across multiple horizons using order flow and training three temporal-difference imbalance learning models for five financial instruments.
The results show potential but require minimal further modifications to fully handle retail trading costs, slippage, and spread fluctuation for consistently profitable trading.
arXiv Detail & Related papers (2023-10-24T15:58:58Z)
- Dynamic Datasets and Market Environments for Financial Reinforcement Learning [68.11692837240756]
FinRL-Meta is a library that processes dynamic datasets from real-world markets into gym-style market environments.
We provide examples and reproduce popular research papers as stepping stones for users to design new trading strategies.
We also deploy the library on cloud platforms so that users can visualize their own results and assess the relative performance.
arXiv Detail & Related papers (2023-04-25T22:17:31Z)
- Astock: A New Dataset and Automated Stock Trading based on Stock-specific News Analyzing Model [21.05128751957895]
We build a platform to study the NLP-aided stock auto-trading algorithms systematically.
We provide financial news for each specific stock.
We provide various stock factors for each stock.
We evaluate performance from more financial-relevant metrics.
arXiv Detail & Related papers (2022-06-14T05:55:23Z)
- FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven Deep Reinforcement Learning in Quantitative Finance [58.77314662664463]
FinRL-Meta builds a universe of market environments for data-driven financial reinforcement learning.
First, FinRL-Meta separates financial data processing from the design pipeline of DRL-based strategy.
Second, FinRL-Meta provides hundreds of market environments for various trading tasks.
arXiv Detail & Related papers (2021-12-13T16:03:37Z)
- FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance [22.808509136431645]
Deep reinforcement learning (DRL) has been envisioned to have a competitive edge in quantitative finance.
In this paper, we present the first open-source framework, FinRL, as a full pipeline to help quantitative traders overcome the steep learning curve.
arXiv Detail & Related papers (2021-11-07T00:34:32Z)
- Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review [0.0]
Deep Reinforcement Learning agents have proved to be a force to be reckoned with in many games, such as Chess and Go.
This paper reviews the progress made so far with deep reinforcement learning in the subdomain of AI in finance.
We conclude that DRL in stock trading has shown great applicability potential, rivalling professional traders under strong assumptions.
arXiv Detail & Related papers (2021-05-31T22:26:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.