Recent Advances in Reinforcement Learning in Finance
- URL: http://arxiv.org/abs/2112.04553v1
- Date: Wed, 8 Dec 2021 19:55:26 GMT
- Title: Recent Advances in Reinforcement Learning in Finance
- Authors: Ben Hambly, Renyuan Xu and Huining Yang
- Abstract summary: The rapid changes in the finance industry due to the increasing amount of data have revolutionized data processing and data analysis techniques.
New developments from reinforcement learning (RL) are able to make full use of the large amount of financial data.
- Score: 3.0079490585515343
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rapid changes in the finance industry due to the increasing amount of
data have revolutionized data processing and data analysis techniques
and brought new theoretical and computational challenges. In contrast to
classical stochastic control theory and other analytical approaches for solving
financial decision-making problems that heavily rely on model assumptions, new
developments from reinforcement learning (RL) are able to make full use of the
large amount of financial data with fewer model assumptions and to improve
decisions in complex financial environments. This survey paper aims to review
the recent developments and use of RL approaches in finance. We give an
introduction to Markov decision processes, the setting for many of the
commonly used RL approaches. Various algorithms are then introduced with a
focus on value and policy based methods that do not require any model
assumptions. Connections are made with neural networks to extend the framework
to encompass deep RL algorithms. Our survey concludes by discussing the
application of these RL algorithms in a variety of decision-making problems in
finance, including optimal execution, portfolio optimization, option pricing
and hedging, market making, smart order routing, and robo-advising.
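The value-based, model-free setting the abstract describes can be illustrated with tabular Q-learning on a toy MDP. The 3-state environment below is purely hypothetical, a stand-in for the execution and portfolio MDPs the survey covers, not a model from the paper; the agent learns from sampled transitions only, never the transition model itself.

```python
import random

# Hypothetical toy MDP: 3 states, 2 actions; reward only for action 1 in state 2.
N_STATES, N_ACTIONS = 3, 2

def step(s, a):
    """Deterministic toy dynamics: reward only for action 1 in state 2."""
    s_next = (s + a) % N_STATES
    reward = 1.0 if (s == 2 and a == 1) else 0.0
    return s_next, reward

def q_learning(steps=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    s = 0
    for _ in range(steps):
        # epsilon-greedy exploration; model-free: only sampled transitions are used
        if rng.random() < eps:
            a = rng.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
        s_next, r = step(s, a)
        # value-based update toward the Bellman target r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next
    return Q

Q = q_learning()
greedy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(greedy)
```

After enough exploration the greedy policy takes action 1 in state 2, where the reward lies; deep RL replaces the table Q with a neural network, as the abstract's connection to deep RL suggests.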
Related papers
- The Evolution of Reinforcement Learning in Quantitative Finance [3.8535927070486697]
Reinforcement Learning (RL) has experienced significant advancement over the past decade, prompting a growing interest in applications within finance.
This survey critically evaluates 167 publications, exploring diverse RL applications and frameworks in finance.
Financial markets, marked by their complexity, multi-agent nature, information asymmetry, and inherent randomness, serve as an intriguing test-bed for RL.
arXiv Detail & Related papers (2024-08-20T15:15:10Z) - Reinforcement Learning in High-frequency Market Making [7.740207107300432]
This paper establishes a new and comprehensive theoretical analysis for the application of reinforcement learning (RL) in high-frequency market making.
We bridge the modern RL theory and the continuous-time statistical models in high-frequency financial economics.
arXiv Detail & Related papers (2024-07-14T22:07:48Z) - Stochastic Q-learning for Large Discrete Action Spaces [79.1700188160944]
In complex environments with discrete action spaces, effective decision-making is critical in reinforcement learning (RL).
We present value-based RL approaches which, as opposed to optimizing over the entire set of $n$ actions, only consider a variable set of actions, possibly as small as $\mathcal{O}(\log(n))$.
The presented value-based RL methods include, among others, Q-learning, StochDQN, StochDDQN, all of which integrate this approach for both value-function updates and action selection.
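An illustrative sketch of the core idea, not the paper's exact algorithm: replace the full argmax over all $n$ actions with a max over a random candidate subset of size roughly $\mathcal{O}(\log(n))$. The constant `c` and the helper name are assumptions for illustration.

```python
import math
import random

def stoch_greedy_action(q_values, rng, c=2.0):
    """Greedy selection over a random candidate subset of ~O(log n) actions."""
    n = len(q_values)
    k = max(1, min(n, int(c * math.log(n))))  # candidate set size ~O(log n)
    candidates = rng.sample(range(n), k)      # random subset of action indices
    return max(candidates, key=lambda a: q_values[a])

rng = random.Random(0)
q = [0.0] * 1000
q[123] = 5.0  # a single clearly best action
picks = [stoch_greedy_action(q, rng) for _ in range(200)]
# each call inspects only ~int(2 * log(1000)) = 13 actions instead of all 1000
print(len(set(picks)))
```

Each selection costs $\mathcal{O}(\log n)$ evaluations instead of $\mathcal{O}(n)$, at the price of sometimes missing the globally best action when it is not sampled.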
arXiv Detail & Related papers (2024-05-16T17:58:44Z) - A machine learning workflow to address credit default prediction [0.44943951389724796]
Credit default prediction (CDP) plays a crucial role in assessing the creditworthiness of individuals and businesses.
We propose a workflow-based approach to improve CDP, the task of assessing the probability that a borrower will default on their credit obligations.
arXiv Detail & Related papers (2024-03-06T15:30:41Z) - Machine Learning Insides OptVerse AI Solver: Design Principles and
Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - A Survey of Contextual Optimization Methods for Decision Making under
Uncertainty [47.73071218563257]
This review article identifies three main frameworks for learning policies from data and discusses their strengths and limitations.
We present the existing models and methods under a uniform notation and terminology and classify them according to the three main frameworks.
arXiv Detail & Related papers (2023-06-17T15:21:02Z) - Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
arXiv Detail & Related papers (2023-04-15T19:22:37Z) - Deep Reinforcement Learning Approach for Trading Automation in The Stock
Market [0.0]
This paper presents a model to generate profitable trades in the stock market using Deep Reinforcement Learning (DRL) algorithms.
We formulate the trading problem as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market.
We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, reporting a 2.68 Sharpe ratio on an unseen data set.
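The target computation at the heart of TD3, the algorithm this entry uses, can be sketched in a few lines: clipped double-Q bootstrapping from the smaller of two twin critics, with smoothing noise on the target action. The linear critics and tanh policy below are toy stand-ins for the paper's neural networks, and all parameter values are illustrative defaults.

```python
import numpy as np

def td3_target(r, s_next, gamma, q1_target, q2_target, pi_target,
               noise_std=0.2, noise_clip=0.5, act_limit=1.0, rng=None):
    """TD3 Bellman target: r + gamma * min(Q1', Q2')(s', smoothed pi'(s'))."""
    rng = rng or np.random.default_rng(0)
    # target policy smoothing: add clipped Gaussian noise to the target action
    a_next = pi_target(s_next)
    noise = np.clip(rng.normal(0.0, noise_std, size=a_next.shape),
                    -noise_clip, noise_clip)
    a_next = np.clip(a_next + noise, -act_limit, act_limit)
    # clipped double-Q: bootstrap from the smaller of the twin critics
    q_min = np.minimum(q1_target(s_next, a_next), q2_target(s_next, a_next))
    return r + gamma * q_min

# toy linear critics and a tanh policy standing in for neural networks
q1 = lambda s, a: (s * a).sum()
q2 = lambda s, a: (s * a).sum() + 0.5
pi = lambda s: np.tanh(s)

s_next = np.array([0.1, -0.2])
y = td3_target(r=1.0, s_next=s_next, gamma=0.99,
               q1_target=q1, q2_target=q2, pi_target=pi)
print(float(y))
```

Taking the minimum over the twin critics counteracts the value overestimation that plagues single-critic actor-critic methods; the delayed policy updates of full TD3 are omitted here.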
arXiv Detail & Related papers (2022-07-05T11:34:29Z) - Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics
in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
Using predictive distributions to analyze the errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
arXiv Detail & Related papers (2022-03-07T18:59:54Z) - FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven
Deep Reinforcement Learning in Quantitative Finance [58.77314662664463]
FinRL-Meta builds a universe of market environments for data-driven financial reinforcement learning.
First, FinRL-Meta separates financial data processing from the design pipeline of DRL-based strategy.
Second, FinRL-Meta provides hundreds of market environments for various trading tasks.
arXiv Detail & Related papers (2021-12-13T16:03:37Z) - Bridging the gap between Markowitz planning and deep reinforcement
learning [0.0]
This paper shows how Deep Reinforcement Learning techniques can shed new light on portfolio allocation.
The advantages are numerous: (i) by design, DRL maps market conditions directly to actions and hence should adapt to a changing environment; (ii) DRL does not rely on traditional financial risk assumptions, such as risk being represented by variance; (iii) DRL can incorporate additional data and act as a multi-input method, as opposed to more traditional optimization methods.
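The direct mapping from market conditions to actions mentioned in advantage (i) can be sketched as a tiny policy: a linear layer (standing in for a trained network; its weights here are random and purely illustrative) maps market features to long-only portfolio weights via a softmax, so the weights are positive and sum to one, with no separate mean/covariance estimation step as in Markowitz planning.

```python
import numpy as np

def drl_policy_weights(features, W, b):
    """Map a market-feature vector straight to long-only portfolio weights."""
    logits = W @ features + b
    z = np.exp(logits - logits.max())  # numerically stable softmax
    return z / z.sum()

rng = np.random.default_rng(0)
n_assets, n_features = 4, 6
W = rng.normal(size=(n_assets, n_features))  # stand-in for trained weights
b = np.zeros(n_assets)
features = rng.normal(size=n_features)       # e.g. returns, volatility, signals
w = drl_policy_weights(features, W, b)
print(w, w.sum())
```

In actual DRL training the matrix `W` would be learned from reward signals (e.g. portfolio returns) rather than sampled at random; the softmax output layer is one common way to enforce the budget constraint by construction.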
arXiv Detail & Related papers (2020-09-30T04:03:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.