Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning
- URL: http://arxiv.org/abs/2312.15385v1
- Date: Sun, 24 Dec 2023 02:08:49 GMT
- Title: Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning
- Authors: Xiangyu Cui, Xun Li, Yun Shi and Si Zhao
- Abstract summary: Using entropy to measure the cost of exploration, we derive the optimal investment strategy.
We design the corresponding reinforcement learning algorithm.
Our model exhibits better applicability to real-world data than the continuous-time model.
- Score: 5.8184275610981615
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper studies a discrete-time mean-variance model based on reinforcement
learning. Compared with its continuous-time counterpart in \cite{zhou2020mv},
the discrete-time model makes more general assumptions about the asset's return
distribution. Using entropy to measure the cost of exploration, we derive the
optimal investment strategy, whose density function is also of Gaussian type.
Additionally, we design the corresponding reinforcement learning algorithm.
Both simulation experiments and empirical analysis indicate that our
discrete-time model exhibits better applicability to real-world data than the
continuous-time model.
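For a concrete picture of the ideas in the abstract, here is a minimal, hypothetical Python sketch (not the authors' code): a discrete-time mean-variance objective in Lagrangian form with an entropy bonus, explored by a Gaussian policy and trained with a plain policy-gradient loop. The linear allocation rule, the i.i.d. simulated return model, the names `theta`, `log_var`, `z`, `lam`, and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' algorithm): maximize the
# mean-variance-style terminal reward E[-(X_T - z)^2] plus an entropy bonus,
# using a Gaussian exploratory policy and plain REINFORCE updates.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.05, 0.20      # assumed i.i.d. excess-return mean / volatility
T, episodes = 20, 2000      # investment horizon (periods) and training episodes
z = 1.4                     # wealth target in the Lagrangian form of mean-variance
lam = 0.1                   # entropy (exploration) temperature
lr = 0.01                   # learning rate

# Gaussian exploratory policy over the amount u_t held in the risky asset:
# u_t ~ N(theta * (z - x_t), exp(log_var)); the linear mean is an assumed form.
theta, log_var = 0.5, np.log(lam)

for _ in range(episodes):
    x = 1.0                                   # initial wealth
    g_theta = g_logvar = 0.0                  # accumulated score-function terms
    for _ in range(T):
        mean_u = theta * (z - x)
        std_u = np.exp(0.5 * log_var)
        u = rng.normal(mean_u, std_u)         # sample an exploratory allocation
        # gradients of log N(u; mean_u, std_u^2) w.r.t. theta and log_var
        g_theta += (u - mean_u) / std_u**2 * (z - x)
        g_logvar += 0.5 * ((u - mean_u) ** 2 / std_u**2 - 1.0)
        x += u * rng.normal(mu, sigma)        # wealth update under the toy market
    reward = -(x - z) ** 2                    # terminal mean-variance-style reward
    # REINFORCE ascent on E[reward]; the Gaussian entropy depends only on log_var,
    # so the entropy bonus contributes a direct gradient of 0.5 per period.
    theta += lr * reward * g_theta
    log_var += lr * (reward * g_logvar + lam * 0.5 * T)

print(f"allocation slope theta ~= {theta:.3f}, exploration std ~= {np.exp(0.5 * log_var):.3f}")
```

The sketch parameterizes the policy directly as a Gaussian because the paper shows the optimal exploratory strategy has a Gaussian-type density; the paper's actual objective, assumptions on the return distribution, and learning algorithm differ from this toy loop.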
Related papers
- Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis [56.442307356162864]
We study the theoretical aspects of score-based discrete diffusion models under the Continuous Time Markov Chain (CTMC) framework.
We introduce a discrete-time sampling algorithm in the general state space $[S]^d$ that utilizes score estimators at predefined time points.
Our convergence analysis employs a Girsanov-based method and establishes key properties of the discrete score function.
arXiv Detail & Related papers (2024-10-03T09:07:13Z) - Reward-Directed Score-Based Diffusion Models via q-Learning [8.725446812770791]
We propose a new reinforcement learning (RL) formulation for training continuous-time score-based diffusion models for generative AI.
Our formulation does not involve any pretrained model for the unknown score functions of the noise-perturbed data distributions.
arXiv Detail & Related papers (2024-09-07T13:55:45Z) - Efficient Exploration in Continuous-time Model-based Reinforcement Learning [37.14026153342745]
Reinforcement learning algorithms typically consider discrete-time dynamics, even though the underlying systems are often continuous in time.
We introduce a model-based reinforcement learning algorithm that represents continuous-time dynamics.
arXiv Detail & Related papers (2023-10-30T15:04:40Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectories, human motion, driving scenes, traffic flow, and weather forecasting.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of latent variable models for state-action value functions, which enables both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - How Much is Enough? A Study on Diffusion Times in Score-based Generative Models [76.76860707897413]
Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution.
We show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process.
arXiv Detail & Related papers (2022-06-10T15:09:46Z) - Time Adaptive Gaussian Model [0.913755431537592]
Our model is a generalization of state-of-the-art methods for the inference of temporal graphical models.
It performs pattern recognition by clustering data points in time, and it finds probabilistic (and possibly causal) relationships among the observed variables.
arXiv Detail & Related papers (2021-02-02T00:28:14Z) - Model Embedding Model-Based Reinforcement Learning [4.566180616886624]
Model-based reinforcement learning (MBRL) has shown its advantage in sample efficiency over model-free reinforcement learning (MFRL).
Despite the impressive results it achieves, it still faces a trade-off between the ease of data generation and model bias.
We propose a simple and elegant model-embedding model-based reinforcement learning (MEMB) algorithm in the framework of probabilistic reinforcement learning.
arXiv Detail & Related papers (2020-06-16T15:10:28Z) - Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.