A unified algorithm framework for mean-variance optimization in
discounted Markov decision processes
- URL: http://arxiv.org/abs/2201.05737v1
- Date: Sat, 15 Jan 2022 02:19:56 GMT
- Title: A unified algorithm framework for mean-variance optimization in
discounted Markov decision processes
- Authors: Shuai Ma, Xiaoteng Ma, and Li Xia
- Abstract summary: This paper studies the risk-averse mean-variance optimization in infinite-horizon discounted Markov decision processes (MDPs).
We introduce a pseudo mean to transform the untreatable MDP to a standard one with a redefined reward function in standard form.
We propose a unified algorithm framework with a bilevel optimization structure for the discounted mean-variance optimization.
- Score: 7.510742715895749
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies the risk-averse mean-variance optimization in
infinite-horizon discounted Markov decision processes (MDPs). The involved
variance metric concerns reward variability during the whole process, and
future deviations are discounted to their present values. This discounted
mean-variance optimization yields a reward function dependent on a discounted
mean, and this dependency renders traditional dynamic programming methods
inapplicable since it suppresses a crucial property -- time consistency. To
deal with this unorthodox problem, we introduce a pseudo mean to transform the
untreatable MDP to a standard one with a redefined reward function in standard
form and derive a discounted mean-variance performance difference formula. With
the pseudo mean, we propose a unified algorithm framework with a bilevel
optimization structure for the discounted mean-variance optimization. The
framework unifies a variety of algorithms for several variance-related problems
including, but not limited to, risk-averse variance and mean-variance
optimizations in discounted and average MDPs. Furthermore, the convergence
analyses missing from the literature can be complemented with the proposed
framework as well. Taking the value iteration as an example, we develop a
discounted mean-variance value iteration algorithm and prove its convergence to
a local optimum with the aid of a Bellman local-optimality equation. Finally,
we conduct a numerical experiment on portfolio management to validate the
proposed algorithm.
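
To make the bilevel structure concrete, the sketch below implements a tabular pseudo-mean value iteration in NumPy: an inner standard value iteration on a reward redefined around a pseudo mean y, and an outer update of y toward the greedy policy's normalized discounted mean reward. This is a hedged reading of the abstract, illustrating an objective of the rough form E[sum_t gamma^t r_t] - beta * E[sum_t gamma^t (r_t - mu)^2]; the reward transformation R - beta*(R - y)^2, the pseudo-mean update rule, the uniform start-state averaging, and all names below are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def discounted_mv_value_iteration(P, R, gamma, beta,
                                  n_outer=50, n_inner=1000, tol=1e-8):
    """Bilevel sketch of a discounted mean-variance value iteration.

    Outer level: iterate over a pseudo mean y.
    Inner level: standard value iteration on a transformed MDP whose
    reward is penalized by squared deviation from y.

    P: transitions, shape (S, A, S); R: rewards, shape (S, A);
    gamma: discount factor; beta: risk-aversion weight.
    The reward form R - beta * (R - y)**2 and the pseudo-mean update
    below are illustrative assumptions, not the paper's exact formulation.
    """
    S, A, _ = P.shape
    y = float(R.mean())                      # initial pseudo mean (arbitrary start)
    V = np.zeros(S)
    policy = np.zeros(S, dtype=int)

    for _ in range(n_outer):
        F = R - beta * (R - y) ** 2          # redefined reward under pseudo mean y
        for _ in range(n_inner):             # inner: standard value iteration
            Q = F + gamma * (P @ V)          # action values, shape (S, A)
            V_new = Q.max(axis=1)
            done = np.max(np.abs(V_new - V)) < tol
            V = V_new
            if done:
                break
        policy = Q.argmax(axis=1)

        # Outer update: move the pseudo mean to the normalized discounted
        # mean reward of the current greedy policy, averaged over a uniform
        # start distribution (a simple fixed-point style update).
        P_pi = P[np.arange(S), policy]       # (S, S) transition matrix under policy
        R_pi = R[np.arange(S), policy]       # (S,) reward under policy
        V_mean = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)
        y_new = float((1.0 - gamma) * V_mean.mean())
        if abs(y_new - y) < tol:
            break
        y = y_new

    return policy, V, y
```

A joint fixed point of the outer pseudo-mean update and the inner Bellman backup plays the role of the local optimum characterized in the paper via its Bellman local-optimality equation; here P and R are hypothetical tabular inputs supplied by the user.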
Related papers
- Alternating Minimization Schemes for Computing Rate-Distortion-Perception Functions with $f$-Divergence Perception Constraints [10.564071872770146]
We study the computation of the rate-distortion-perception function (RDPF) for discrete memoryless sources.
We characterize the optimal parametric solutions.
We provide sufficient conditions on the distortion and the perception constraints.
arXiv Detail & Related papers (2024-08-27T12:50:12Z)
- Contextual Optimization under Covariate Shift: A Robust Approach by Intersecting Wasserstein Balls [18.047245099229325]
We propose a distributionally robust approach that uses an ambiguity set defined by the intersection of two Wasserstein balls.
We demonstrate the strong empirical performance of our proposed models.
arXiv Detail & Related papers (2024-06-04T15:46:41Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Sharp Variance-Dependent Bounds in Reinforcement Learning: Best of Both Worlds in Stochastic and Deterministic Environments [48.96971760679639]
We study variance-dependent regret bounds for Markov decision processes (MDPs).
We propose two new environment norms to characterize the fine-grained variance properties of the environment.
For model-based methods, we design a variant of the MVP algorithm.
In particular, this bound is simultaneously minimax optimal for both stochastic and deterministic MDPs.
arXiv Detail & Related papers (2023-01-31T06:54:06Z)
- Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced classification tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z)
- Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z)
- Risk-Sensitive Markov Decision Processes with Combined Metrics of Mean and Variance [3.062772835338966]
This paper investigates the optimization problem of an infinite-stage discrete-time Markov decision process (MDP) with a long-run average metric.
A performance difference formula is derived, which quantifies the difference of the mean-variance combined metric between any two policies.
A necessary condition for the optimal policy is derived, and the optimality of deterministic policies is established.
arXiv Detail & Related papers (2020-08-09T10:35:35Z)
- Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis [102.29671176698373]
We address the problem of policy evaluation in discounted decision processes, and provide Markov-dependent guarantees on the $\ell_\infty$-error under a generative model.
We establish both asymptotic and non-asymptotic versions of local minimax lower bounds for policy evaluation, thereby providing an instance-dependent baseline by which to compare algorithms.
arXiv Detail & Related papers (2020-03-16T17:15:28Z)
- Distributionally Robust Bayesian Optimization [121.71766171427433]
We present a novel distributionally robust Bayesian optimization algorithm (DRBO) for zeroth-order, noisy optimization.
Our algorithm provably obtains sub-linear robust regret in various settings.
We demonstrate the robust performance of our method on both synthetic and real-world benchmarks.
arXiv Detail & Related papers (2020-02-20T22:04:30Z)