Risk-Averse Reinforcement Learning via Dynamic Time-Consistent Risk
Measures
- URL: http://arxiv.org/abs/2301.05981v1
- Date: Sat, 14 Jan 2023 21:43:18 GMT
- Title: Risk-Averse Reinforcement Learning via Dynamic Time-Consistent Risk
Measures
- Authors: Xian Yu, Siqian Shen
- Abstract summary: In this paper, we consider the problem of maximizing dynamic risk of a sequence of rewards in Markov Decision Processes (MDPs).
Using a convex combination of expectation and conditional value-at-risk (CVaR) as a special one-step conditional risk measure, we reformulate the risk-averse MDP as a risk-neutral counterpart with augmented action space and manipulation on the immediate rewards.
Our numerical studies show that the risk-averse setting can reduce the variance and enhance robustness of the results.
- Score: 10.221369785560785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional reinforcement learning (RL) aims to maximize the expected total
reward, while the risk of uncertain outcomes needs to be controlled to ensure
reliable performance in a risk-averse setting. In this paper, we consider the
problem of maximizing dynamic risk of a sequence of rewards in infinite-horizon
Markov Decision Processes (MDPs). We adapt the Expected Conditional Risk
Measures (ECRMs) to the infinite-horizon risk-averse MDP and prove its time
consistency. Using a convex combination of expectation and conditional
value-at-risk (CVaR) as a special one-step conditional risk measure, we
reformulate the risk-averse MDP as a risk-neutral counterpart with augmented
action space and manipulation on the immediate rewards. We further prove that
the related Bellman operator is a contraction mapping, which guarantees the
convergence of any value-based RL algorithms. Accordingly, we develop a
risk-averse deep Q-learning framework, and our numerical studies based on two
simple MDPs show that the risk-averse setting can reduce the variance and
enhance robustness of the results.
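To make the one-step risk measure concrete, here is a minimal NumPy sketch (not the authors' code; the names `lam`, `alpha`, and the sampling-based tabular backup are illustrative assumptions). It evaluates rho(X) = (1 - lam) * E[X] + lam * CVaR_alpha(X) empirically, where CVaR_alpha averages the worst alpha-fraction of reward outcomes, and applies rho inside a Bellman backup. The paper itself avoids this nested sampling by augmenting the action space with the CVaR dual variable; a numerical check of that dual identity appears after the related-papers list below.

```python
import numpy as np

def cvar(samples, alpha):
    """Empirical CVaR_alpha for rewards: the mean of the worst
    ceil(alpha * n) outcomes (the lower tail)."""
    sorted_samples = np.sort(samples)
    k = max(1, int(np.ceil(alpha * sorted_samples.size)))
    return sorted_samples[:k].mean()

def one_step_risk(samples, lam, alpha):
    """Convex combination of expectation and CVaR:
    rho(X) = (1 - lam) * E[X] + lam * CVaR_alpha(X)."""
    return (1.0 - lam) * samples.mean() + lam * cvar(samples, alpha)

def risk_averse_backup(V, P, R, gamma, lam, alpha, n_samples=2000, seed=0):
    """One application of the risk-averse Bellman operator
    (T V)(s) = max_a rho(R[s, a, s'] + gamma * V[s']),
    estimated by sampling next states s' ~ P[s, a, :].
    P has shape (nS, nA, nS); R has shape (nS, nA, nS)."""
    rng = np.random.default_rng(seed)
    n_states, n_actions = P.shape[0], P.shape[1]
    V_new = np.empty(n_states)
    for s in range(n_states):
        q_values = np.empty(n_actions)
        for a in range(n_actions):
            s_next = rng.choice(n_states, size=n_samples, p=P[s, a])
            targets = R[s, a, s_next] + gamma * V[s_next]
            q_values[a] = one_step_risk(targets, lam, alpha)
        V_new[s] = q_values.max()
    return V_new
```

Because rho is applied one step at a time and composed recursively, the induced dynamic risk measure is time-consistent, and (as the paper proves for its reformulated operator) the backup remains a gamma-contraction, so iterating `risk_averse_backup` to a fixed point converges; with lam = 0 it reduces to standard value iteration.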
Related papers
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- RiskQ: Risk-sensitive Multi-Agent Reinforcement Learning Value Factorization [49.26510528455664]
We introduce the Risk-sensitive Individual-Global-Max (RIGM) principle as a generalization of the Individual-Global-Max (IGM) and Distributional IGM (DIGM) principles.
Extensive experiments show that RiskQ obtains promising performance.
arXiv Detail & Related papers (2023-11-03T07:18:36Z)
- Regret Bounds for Risk-sensitive Reinforcement Learning with Lipschitz Dynamic Risk Measures [23.46659319363579]
We present two model-based algorithms applied to Lipschitz dynamic risk measures.
Notably, our upper bounds demonstrate optimal dependencies on the number of actions and episodes.
arXiv Detail & Related papers (2023-06-04T16:24:19Z)
- One Risk to Rule Them All: A Risk-Sensitive Perspective on Model-Based Offline Reinforcement Learning [25.218430053391884]
We propose risk-sensitivity as a mechanism to jointly address both epistemic and aleatoric uncertainty.
Risk-aversion to aleatoric uncertainty discourages actions that may result in poor outcomes due to environment stochasticity.
Our experiments show that our algorithm achieves competitive performance on deterministic benchmarks.
arXiv Detail & Related papers (2022-11-30T21:24:11Z)
- RASR: Risk-Averse Soft-Robust MDPs with EVaR and Entropic Risk [28.811725782388688]
We propose and analyze a new framework to jointly model the risk associated with epistemic and aleatory uncertainties in finite-horizon and discounted infinite-horizon MDPs.
We show that when the risk-aversion is defined using either EVaR or the entropic risk, the optimal policy in RASR can be computed efficiently using a new dynamic program formulation with a time-dependent risk level.
arXiv Detail & Related papers (2022-09-09T00:34:58Z)
- Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that, under certain conditions, optimizing such a measure inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z)
- SENTINEL: Taming Uncertainty with Ensemble-based Distributional Reinforcement Learning [6.587644069410234]
We consider risk-sensitive sequential decision-making in model-based reinforcement learning (RL).
We introduce a novel quantification of risk, namely composite risk.
We experimentally verify that SENTINEL-K estimates the return distribution better and, when used with the composite risk estimate, demonstrates better risk-sensitive performance than competing RL algorithms.
arXiv Detail & Related papers (2021-02-22T14:45:39Z)
- Risk-Constrained Thompson Sampling for CVaR Bandits [82.47796318548306]
We consider a popular risk measure in quantitative finance known as the Conditional Value at Risk (CVaR).
We explore the performance of a Thompson Sampling-based algorithm CVaR-TS under this risk measure.
arXiv Detail & Related papers (2020-11-16T15:53:22Z)
- Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
- Mean-Variance Policy Iteration for Risk-Averse Reinforcement Learning [75.17074235764757]
We present a mean-variance policy iteration (MVPI) framework for risk-averse control in a discounted infinite-horizon MDP.
MVPI enjoys great flexibility in that any policy evaluation method and risk-neutral control method can be dropped in for risk-averse control off the shelf.
This flexibility reduces the gap between risk-neutral control and risk-averse control and is achieved by working on a novel augmented MDP.
arXiv Detail & Related papers (2020-04-22T22:23:44Z)
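Several of the entries above (the main paper's augmented-action reformulation, RASR, CVaR-TS) involve CVaR, whose augmented-variable treatment is commonly obtained via the Rockafellar-Uryasev dual representation. As a hedged illustration (not taken from any of the listed papers; the distribution, sample size, and search grid are arbitrary assumptions), the following sketch checks numerically that CVaR_alpha(X) = max over q of { q - E[(q - X)_+] / alpha } for rewards:

```python
import numpy as np

rng = np.random.default_rng(1)
rewards = rng.normal(loc=1.0, scale=2.0, size=50_000)
alpha = 0.1

# Direct empirical CVaR: mean of the worst alpha-fraction of rewards.
k = int(np.ceil(alpha * rewards.size))
cvar_direct = np.sort(rewards)[:k].mean()

# Rockafellar-Uryasev dual:
#   CVaR_alpha(X) = max over q of  q - E[(q - X)_+] / alpha,
# whose maximizer q* approximates the alpha-quantile VaR_alpha(X).
grid = np.linspace(rewards.min(), rewards.max(), 1001)
dual_values = np.array(
    [q - np.maximum(q - rewards, 0.0).mean() / alpha for q in grid]
)
cvar_dual = dual_values.max()
q_star = grid[dual_values.argmax()]

print(f"direct CVaR: {cvar_direct:.4f}")
print(f"dual   CVaR: {cvar_dual:.4f}  (maximizer q* ~ {q_star:.4f})")
```

Treating q as an extra decision variable chosen alongside the original action, and folding the penalty term E[(q - X)_+] / alpha into the immediate reward, is what turns a risk-averse problem into a risk-neutral one over an augmented action space, in the spirit of the reformulation described in the main abstract.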