On the Convergence and Optimality of Policy Gradient for Markov Coherent Risk
- URL: http://arxiv.org/abs/2103.02827v2
- Date: Fri, 5 Mar 2021 20:49:55 GMT
- Title: On the Convergence and Optimality of Policy Gradient for Markov Coherent Risk
- Authors: Audrey Huang, Liu Leqi, Zachary C. Lipton, Kamyar Azizzadenesheli
- Abstract summary: We present a tight upper bound on the suboptimality of the learned policy, characterizing its dependence on the nonlinearity of the objective and the degree of risk aversion.
We propose a practical implementation of PG that uses state distribution reweighting to overcome previous limitations.
- Score: 32.97618081988295
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In order to model risk aversion in reinforcement learning, an emerging line
of research adapts familiar algorithms to optimize coherent risk functionals, a
class that includes conditional value-at-risk (CVaR). Because optimizing the
coherent risk is difficult in Markov decision processes, recent work tends to
focus on the Markov coherent risk (MCR), a time-consistent surrogate. While
policy gradient (PG) updates have been derived for this objective, it remains
unclear (i) whether PG finds a global optimum for MCR; (ii) how to estimate the
gradient in a tractable manner. In this paper, we demonstrate that, in general,
MCR objectives (unlike the expected return) are not gradient dominated and that
stationary points are not, in general, guaranteed to be globally optimal.
Moreover, we present a tight upper bound on the suboptimality of the learned
policy, characterizing its dependence on the nonlinearity of the objective and
the degree of risk aversion. Addressing (ii), we propose a practical
implementation of PG that uses state distribution reweighting to overcome
previous limitations. Through experiments, we demonstrate that when the
optimality gap is small, PG can learn risk-sensitive policies. However, we find
that instances with large suboptimality gaps are abundant and easy to
construct, outlining an important challenge for future research.
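For concreteness, a minimal sketch of the objects involved, under assumed notation (costs c, discount γ, transition kernel P; conventions differ on exactly where the risk is applied): CVaR admits the standard Rockafellar-Uryasev variational form, and the Markov coherent risk applies a static coherent risk measure ρ recursively over the next-state distribution.

$$\mathrm{CVaR}_\alpha(Z) \;=\; \inf_{t \in \mathbb{R}} \Big\{ t + \tfrac{1}{\alpha}\, \mathbb{E}\big[(Z - t)_+\big] \Big\}, \qquad \alpha \in (0,1],$$

$$V^\pi_\rho(s) \;=\; \mathbb{E}_{a \sim \pi(\cdot\mid s)}\Big[ c(s,a) + \gamma\, \rho_{s' \sim P(\cdot\mid s,a)}\big( V^\pi_\rho(s') \big) \Big], \qquad \rho_{\mathrm{MCR}}(\pi) = V^\pi_\rho(s_0).$$

When ρ is the plain expectation, the recursion reduces to the ordinary Bellman equation; for nonlinear ρ such as CVaR, the gradient-dominance property that underlies global convergence of PG for the expected return can fail, which is the source of the suboptimality gap the paper bounds.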
Related papers
- Stationary Policies are Optimal in Risk-averse Total-reward MDPs with EVaR [12.719528972742394]
We show that the risk-averse total reward criterion can be optimized by a stationary policy.
Our results indicate that the total reward criterion may be preferable to the discounted criterion in a broad range of risk-averse reinforcement learning domains.
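For reference, the entropic value-at-risk (EVaR) named in this entry is commonly defined through the moment generating function; the formula below is one standard convention (risk level α as the tail probability), not reproduced from the paper:

$$\mathrm{EVaR}_\alpha(X) \;=\; \inf_{z > 0} \frac{1}{z} \ln\!\left( \frac{\mathbb{E}\big[e^{zX}\big]}{\alpha} \right), \qquad \alpha \in (0,1],$$

for a loss X; it arises from the Chernoff bound and upper-bounds CVaR at the same level.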
arXiv Detail & Related papers (2024-08-30T13:33:18Z)
- Robust Risk-Sensitive Reinforcement Learning with Conditional Value-at-Risk [23.63388546004777]
We analyze the robustness of CVaR-based risk-sensitive RL under Robust Markov Decision Processes.
Motivated by the existence of decision-dependent uncertainty in real-world problems, we study problems with state-action-dependent ambiguity sets.
arXiv Detail & Related papers (2024-05-02T20:28:49Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
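As a rough sketch only, an uncertainty Bellman equation generally has the following shape, where the local-uncertainty term w(s,a) and all notation are assumptions of this sketch rather than the paper's specific UBE:

$$U^\pi(s,a) \;=\; w(s,a) + \gamma^2\, \mathbb{E}_{s' \sim P(\cdot\mid s,a),\; a' \sim \pi(\cdot\mid s')}\big[ U^\pi(s',a') \big],$$

with w(s,a) aggregating local epistemic uncertainty about rewards and transitions; the entry's contribution is a particular UBE of this kind whose solution converges to the true posterior variance over values.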
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Provably Efficient Iterated CVaR Reinforcement Learning with Function Approximation and Human Feedback [57.6775169085215]
Risk-sensitive reinforcement learning aims to optimize policies that balance the expected reward and risk.
We present a novel framework that employs an Iterated Conditional Value-at-Risk (CVaR) objective under both linear and general function approximations.
We propose provably sample-efficient algorithms for this Iterated CVaR RL and provide rigorous theoretical analysis.
arXiv Detail & Related papers (2023-07-06T08:14:54Z)
- On the Global Convergence of Risk-Averse Policy Gradient Methods with Expected Conditional Risk Measures [17.668631383216233]
Risk-sensitive reinforcement learning (RL) has become a popular tool for controlling the risk of uncertain outcomes.
It remains unclear if Policy Gradient (PG) methods enjoy the same global convergence guarantees as in the risk-neutral case.
arXiv Detail & Related papers (2023-01-26T04:35:28Z)
- RASR: Risk-Averse Soft-Robust MDPs with EVaR and Entropic Risk [28.811725782388688]
We propose and analyze a new framework to jointly model the risk associated with uncertainties in finite-horizon and discounted infinite-horizon MDPs.
We show that when the risk-aversion is defined using either EVaR or the entropic risk, the optimal policy in RASR can be computed efficiently using a new dynamic program formulation with a time-dependent risk level.
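For reference, the entropic risk measure named in this entry is commonly written as follows for a cost X with risk-aversion parameter β > 0 (one standard convention; sign conventions flip for rewards):

$$\rho_\beta(X) \;=\; \frac{1}{\beta} \log \mathbb{E}\big[ e^{\beta X} \big], \qquad \beta > 0,$$

which recovers the plain expectation as β → 0; the entry's dynamic program reportedly varies the effective risk level with the time step, and its exact form is not reproduced here.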
arXiv Detail & Related papers (2022-09-09T00:34:58Z)
- Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z)
- Policy Gradient Bayesian Robust Optimization for Imitation Learning [49.881386773269746]
We derive a novel policy gradient-style robust optimization approach, PG-BROIL, to balance expected performance and risk.
Results suggest PG-BROIL can produce a family of behaviors ranging from risk-neutral to risk-averse.
arXiv Detail & Related papers (2021-06-11T16:49:15Z)
- Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence [60.20076757208645]
This paper proposes a general policy mirror descent (GPMD) algorithm for solving regularized RL.
We demonstrate that our algorithm converges linearly over an entire range of learning rates, in a dimension-free fashion, to the global solution.
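As a schematic of the update family this entry refers to, a regularized policy mirror descent step can be written as below; the notation (regularizer h with weight τ, Bregman-type divergence D_h, learning rate η) is assumed, and GPMD's exact update generalizes this to nonsmooth regularizers:

$$\pi^{(t+1)}(\cdot \mid s) \;=\; \arg\max_{p \in \Delta(\mathcal{A})} \Big\{ \big\langle Q^{(t)}(s,\cdot),\, p \big\rangle - \tau\, h(p) - \tfrac{1}{\eta}\, D_h\big(p,\, \pi^{(t)}(\cdot \mid s)\big) \Big\}.$$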
arXiv Detail & Related papers (2021-05-24T02:21:34Z)
- Risk-Sensitive Deep RL: Variance-Constrained Actor-Critic Provably Finds Globally Optimal Policy [95.98698822755227]
We make the first attempt to study risk-sensitive deep reinforcement learning under the average reward setting with the variance risk criterion.
We propose an actor-critic algorithm that iteratively and efficiently updates the policy, the Lagrange multiplier, and the Fenchel dual variable.
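The Fenchel dual variable mentioned here typically enters through the standard variational form of the variance; the identity below is generic and not the paper's exact construction:

$$\mathrm{Var}(X) \;=\; \mathbb{E}[X^2] - \big(\mathbb{E}[X]\big)^2 \;=\; \min_{y \in \mathbb{R}} \mathbb{E}\big[(X - y)^2\big], \qquad \text{using } x^2 = \max_{y \in \mathbb{R}} \big(2xy - y^2\big).$$

Introducing the auxiliary variable y turns the squared expectation into a term that can be updated by stochastic gradients alongside the policy and the Lagrange multiplier.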
arXiv Detail & Related papers (2020-12-28T05:02:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.