Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff
in Regret
- URL: http://arxiv.org/abs/2006.13827v1
- Date: Mon, 22 Jun 2020 19:28:26 GMT
- Title: Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff
in Regret
- Authors: Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang, Qiaomin Xie
- Abstract summary: We study risk-sensitive reinforcement learning in episodic Markov decision processes with unknown transition kernels.
We propose two provably efficient model-free algorithms, Risk-Sensitive Value Iteration (RSVI) and Risk-Sensitive Q-learning (RSQ).
We prove that RSVI attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^{3} S^{2}AT}\big)$ regret, while RSQ attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^{4} SAT}\big)$ regret, where $\lambda(u) = (e^{3u}-1)/u$ for $u>0$.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study risk-sensitive reinforcement learning in episodic Markov decision
processes with unknown transition kernels, where the goal is to optimize the
total reward under the risk measure of exponential utility. We propose two
provably efficient model-free algorithms, Risk-Sensitive Value Iteration (RSVI)
and Risk-Sensitive Q-learning (RSQ). These algorithms implement a form of
risk-sensitive optimism in the face of uncertainty, which adapts to both
risk-seeking and risk-averse modes of exploration. We prove that RSVI attains
an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot \sqrt{H^{3} S^{2}AT} \big)$
regret, while RSQ attains an $\tilde{O}\big(\lambda(|\beta| H^2) \cdot
\sqrt{H^{4} SAT} \big)$ regret, where $\lambda(u) = (e^{3u}-1)/u$ for $u>0$. In
the above, $\beta$ is the risk parameter of the exponential utility function,
$S$ the number of states, $A$ the number of actions, $T$ the total number of
timesteps, and $H$ the episode length. On the flip side, we establish a regret
lower bound showing that the exponential dependence on $|\beta|$ and $H$ is
unavoidable for any algorithm with an $\tilde{O}(\sqrt{T})$ regret (even when
the risk objective is on the same scale as the original reward), thus
certifying the near-optimality of the proposed algorithms. Our results
demonstrate that incorporating risk awareness into reinforcement learning
necessitates an exponential cost in $|\beta|$ and $H$, which quantifies the
fundamental tradeoff between risk sensitivity (related to aleatoric
uncertainty) and sample efficiency (related to epistemic uncertainty). To the
best of our knowledge, this is the first regret analysis of risk-sensitive
reinforcement learning with the exponential utility.
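For orientation, the exponential-utility objective referenced above is commonly written in the following standard (entropic risk) form; this is textbook background rather than a quotation from the paper, and the authors' precise episodic definition may differ in minor details:
$$ V^{\pi}_{\beta} \;=\; \frac{1}{\beta}\,\log \mathbb{E}^{\pi}\!\left[\exp\Big(\beta \sum_{h=1}^{H} r_h(s_h, a_h)\Big)\right]. $$
A second-order Taylor expansion around $\beta = 0$ gives
$$ V^{\pi}_{\beta} \;\approx\; \mathbb{E}^{\pi}\Big[\sum_{h=1}^{H} r_h\Big] \;+\; \frac{\beta}{2}\,\mathrm{Var}^{\pi}\Big(\sum_{h=1}^{H} r_h\Big), $$
which makes explicit why $\beta > 0$ corresponds to risk-seeking exploration, $\beta < 0$ to risk-averse exploration, and $\beta \to 0$ recovers the standard risk-neutral objective.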
Related papers
- Improved Regret Bound for Safe Reinforcement Learning via Tighter Cost Pessimism and Reward Optimism
We propose a model-based algorithm based on novel cost and reward function estimators.
Our algorithm achieves a regret upper bound of $\widetilde{\mathcal{O}}\big((\bar C - \bar C_b)^{-1} H^{2.5} S\sqrt{AK}\big)$, where $\bar C$ is the cost budget for an episode.
arXiv Detail & Related papers (2024-10-14T04:51:06Z)
- Uncertainty-Aware Reward-Free Exploration with General Function Approximation
In this paper, we propose a reward-free reinforcement learning algorithm called GFA-RFE.
The key idea behind our algorithm is an uncertainty-aware intrinsic reward for exploring the environment.
Experimental results show that GFA-RFE outperforms or is comparable to state-of-the-art unsupervised RL algorithms.
arXiv Detail & Related papers (2024-06-24T01:37:18Z)
- Regret Bounds for Episodic Risk-Sensitive Linear Quadratic Regulator
The risk-sensitive linear quadratic regulator is one of the most fundamental problems in risk-sensitive optimal control.
We propose a simple least-squares greedy algorithm and show that it achieves $\widetilde{\mathcal{O}}(\log N)$ regret.
This is the first set of regret bounds for the episodic risk-sensitive linear quadratic regulator.
arXiv Detail & Related papers (2024-06-08T06:06:20Z)
- Risk Estimation in a Markov Cost Process: Lower and Upper Bounds
We tackle the problem of estimating risk measures of the infinite-horizon discounted cost within a Markov cost process.
The risk measures we study include variance, Value-at-Risk (VaR), and Conditional Value-at-Risk (CVaR).
arXiv Detail & Related papers (2023-10-17T16:35:39Z)
- Near-Minimax-Optimal Risk-Sensitive Reinforcement Learning with CVaR
We study risk-sensitive Reinforcement Learning (RL), focusing on the objective of Conditional Value at Risk (CVaR) with risk tolerance $\tau$.
We show that the minimax CVaR regret rate is $\Omega(\sqrt{\tau^{-1}AK})$, where $A$ is the number of actions and $K$ is the number of episodes.
We show that our algorithm achieves the optimal regret of $\widetilde{O}(\tau^{-1}\sqrt{SAK})$ under a continuity assumption and in general attains a near-optimal regret.
arXiv Detail & Related papers (2023-02-07T02:22:31Z)
- Bridging Distributional and Risk-sensitive Reinforcement Learning with Provable Regret Bounds
We consider finite episodic Markov decision processes whose objective is the entropic risk measure (EntRM) of return.
We propose two novel DRL algorithms that implement optimism through two different schemes, including a model-free one and a model-based one.
We prove that they both attain an $\widetilde{\mathcal{O}}\big(\frac{\exp(|\beta| H)-1}{|\beta| H}\sqrt{S^{2}AK}\big)$ regret upper bound, where $S$, $A$, $K$, and $H$ denote the number of states, the number of actions, the number of episodes, and the episode length, respectively. (This exponential prefactor is contrasted with the $\lambda(|\beta| H^2)$ factor from the abstract above in the numerical sketch after this list.)
arXiv Detail & Related papers (2022-10-25T14:30:48Z)
- Efficient Risk-Averse Reinforcement Learning
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z)
- Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning
A novel model-free algorithm is proposed to minimize regret in episodic reinforcement learning.
The proposed algorithm employs an early-settled reference update rule, with the aid of two Q-learning sequences.
The design principle of our early-settled variance reduction method might be of independent interest to other RL settings.
arXiv Detail & Related papers (2021-10-09T21:13:48Z)
- Randomized Exploration for Reinforcement Learning with General Value Function Approximation
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm.
Our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises.
We complement the theory with an empirical evaluation across known difficult exploration tasks.
arXiv Detail & Related papers (2021-06-15T02:23:07Z)
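To make the risk-sample tradeoff concrete, the following is a small, self-contained numerical sketch; it is not taken from any of the papers above, and the values of beta and H are arbitrary illustrative choices. It evaluates the prefactor $\lambda(|\beta| H^2) = (e^{3|\beta| H^2}-1)/(|\beta| H^2)$ from the abstract alongside the milder $(e^{|\beta| H}-1)/(|\beta| H)$ prefactor quoted in the EntRM entry, showing how quickly both grow with $|\beta|$ and $H$.

# A small numerical illustration (not from the papers above) of the exponential
# risk-sample tradeoff: the regret prefactor lambda(u) = (exp(3u) - 1) / u
# evaluated at u = |beta| * H^2 for RSVI/RSQ, compared with the milder
# prefactor (exp(|beta| * H) - 1) / (|beta| * H) quoted in the EntRM entry.
# The values of beta and H below are arbitrary choices for illustration.
import math


def lam(u: float) -> float:
    """lambda(u) = (e^{3u} - 1) / u for u > 0, as defined in the abstract."""
    return (math.exp(3.0 * u) - 1.0) / u


def entrm_prefactor(beta: float, horizon: int) -> float:
    """(e^{|beta| H} - 1) / (|beta| H), the prefactor quoted for the EntRM bound."""
    u = abs(beta) * horizon
    return (math.exp(u) - 1.0) / u


if __name__ == "__main__":
    H = 10  # episode length (illustrative)
    for beta in (0.01, 0.05, 0.1):
        u = abs(beta) * H ** 2
        print(
            f"|beta|={beta:>5}: lambda(|beta| H^2) = {lam(u):.3e}, "
            f"(e^{{|beta| H}}-1)/(|beta| H) = {entrm_prefactor(beta, H):.3e}"
        )
    # Both prefactors grow exponentially in |beta| and H, consistent with the
    # lower bound in the abstract; as beta -> 0 they approach the risk-neutral
    # constants 3 and 1, respectively, since (e^{cu}-1)/u -> c as u -> 0.

Running the sketch shows the $\lambda(|\beta| H^2)$ factor already exceeding $10^{12}$ at $|\beta| = 0.1$ and $H = 10$, while both prefactors reduce to small constants in the risk-neutral limit $\beta \to 0$.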