Is Risk-Sensitive Reinforcement Learning Properly Resolved?
- URL: http://arxiv.org/abs/2307.00547v1
- Date: Sun, 2 Jul 2023 11:47:21 GMT
- Title: Is Risk-Sensitive Reinforcement Learning Properly Resolved?
- Authors: Ruiwen Zhou, Minghuan Liu, Kan Ren, Xufang Luo, Weinan Zhang,
Dongsheng Li
- Abstract summary: We propose a novel algorithm, namely Trajectory Q-Learning (TQL), for RSRL problems with provable convergence to the optimal policy.
Based on our new learning architecture, we are free to introduce a general and practical implementation for different risk measures to learn disparate risk-sensitive policies.
- Score: 32.42976780682353
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Because learning deployable policies requires managing risk,
risk-sensitive reinforcement learning (RSRL) has been recognized as an
important research direction. RSRL is usually achieved by learning risk-sensitive objectives
characterized by various risk measures, under the framework of distributional
reinforcement learning. However, it remains unclear if the distributional
Bellman operator properly optimizes the RSRL objective in the sense of risk
measures. In this paper, we prove that existing RSRL methods do not achieve
unbiased optimization and cannot guarantee optimality, or even improvement,
with respect to risk measures of the accumulated return distribution. To remedy this
issue, we further propose a novel algorithm, namely Trajectory Q-Learning
(TQL), for RSRL problems with provable convergence to the optimal policy. Based
on our new learning architecture, we are free to introduce a general and
practical implementation for different risk measures to learn disparate
risk-sensitive policies. In the experiments, we verify the learnability of our
algorithm and show that it achieves better performance on risk-sensitive
objectives.
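As a concrete illustration of the issue the paper targets, the sketch below (a minimal Monte-Carlo toy, not the paper's TQL implementation; the `cvar` helper and the sampled return arrays are assumptions for illustration) shows the per-step risk-greedy action rule used by prior distributional RL methods, which the paper argues is a biased proxy for the true objective: a risk measure of the whole-trajectory return distribution.

```python
# Minimal sketch (illustrative only, not the paper's TQL): prior methods
# apply the risk measure to per-step state-action return distributions,
# which this paper proves does not, in general, optimize the risk measure
# of the full trajectory-return distribution.
import numpy as np

def cvar(samples: np.ndarray, alpha: float) -> float:
    """CVaR_alpha: mean of the worst alpha-fraction of sampled returns."""
    sorted_returns = np.sort(samples)
    k = max(1, int(np.ceil(alpha * len(sorted_returns))))
    return float(sorted_returns[:k].mean())

def risk_greedy_action(return_samples_per_action: list, alpha: float) -> int:
    """Pick the action whose return distribution has the highest CVaR."""
    return int(np.argmax([cvar(z, alpha) for z in return_samples_per_action]))

rng = np.random.default_rng(0)
z0 = rng.normal(1.0, 0.5, size=1000)  # hypothetical returns for action 0
z1 = rng.normal(1.2, 2.0, size=1000)  # hypothetical returns for action 1
print(risk_greedy_action([z0, z1], alpha=0.1))  # -> 0 (risk-averse choice)
```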
Related papers
- Pessimism Meets Risk: Risk-Sensitive Offline Reinforcement Learning (2024-07-10) [19.292214425524303]
We study risk-sensitive reinforcement learning (RL), a crucial field due to its ability to enhance decision-making in scenarios where it is essential to manage uncertainty and minimize potential adverse outcomes.
Our work focuses on applying the entropic risk measure to RL problems.
We center on the linear Markov Decision Process (MDP) setting, a well-regarded theoretical framework that has yet to be examined from a risk-sensitive standpoint.
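For reference, the entropic risk measure studied there is rho_beta(X) = (1/beta) * log E[exp(beta * X)]; the sketch below is a minimal Monte-Carlo estimate (the sample array is illustrative, not from the paper).

```python
# Minimal sketch of the entropic risk measure: rho_beta(X) = (1/beta) *
# log E[exp(beta * X)]. On returns, beta < 0 is risk-averse, beta > 0 is
# risk-seeking, and beta -> 0 recovers the mean. Samples are illustrative.
import numpy as np
from scipy.special import logsumexp

def entropic_risk(returns: np.ndarray, beta: float) -> float:
    """Numerically stable Monte-Carlo estimate of the entropic risk."""
    return float((logsumexp(beta * returns) - np.log(len(returns))) / beta)

returns = np.random.default_rng(1).normal(loc=1.0, scale=1.0, size=10_000)
print(entropic_risk(returns, beta=-1.0))  # risk-averse: below the mean
print(entropic_risk(returns, beta=1e-8))  # ~ mean of the returns
```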
- Policy Gradient Methods for Risk-Sensitive Distributional Reinforcement Learning with Provable Convergence (2024-05-23) [15.720824593964027]
Risk-sensitive reinforcement learning (RL) is crucial for maintaining reliable performance in high-stakes applications.
This paper introduces a policy gradient method for risk-sensitive DRL with general coherent risk measures.
We also design a categorical distributional policy gradient algorithm (CDPG) based on categorical distributional policy evaluation and trajectory gradient estimation.
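Categorical distributional evaluation typically relies on a C51-style projection of the Bellman target onto a fixed atom support; the sketch below shows that projection step (an assumed representation, not the paper's CDPG code).

```python
# Minimal sketch of a C51-style categorical projection (an assumed
# representation, not the paper's CDPG code): the Bellman target
# r + gamma * z is projected back onto a fixed support of atoms.
import numpy as np

def categorical_projection(probs: np.ndarray, reward: float, gamma: float,
                           atoms: np.ndarray) -> np.ndarray:
    """Project (reward + gamma * atoms, probs) onto the fixed atom support,
    splitting each atom's mass between its two neighboring atoms."""
    v_min, v_max = atoms[0], atoms[-1]
    dz = atoms[1] - atoms[0]
    target = np.zeros_like(probs)
    tz = np.clip(reward + gamma * atoms, v_min, v_max)  # updated atom values
    b = (tz - v_min) / dz                               # fractional index
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    for j in range(len(atoms)):
        if lower[j] == upper[j]:            # lands exactly on an atom
            target[lower[j]] += probs[j]
        else:                               # split mass between neighbors
            target[lower[j]] += probs[j] * (upper[j] - b[j])
            target[upper[j]] += probs[j] * (b[j] - lower[j])
    return target

atoms = np.linspace(-10.0, 10.0, 51)        # C51-style default support
probs = np.full(51, 1 / 51)                 # uniform toy return estimate
print(categorical_projection(probs, 1.0, 0.99, atoms).sum())  # -> ~1.0
```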
- Risk-Sensitive RL with Optimized Certainty Equivalents via Reduction to Standard RL (2024-03-10) [48.1726560631463]
We study Risk-Sensitive Reinforcement Learning with the Optimized Certainty Equivalent (OCE) risk.
We propose two general meta-algorithms via reductions to standard RL, and show that they learn the optimal risk-sensitive policy in settings where prior algorithms provably fail.
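The OCE risk is defined as OCE_u(X) = sup_lambda { lambda + E[u(X - lambda)] } for a concave utility u; the sketch below estimates it from samples by grid search (illustrative code; the grid optimizer and the CVaR-recovering utility are standard facts, not taken from this paper).

```python
# Minimal sketch (illustrative, not this paper's code) of the Optimized
# Certainty Equivalent on return samples:
#   OCE_u(X) = sup_lambda { lambda + E[u(X - lambda)] }.
# With u(t) = -max(-t, 0) / alpha, OCE recovers CVaR_alpha
# (Rockafellar-Uryasev); the grid search is a crude stand-in for an
# exact optimizer.
import numpy as np

def oce(returns: np.ndarray, u, grid: np.ndarray) -> float:
    """Monte-Carlo OCE: maximize lambda + mean(u(X - lambda)) over a grid."""
    return float(max(lam + u(returns - lam).mean() for lam in grid))

alpha = 0.1
u_cvar = lambda t: -np.maximum(-t, 0.0) / alpha   # utility yielding CVaR
returns = np.random.default_rng(2).normal(0.0, 1.0, 20_000)
grid = np.linspace(returns.min(), returns.max(), 500)
print(oce(returns, u_cvar, grid))  # ~ CVaR_0.1 of N(0, 1), roughly -1.75
```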
- Provable Risk-Sensitive Distributional Reinforcement Learning with General Function Approximation (2024-02-28) [54.61816424792866]
We introduce a general framework on Risk-Sensitive Distributional Reinforcement Learning (RS-DisRL), with static Lipschitz Risk Measures (LRM) and general function approximation.
We design two innovative meta-algorithms: RS-DisRL-M, a model-based strategy for model-based function approximation, and RS-DisRL-V, a model-free approach for general value function approximation.
- Distributional Reinforcement Learning with Online Risk-awareness Adaption (2023-10-08) [5.363478475460403]
We introduce a novel framework, Distributional RL with Online Risk Adaption (DRL-ORA).
DRL-ORA dynamically selects the epistemic risk levels via solving a total variation minimization problem online.
We show multiple classes of tasks where DRL-ORA outperforms existing methods that rely on a fixed or manually predetermined risk level.
- Provably Efficient Iterated CVaR Reinforcement Learning with Function Approximation and Human Feedback (2023-07-06) [57.6775169085215]
Risk-sensitive reinforcement learning aims to optimize policies that balance the expected reward and risk.
We present a novel framework that employs an Iterated Conditional Value-at-Risk (CVaR) objective under both linear and general function approximations.
We propose provably sample-efficient algorithms for this Iterated CVaR RL and provide rigorous theoretical analysis.
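The Iterated CVaR objective applies CVaR at every Bellman backup rather than once on the whole-trajectory return; the sketch below runs that backup on a toy tabular MDP (the MDP, helper names, and value-iteration loop are assumptions for illustration; the paper itself works with function approximation).

```python
# Minimal sketch of the Iterated CVaR Bellman backup on a toy tabular MDP
# (the paper's setting uses function approximation; this MDP and the
# helpers are illustrative assumptions). At *every* step the next-state
# value is aggregated by CVaR over transition randomness.
import numpy as np

def discrete_cvar(values, probs, alpha: float) -> float:
    """CVaR_alpha of a finite distribution: mean of the worst alpha-mass."""
    order = np.argsort(values)
    v, p = values[order], probs[order]
    before = np.cumsum(p) - p                    # mass strictly below atom
    tail = np.minimum(p, np.maximum(0.0, alpha - before))
    return float(tail @ v / alpha)

def iterated_cvar_vi(P, R, gamma, alpha, iters=200):
    """Value iteration with the Iterated CVaR Bellman operator.
    P: (S, A, S) transition probabilities; R: (S, A) rewards."""
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.array([[R[s, a] + gamma * discrete_cvar(V, P[s, a], alpha)
                       for a in range(A)] for s in range(S)])
        V = Q.max(axis=1)
    return V

# Toy 2-state MDP: action 1 has a higher reward but may enter a bad state.
P = np.array([[[1.0, 0.0], [0.7, 0.3]],
              [[1.0, 0.0], [1.0, 0.0]]])
R = np.array([[1.0, 2.0], [0.0, 0.0]])
print(iterated_cvar_vi(P, R, gamma=0.9, alpha=0.2))
```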
- Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning (2022-12-14) [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
- Efficient Risk-Averse Reinforcement Learning (2022-05-10) [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that, under certain conditions, directly optimizing such a risk measure inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
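One plausible reading of the soft risk mechanism (an assumption based on this abstract, not the authors' exact scheduler) is a risk level that starts risk-neutral and anneals toward the target, so early training is not trapped behind the barrier:

```python
# Minimal sketch of a "soft risk" schedule (one plausible reading of the
# abstract, not the authors' exact mechanism): train with a CVaR level
# that starts risk-neutral (alpha = 1) and anneals to the target alpha,
# so early gradients are not trapped behind the local-optimum barrier.
def soft_risk_level(step: int, total_steps: int, target_alpha: float,
                    warmup_frac: float = 0.5) -> float:
    """Linearly anneal the CVaR level from 1.0 to target_alpha, then hold."""
    warmup = max(1, int(warmup_frac * total_steps))
    frac = min(step / warmup, 1.0)
    return 1.0 + frac * (target_alpha - 1.0)

for step in (0, 2_500, 5_000, 10_000):
    print(step, round(soft_risk_level(step, 10_000, target_alpha=0.1), 3))
# -> 1.0, 0.55, 0.1, 0.1
```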
- Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning (2021-03-18) [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that are not present during training.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.