Policy Gradient for Reinforcement Learning with General Utilities
- URL: http://arxiv.org/abs/2210.00991v2
- Date: Tue, 29 Aug 2023 09:23:24 GMT
- Title: Policy Gradient for Reinforcement Learning with General Utilities
- Authors: Navdeep Kumar, Kaixin Wang, Kfir Levy, Shie Mannor
- Abstract summary: In Reinforcement Learning (RL), the goal of agents is to discover an optimal policy that maximizes the expected cumulative rewards.
Many supervised and unsupervised RL problems are not covered by the Linear RL framework.
We derive the policy gradient theorem for RL with general utilities.
- Score: 50.65940899590487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Reinforcement Learning (RL), the goal of agents is to discover an optimal
policy that maximizes the expected cumulative rewards. This objective may also
be viewed as finding a policy that optimizes a linear function of its
state-action occupancy measure, hereafter referred to as Linear RL. However, many
supervised and unsupervised RL problems are not covered by the Linear RL
framework, such as apprenticeship learning, pure exploration and variational
intrinsic control, where the objectives are non-linear functions of the
occupancy measures. RL with non-linear utilities looks unwieldy, as methods
like the Bellman equation, value iteration, policy gradient, and dynamic
programming, which had tremendous success in Linear RL, fail to trivially
generalize. In this
paper, we derive the policy gradient theorem for RL with general utilities. The
policy gradient theorem proves to be a cornerstone in Linear RL due to its
elegance and ease of implementability. Our policy gradient theorem for RL with
general utilities shares the same elegance and ease of implementability. Based
on the policy gradient theorem derived, we also present a simple sample-based
algorithm. We believe our results will be of interest to the community and
offer inspiration to future works in this generalized setting.
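To make the setting concrete, here is a brief sketch of how the general-utility objective and its gradient are typically written in this line of work; the notation below is illustrative and not quoted from the paper. Let $\lambda^{\pi_\theta}(s,a)$ denote the (discounted) state-action occupancy measure induced by a parameterized policy $\pi_\theta$, and let $F$ be a differentiable, possibly non-linear utility. Then

$$ \max_{\theta}\; J(\theta) = F\big(\lambda^{\pi_\theta}\big), \qquad \nabla_\theta J(\theta) = \big(\nabla_\theta \lambda^{\pi_\theta}\big)^{\top}\, \nabla_\lambda F\big(\lambda^{\pi_\theta}\big). $$

When $F(\lambda) = \langle r, \lambda \rangle$ for a fixed reward vector $r$, the right-hand side reduces to the classical policy gradient theorem of Linear RL. For non-linear $F$ (e.g., an entropy of the occupancy measure in pure exploration), the term $\nabla_\lambda F(\lambda^{\pi_\theta})$ acts as a policy-dependent pseudo-reward that a sample-based method must estimate.

The following Python sketch shows one way a sample-based step of this kind could look under simplifying assumptions: a tabular MDP, a softmax policy, a hypothetical `env` with `reset()`/`step()` returning `(next_state, done)`, and a user-supplied `utility_grad` for $\nabla_\lambda F$. It is an illustrative REINFORCE-style estimator, not the algorithm from the paper.

```python
import numpy as np

def occupancy_estimate(trajectories, n_states, n_actions, gamma):
    """Monte Carlo estimate of the discounted state-action occupancy measure."""
    lam = np.zeros((n_states, n_actions))
    for traj in trajectories:
        for t, (s, a) in enumerate(traj):
            lam[s, a] += (1.0 - gamma) * gamma ** t
    return lam / len(trajectories)

def general_utility_pg_step(theta, env, utility_grad, gamma=0.99,
                            n_traj=32, horizon=100, lr=0.1):
    """One REINFORCE-style step that treats grad F(lambda) as a pseudo-reward.

    theta:        (n_states, n_actions) softmax policy logits.
    env:          assumed tabular environment with reset() -> state and
                  step(action) -> (next_state, done).
    utility_grad: maps an occupancy estimate to dF/dlambda, e.g.
                  lambda lam: -np.log(lam + 1e-8) - 1.0 for an entropy utility.
    """
    n_states, n_actions = theta.shape

    def policy(s):
        z = np.exp(theta[s] - theta[s].max())
        return z / z.sum()

    # 1) Roll out the current policy.
    trajectories = []
    for _ in range(n_traj):
        s, traj = env.reset(), []
        for _ in range(horizon):
            a = np.random.choice(n_actions, p=policy(s))
            traj.append((s, a))
            s, done = env.step(a)
            if done:
                break
        trajectories.append(traj)

    # 2) Estimate the occupancy measure and evaluate the pseudo-reward
    #    r_tilde = dF/dlambda, held fixed below as prescribed by the chain rule.
    lam_hat = occupancy_estimate(trajectories, n_states, n_actions, gamma)
    r_tilde = utility_grad(lam_hat)

    # 3) Standard REINFORCE update with r_tilde in place of a true reward.
    grad = np.zeros_like(theta)
    for traj in trajectories:
        ret = sum(gamma ** t * r_tilde[s, a] for t, (s, a) in enumerate(traj))
        for s, a in traj:
            score = -policy(s)          # d log pi(a|s) / d theta[s, :]
            score[a] += 1.0
            grad[s] += score * ret
    return theta + lr * grad / n_traj
```

The only change from vanilla REINFORCE is step 2: the reward is replaced by the gradient of the utility evaluated at an estimate of the current occupancy measure, which is exactly the quantity the chain-rule expression above requires.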
Related papers
- Beyond Expected Returns: A Policy Gradient Algorithm for Cumulative Prospect Theoretic Reinforcement Learning [0.46040036610482665]
Cumulative Prospect Theory (CPT) has been developed to provide a better model for human-based decision-making supported by empirical evidence.
A few years ago, CPT was combined with Reinforcement Learning (RL) to formulate a CPT policy optimization problem.
We show that our policy gradient algorithm scales better to larger state spaces compared to the existing zeroth order algorithm for solving the same problem.
arXiv Detail & Related papers (2024-10-03T15:45:39Z)
- ERL-Re$^2$: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation [31.9768280877473]
We propose Evolutionary Reinforcement Learning with Two-scale State Representation and Policy Representation (ERL-Re$^2$).
All EA and RL policies share the same nonlinear state representation while maintaining individual linear policy representations.
Experiments on a range of continuous control tasks show that ERL-Re$^2$ consistently outperforms advanced baselines and achieves state-of-the-art (SOTA) performance.
arXiv Detail & Related papers (2022-10-26T10:34:48Z)
- LCRL: Certified Policy Synthesis via Logically-Constrained Reinforcement Learning [78.2286146954051]
LCRL implements model-free Reinforcement Learning (RL) algorithms over unknown Markov Decision Processes (MDPs).
We present case studies to demonstrate the applicability, ease of use, scalability, and performance of LCRL.
arXiv Detail & Related papers (2022-09-21T13:21:00Z)
- Jump-Start Reinforcement Learning [68.82380421479675]
We present a meta algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy.
In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks.
We show via experiments that JSRL is able to significantly outperform existing imitation and reinforcement learning algorithms.
arXiv Detail & Related papers (2022-04-05T17:25:22Z)
- Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence [60.20076757208645]
This paper proposes a general policy mirror descent (GPMD) algorithm for solving regularized RL.
We demonstrate that our algorithm converges linearly over an entire range of learning rates, in a dimension-free fashion, to the global solution.
arXiv Detail & Related papers (2021-05-24T02:21:34Z)
- Provably Correct Optimization and Exploration with Non-linear Policies [65.60853260886516]
ENIAC is an actor-critic method that allows non-linear function approximation in the critic.
We show that under certain assumptions, the learner finds a near-optimal policy in $O(poly(d))$ exploration rounds.
We empirically evaluate this adaptation and show that it outperforms prior approaches inspired by linear methods.
arXiv Detail & Related papers (2021-03-22T03:16:33Z)
- Variational Policy Gradient Method for Reinforcement Learning with General Utilities [38.54243339632217]
In recent years, reinforcement learning systems with general goals beyond a cumulative sum of rewards have gained traction.
In this paper, we consider policy optimization in Markov Decision Problems, where the objective is a general concave utility function of the state-action occupancy measure.
We derive a new Variational Policy Gradient Theorem for RL with general utilities.
arXiv Detail & Related papers (2020-07-04T17:51:53Z)
- When Will Generative Adversarial Imitation Learning Algorithms Attain Global Convergence [56.40794592158596]
We study generative adversarial imitation learning (GAIL) under general MDPs and nonlinear reward function classes.
This is the first systematic theoretical study of GAIL for global convergence.
arXiv Detail & Related papers (2020-06-24T06:24:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.