Reward-Conditioned Policies
- URL: http://arxiv.org/abs/1912.13465v1
- Date: Tue, 31 Dec 2019 18:07:43 GMT
- Title: Reward-Conditioned Policies
- Authors: Aviral Kumar, Xue Bin Peng, Sergey Levine
- Abstract summary: Imitation learning requires near-optimal expert data.
Can we learn effective policies via supervised learning without demonstrations?
We show how such an approach can be derived as a principled method for policy search.
- Score: 100.64167842905069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning offers the promise of automating the acquisition of
complex behavioral skills. However, compared to commonly used and
well-understood supervised learning methods, reinforcement learning algorithms
can be brittle, difficult to use and tune, and sensitive to seemingly innocuous
implementation decisions. In contrast, imitation learning utilizes standard and
well-understood supervised learning methods, but requires near-optimal expert
data. Can we learn effective policies via supervised learning without
demonstrations? The main idea that we explore in this work is that non-expert
trajectories collected from sub-optimal policies can be viewed as optimal
supervision, not for maximizing the reward, but for matching the reward of the
given trajectory. By then conditioning the policy on the numerical value of the
reward, we can obtain a policy that generalizes to larger returns. We show how
such an approach can be derived as a principled method for policy search,
discuss several variants, and compare the method experimentally to a variety of
current reinforcement learning methods on standard benchmarks.
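Below is a minimal sketch of the main idea described in the abstract: a policy conditioned on a target return and trained with a standard supervised objective on trajectories from a sub-optimal behavior policy. All names, network sizes, and the dummy data are illustrative assumptions, not the authors' reference implementation.
```python
# Minimal sketch of a reward-conditioned policy (hypothetical names; not the
# authors' code). The policy takes the state together with a scalar target
# return and is trained by supervised learning to reproduce the actions of
# whatever trajectory achieved that return. At evaluation time it is
# conditioned on a larger target return than those seen during training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardConditionedPolicy(nn.Module):
    def __init__(self, state_dim, num_actions, hidden=64):
        super().__init__()
        # Input is the state concatenated with the scalar target return.
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state, target_return):
        x = torch.cat([state, target_return.unsqueeze(-1)], dim=-1)
        return self.net(x)  # action logits

# Dummy batch of (state, action, trajectory return) tuples standing in for
# data collected by a sub-optimal behavior policy.
state_dim, num_actions, batch = 4, 2, 32
states = torch.randn(batch, state_dim)
actions = torch.randint(0, num_actions, (batch,))
returns = torch.randn(batch)  # return of the trajectory each sample came from

policy = RewardConditionedPolicy(state_dim, num_actions)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(100):
    # Supervised objective: each observed action is treated as optimal
    # supervision for matching the return of its own trajectory, so we
    # condition on that return and maximize the observed action's likelihood.
    logits = policy(states, returns)
    loss = F.cross_entropy(logits, actions)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time, condition on an optimistic target return to ask the policy
# to generalize toward higher-reward behavior.
test_state = torch.randn(1, state_dim)
high_return = (returns.max() + 1.0).reshape(1)
action = policy(test_state, high_return).argmax(dim=-1)
```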