Shattering the Agent-Environment Interface for Fine-Tuning Inclusive
Language Models
- URL: http://arxiv.org/abs/2305.11455v1
- Date: Fri, 19 May 2023 06:21:15 GMT
- Title: Shattering the Agent-Environment Interface for Fine-Tuning Inclusive
Language Models
- Authors: Wanqiao Xu, Shi Dong, Dilip Arumugam, Benjamin Van Roy
- Abstract summary: In this work, we adopt a novel perspective wherein a pre-trained language model is itself simultaneously a policy, reward function, and transition function.
An immediate consequence of this is that reward learning and language model fine-tuning can be performed jointly and directly, without requiring any further downstream policy optimization.
- Score: 24.107358120517336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A centerpiece of the ever-popular reinforcement learning from human feedback
(RLHF) approach to fine-tuning autoregressive language models is the explicit
training of a reward model to emulate human feedback, distinct from the
language model itself. This reward model is then coupled with policy-gradient
methods to dramatically improve the alignment between language model outputs
and desired responses. In this work, we adopt a novel perspective wherein a
pre-trained language model is itself simultaneously a policy, reward function,
and transition function. An immediate consequence of this is that reward
learning and language model fine-tuning can be performed jointly and directly,
without requiring any further downstream policy optimization. While this
perspective does indeed break the traditional agent-environment interface, we
nevertheless maintain that there can be enormous statistical benefits afforded
by bringing to bear traditional algorithmic concepts from reinforcement
learning. Our experiments demonstrate one concrete instance of this through
efficient exploration based on the representation and resolution of epistemic
uncertainty. In order to illustrate these ideas in a transparent manner, we
restrict attention to a simple didactic data generating process and leave for
future work extension to systems of practical scale.
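Illustrative sketch (not from the paper): the abstract's central idea is that one pre-trained network can serve simultaneously as policy (next-token distribution), reward function, and transition function, and that exploration can be driven by representing and resolving epistemic uncertainty. The toy code below is a minimal, assumption-laden rendering of that idea: a small GRU stands in for a transformer language model, an ensemble of reward heads is one common way to represent epistemic uncertainty, and Thompson-style sampling of a head per episode is one way to resolve it. All names, sizes, and the ensemble/Thompson choices are hypothetical and are not claimed to match the authors' method.

```python
# Hypothetical sketch: a single toy autoregressive model acting as policy
# (next-token logits), reward model (ensemble of heads), and transition
# function (deterministic token appends). Not the paper's implementation.
import torch
import torch.nn as nn

VOCAB, DIM, N_HEADS = 32, 64, 5  # toy sizes; N_HEADS = reward-ensemble size

class InclusiveLM(nn.Module):
    """One network playing policy, reward function, and transition function."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)   # stand-in for a transformer
        self.lm_head = nn.Linear(DIM, VOCAB)            # policy: next-token logits
        self.reward_heads = nn.ModuleList(               # ensemble -> epistemic uncertainty
            [nn.Linear(DIM, 1) for _ in range(N_HEADS)])

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))              # (B, T, DIM)
        return self.lm_head(h), h

    def rewards(self, h_last):
        # One scalar reward per ensemble member for the final hidden state.
        return torch.stack(
            [head(h_last).squeeze(-1) for head in self.reward_heads], dim=-1)

@torch.no_grad()
def thompson_generate(model, prompt, max_new=8):
    """Sample one reward head for the episode, then roll out the policy.

    Transitions are just token appends, so the language model itself supplies
    the 'environment' dynamics -- the agent-environment interface collapses."""
    k = torch.randint(N_HEADS, (1,)).item()               # sampled belief over rewards
    tokens = prompt.clone()
    for _ in range(max_new):
        logits, _ = model(tokens)
        probs = torch.softmax(logits[:, -1], dim=-1)
        nxt = torch.multinomial(probs, 1)                  # act with the policy
        tokens = torch.cat([tokens, nxt], dim=1)           # "transition": append the token
    _, h = model(tokens)
    return tokens, model.rewards(h[:, -1])[:, k]           # reward under the sampled head

if __name__ == "__main__":
    model = InclusiveLM()
    prompt = torch.randint(VOCAB, (1, 4))
    out, r = thompson_generate(model, prompt)
    print("generated:", out.tolist(), "sampled-head reward:", r.item())
```

Because the reward heads and the language-model head share one backbone, a joint fine-tuning objective could update both at once, which is the sense in which reward learning and fine-tuning can proceed without a separate downstream policy-optimization stage.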