Offline Q-Learning on Diverse Multi-Task Data Both Scales And
Generalizes
- URL: http://arxiv.org/abs/2211.15144v2
- Date: Mon, 17 Apr 2023 18:45:23 GMT
- Title: Offline Q-Learning on Diverse Multi-Task Data Both Scales And
Generalizes
- Authors: Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, Sergey
Levine
- Abstract summary: Offline Q-learning algorithms exhibit strong performance that scales with model capacity.
We train a single policy on 40 games with near-human performance using up to 80-million-parameter networks.
Compared to return-conditioned supervised approaches, offline Q-learning scales similarly with model capacity and has better performance, especially when the dataset is suboptimal.
- Score: 100.69714600180895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The potential of offline reinforcement learning (RL) is that high-capacity
models trained on large, heterogeneous datasets can lead to agents that
generalize broadly, analogously to similar advances in vision and NLP. However,
recent works argue that offline RL methods encounter unique challenges to
scaling up model capacity. Drawing on the learnings from these works, we
re-examine previous design choices and find that with appropriate choices:
ResNets, cross-entropy-based distributional backups, and feature normalization,
offline Q-learning algorithms exhibit strong performance that scales with model
capacity. Using multi-task Atari as a testbed for scaling and generalization,
we train a single policy on 40 games with near-human performance using up to
80-million-parameter networks, finding that model performance scales favorably
with capacity. In contrast to prior work, we extrapolate beyond dataset
performance even when trained entirely on a large (400M transitions) but highly
suboptimal dataset (51% human-level performance). Compared to
return-conditioned supervised approaches, offline Q-learning scales similarly
with model capacity and has better performance, especially when the dataset is
suboptimal. Finally, we show that offline Q-learning with a diverse dataset is
sufficient to learn powerful representations that facilitate rapid transfer to
novel games and fast online learning on new variations of a training game,
improving over existing state-of-the-art representation learning approaches.
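As a concrete illustration of the design choices named above, the sketch below shows a C51-style categorical (distributional) Bellman backup trained with a cross-entropy loss, together with L2 feature normalization of the penultimate layer. This is a minimal toy example under assumed settings (a small MLP encoder, a 51-atom support on [-10, 10], discrete actions), not the authors' implementation, which applies these components with ResNet encoders on Atari at much larger scale.

```python
# Illustrative sketch only: cross-entropy distributional backup + feature
# normalization, two of the design choices named in the abstract. Network
# sizes, the support range, and hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ATOMS, V_MIN, V_MAX = 51, -10.0, 10.0           # assumed support for the value distribution
SUPPORT = torch.linspace(V_MIN, V_MAX, NUM_ATOMS)    # fixed atoms z_1 ... z_N
DELTA_Z = (V_MAX - V_MIN) / (NUM_ATOMS - 1)


class DistributionalQNet(nn.Module):
    """Small MLP head for illustration; the paper uses ResNet encoders on Atari frames."""

    def __init__(self, obs_dim: int, num_actions: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, num_actions * NUM_ATOMS)
        self.num_actions = num_actions

    def forward(self, obs):
        feat = self.encoder(obs)
        # Feature normalization: rescale penultimate features to unit L2 norm.
        feat = feat / (feat.norm(dim=-1, keepdim=True) + 1e-6)
        logits = self.head(feat).view(-1, self.num_actions, NUM_ATOMS)
        return logits                                 # per-action categorical logits


def categorical_backup_loss(net, target_net, obs, act, rew, next_obs, done, gamma=0.99):
    """Cross-entropy loss against the projected categorical Bellman target (C51-style)."""
    batch = torch.arange(obs.size(0))
    with torch.no_grad():
        next_probs = F.softmax(target_net(next_obs), dim=-1)        # (B, A, N)
        next_q = (next_probs * SUPPORT).sum(dim=-1)                 # expected value per action
        best_a = next_q.argmax(dim=-1)                              # greedy next action
        next_dist = next_probs[batch, best_a]                       # (B, N)

        # Bellman-shift each atom and project back onto the fixed support.
        tz = (rew.unsqueeze(-1)
              + gamma * (1.0 - done.unsqueeze(-1)) * SUPPORT).clamp(V_MIN, V_MAX)
        b = (tz - V_MIN) / DELTA_Z
        lower, upper = b.floor().long(), b.ceil().long()
        target = torch.zeros_like(next_dist)
        target.scatter_add_(1, lower, next_dist * (upper.float() - b))
        target.scatter_add_(1, upper, next_dist * (b - lower.float()))
        # If an atom lands exactly on the support, both weights above are zero:
        # place the full probability mass on that atom.
        target.scatter_add_(1, lower, next_dist * (lower == upper).float())

    log_probs = F.log_softmax(net(obs), dim=-1)                     # (B, A, N)
    chosen = log_probs[batch, act]                                  # (B, N)
    return -(target * chosen).sum(dim=-1).mean()                    # cross-entropy backup
```

Replacing the squared-error regression of standard Q-learning with this cross-entropy objective is one of the choices the abstract credits, alongside ResNets and feature normalization, for performance that scales with model capacity.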
Related papers
- Q-SFT: Q-Learning for Language Models via Supervised Fine-Tuning [62.984693936073974]
Value-based reinforcement learning can learn effective policies for a wide range of multi-turn problems.
Current value-based RL methods have proven particularly challenging to scale to the setting of large language models.
We propose a novel offline RL algorithm that addresses these drawbacks, casting Q-learning as a modified supervised fine-tuning problem.
arXiv Detail & Related papers (2024-11-07T21:36:52Z)
- Scaling Offline Model-Based RL via Jointly-Optimized World-Action Model Pretraining [49.730897226510095]
We introduce JOWA: Jointly-Optimized World-Action model, an offline model-based RL agent pretrained on Atari games with 6 billion tokens of data.
Our largest agent, with 150 million parameters, achieves 78.9% human-level performance on pretrained games using only 10% subsampled offline data, outperforming existing state-of-the-art large-scale offline RL baselines by 31.6% on average.
arXiv Detail & Related papers (2024-10-01T10:25:03Z)
- Tackling Long-Horizon Tasks with Model-based Offline Reinforcement Learning [6.345851712811528]
We introduce a novel model-based offline RL method, Lower Expectile Q-learning (LEQ), which enhances long-horizon task performance (a sketch of the expectile loss such methods build on follows this list).
Our empirical results show that LEQ significantly outperforms previous model-based offline RL methods on long-horizon tasks.
LEQ achieves performance comparable to the state-of-the-art model-based and model-free offline RL methods on the NeoRL benchmark and the D4RL MuJoCo Gym tasks.
arXiv Detail & Related papers (2024-06-30T13:44:59Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Small Dataset, Big Gains: Enhancing Reinforcement Learning by Offline Pre-Training with Model Based Augmentation [59.899714450049494]
Offline pre-training can produce sub-optimal policies and lead to degraded online reinforcement learning performance.
We propose a model-based data augmentation strategy to maximize the benefits of offline reinforcement learning pre-training and reduce the scale of data needed to be effective.
arXiv Detail & Related papers (2023-12-15T14:49:41Z)
- Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding [9.112203072394648]
Power-law scaling indicates that large-scale training with uniform sampling is prohibitively slow.
Active learning methods aim to increase data efficiency by prioritizing learning on the most relevant examples.
arXiv Detail & Related papers (2023-12-08T19:26:13Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
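As promised in the LEQ entry above, the snippet below sketches the generic asymmetric expectile regression loss that lower-expectile methods of this kind rely on: with tau < 0.5 it penalizes over-estimation more than under-estimation, so minimizing it yields a conservative (lower-expectile) value estimate. This is a generic illustration, not code from any of the papers listed; the tensors and tau value are placeholder assumptions.

```python
# Generic expectile regression loss, used (in various forms) by expectile-based
# offline RL methods such as Lower Expectile Q-learning. Illustrative sketch only.
import torch


def expectile_loss(pred: torch.Tensor, target: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Asymmetric squared error. With tau < 0.5, errors where the prediction
    exceeds the target are weighted more heavily, so the minimizer is a lower
    expectile of the target distribution (a conservative estimate)."""
    diff = target - pred
    weight = torch.where(diff > 0,
                         torch.full_like(diff, tau),
                         torch.full_like(diff, 1.0 - tau))
    return (weight * diff.pow(2)).mean()


# Toy usage: the gradient pushes `pred` toward a conservative estimate of `target`.
pred = torch.zeros(4, requires_grad=True)
target = torch.tensor([1.0, 2.0, -1.0, 0.5])
expectile_loss(pred, target, tau=0.1).backward()
```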
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.