Efficient Multi-Task and Transfer Reinforcement Learning with Parameter-Compositional Framework
- URL: http://arxiv.org/abs/2306.01839v1
- Date: Fri, 2 Jun 2023 18:00:33 GMT
- Title: Efficient Multi-Task and Transfer Reinforcement Learning with Parameter-Compositional Framework
- Authors: Lingfeng Sun, Haichao Zhang, Wei Xu, Masayoshi Tomizuka
- Abstract summary: We investigate the potential of improving multi-task training and of leveraging it for transfer in the reinforcement learning setting.
We propose a transfer approach with a parameter-compositional formulation.
Experimental results demonstrate that the proposed approach achieves improved performance in the multi-task training stage.
- Score: 44.43196786555784
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we investigate the potential of improving multi-task training and of leveraging it for transfer in the reinforcement learning setting. We identify several challenges towards this goal and propose a transfer approach with a parameter-compositional formulation. We first investigate ways to improve multi-task reinforcement learning training, which serves as the foundation for transfer, and then conduct a number of transfer experiments on various manipulation tasks. Experimental results demonstrate that the proposed approach achieves improved performance in the multi-task training stage, and further show effective transfer in terms of both sample efficiency and final performance.
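The abstract describes the parameter-compositional formulation only at a high level. In the authors' earlier PaCo formulation, which this work builds on, a shared parameter set Phi holds K base parameter vectors and task i's policy parameters are composed as theta_i = Phi w_i, so transfer can reuse Phi and train only a new compositional vector. A minimal PyTorch sketch under assumed layer sizes and names, not the authors' implementation:

```python
import torch
import torch.nn as nn

class ParamCompositionalPolicy(nn.Module):
    """Minimal sketch of a parameter-compositional policy.

    A shared parameter set Phi holds K base parameter vectors; task i's
    policy uses composed parameters theta_i = Phi @ w_i with a learnable
    task-specific vector w_i. Layer sizes and init are illustrative.
    """

    def __init__(self, num_tasks, num_bases, obs_dim, act_dim, hidden=64):
        super().__init__()
        # Shapes of a small two-layer MLP policy, flattened into one vector.
        self.shapes = [(hidden, obs_dim), (hidden,), (act_dim, hidden), (act_dim,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        self.phi = nn.Parameter(0.05 * torch.randn(n_params, num_bases))  # shared
        self.w = nn.Parameter(0.05 * torch.randn(num_tasks, num_bases))   # per task

    def forward(self, obs, task_id):
        theta = self.phi @ self.w[task_id]       # compose task parameters
        chunks, offset = [], 0
        for shape in self.shapes:                # unflatten into layer tensors
            n = torch.Size(shape).numel()
            chunks.append(theta[offset:offset + n].view(shape))
            offset += n
        w1, b1, w2, b2 = chunks
        h = torch.tanh(obs @ w1.T + b1)
        return h @ w2.T + b2                     # action mean / logits
```

Under this factorization, transferring to a new task can amount to freezing `phi` and optimizing a fresh row of `w`, which is one plausible reading of the sample-efficiency gains reported above.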
Related papers
- Exploring the Effectiveness and Consistency of Task Selection in Intermediate-Task Transfer Learning [21.652389166495407]
We show that the transfer performance exhibits severe variance across different source tasks and training seeds.
Compared to embedding-free methods and text embeddings, task embeddings constructed from fine-tuned weights can better estimate task transferability.
We introduce a novel method that measures pairwise token similarity using maximum inner product search, leading to the highest performance in task prediction (see the sketch below).
arXiv Detail & Related papers (2024-07-23T07:31:43Z)
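As a rough illustration of the token-level similarity idea above: embed each task's examples into token vectors, then score task pairs by maximum inner product search over tokens. The function below is an assumed reading of the summary, not the paper's procedure.

```python
import numpy as np

def mips_task_similarity(tokens_a: np.ndarray, tokens_b: np.ndarray) -> float:
    """Score task similarity from token embedding matrices (n_tokens, dim).

    For each token of task A, take the maximum inner product over task B's
    tokens, then average. An illustrative assumption, not the paper's method.
    """
    scores = tokens_a @ tokens_b.T           # all pairwise inner products
    return float(scores.max(axis=1).mean())  # best match per token, averaged

# Hypothetical usage with random 32-dim token embeddings for two tasks.
rng = np.random.default_rng(0)
sim = mips_task_similarity(rng.normal(size=(100, 32)), rng.normal(size=(80, 32)))
print(f"task similarity: {sim:.3f}")
```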
- PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning [28.353530290015794]
We propose PEMT, a novel parameter-efficient fine-tuning framework based on multi-task transfer learning.
We conduct experiments on a broad range of tasks over 17 datasets.
arXiv Detail & Related papers (2024-02-23T03:59:18Z)
- Distill Knowledge in Multi-task Reinforcement Learning with Optimal-Transport Regularization [0.24475591916185496]
In multi-task reinforcement learning, the data efficiency of training agents can be improved by transferring knowledge from different but related tasks.
Traditional methods rely on Kullback-Leibler regularization to stabilize the transfer of knowledge from one task to the others.
This work explores replacing the Kullback-Leibler divergence with a novel optimal-transport-based regularization (see the sketch below).
arXiv Detail & Related papers (2023-09-27T12:06:34Z)
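To make the substitution concrete: where a KL term pulls one policy toward another, an entropy-regularized optimal-transport (Sinkhorn) distance can play the same role while respecting a ground cost between actions. A generic sketch, not the paper's exact regularizer:

```python
import torch

def sinkhorn_distance(p, q, cost, eps=0.1, iters=50):
    """Entropy-regularized OT distance between discrete distributions p and q.

    Standard Sinkhorn iterations; `cost` is the ground cost between supports.
    """
    K = torch.exp(-cost / eps)              # Gibbs kernel from the cost matrix
    u = torch.ones_like(p)
    for _ in range(iters):                  # alternate scaling updates
        v = q / (K.T @ u)
        u = p / (K @ v)
    plan = u[:, None] * K * v[None, :]      # approximate transport plan
    return (plan * cost).sum()

# Hypothetical usage: regularize a student policy toward a teacher policy over
# 4 discrete actions, with unit cost between distinct actions.
n = 4
cost = 1.0 - torch.eye(n)
teacher = torch.softmax(torch.randn(n), dim=0)
student = torch.softmax(torch.randn(n), dim=0)
ot_reg = sinkhorn_distance(student, teacher, cost)    # OT-based penalty
kl_reg = (student * (student / teacher).log()).sum()  # the traditional KL term
print(ot_reg.item(), kl_reg.item())
```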
- An Exploration of Data Efficiency in Intra-Dataset Task Transfer for Dialog Understanding [65.75873687351553]
This study explores the effects of varying quantities of target-task training data on sequential transfer learning in the dialog domain.
Counterintuitively, the data show that the size of the target-task training set often has minimal effect on how sequential transfer learning performs compared to the same model without transfer learning.
arXiv Detail & Related papers (2022-10-21T04:36:46Z)
- Effective Adaptation in Multi-Task Co-Training for Unified Autonomous Driving [103.745551954983]
In this paper, we investigate the transfer performance of various types of self-supervised methods, including MoCo and SimCLR, on three downstream tasks.
We find that their performance is sub-optimal or even lags far behind the single-task baseline.
We propose a simple yet effective pretrain-adapt-finetune paradigm for general multi-task training (a toy sketch follows below).
arXiv Detail & Related papers (2022-09-19T12:15:31Z)
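A toy rendering of a three-stage pretrain-adapt-finetune recipe as named above; the model, data, losses, and learning rates are stand-in assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

backbone = nn.Linear(8, 16)                                # shared encoder
heads = nn.ModuleList(nn.Linear(16, 1) for _ in range(3))  # one head per task

def stage(params, loss_fn, steps, lr):
    """Run one training stage with its own optimizer and learning rate."""
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn().backward()
        opt.step()

x = torch.randn(32, 8)                            # toy inputs
targets = [torch.randn(32, 1) for _ in range(3)]  # toy per-task labels

# 1) Pretrain the backbone alone (stand-in for a self-supervised objective).
stage(backbone.parameters(), lambda: backbone(x).pow(2).mean(), 100, 1e-3)
# 2) Adapt: keep training only the backbone on target-domain inputs.
stage(backbone.parameters(), lambda: backbone(x).abs().mean(), 50, 1e-4)
# 3) Finetune backbone and all task heads jointly on the multi-task loss.
stage(list(backbone.parameters()) + list(heads.parameters()),
      lambda: sum(nn.functional.mse_loss(h(torch.relu(backbone(x))), t)
                  for h, t in zip(heads, targets)), 50, 1e-5)
```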
- On Transferability of Prompt Tuning for Natural Language Understanding [63.29235426932978]
We investigate the transferability of soft prompts across different tasks and models.
We find that trained soft prompts transfer well to similar tasks and can initialize prompt tuning for them to accelerate training and improve performance.
Our findings show that improving prompt tuning with knowledge transfer is possible and promising, and that prompts' cross-task transferability is generally better than their cross-model transferability (see the sketch below).
arXiv Detail & Related papers (2021-11-12T13:39:28Z)
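A minimal sketch of the transfer described above: a soft prompt trained on a similar source task warm-starts the target task's prompt before tuning. Shapes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable soft prompt prepended to input token embeddings."""

    def __init__(self, prompt_len: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(0.02 * torch.randn(prompt_len, embed_dim))

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim) from a frozen backbone.
        p = self.prompt.unsqueeze(0).expand(token_embeds.shape[0], -1, -1)
        return torch.cat([p, token_embeds], dim=1)

# Cross-task transfer: copy a source task's trained prompt into the target
# task's prompt as initialization, then continue prompt tuning as usual.
source_prompt = SoftPrompt(prompt_len=20, embed_dim=768)   # assume trained
target_prompt = SoftPrompt(prompt_len=20, embed_dim=768)
with torch.no_grad():
    target_prompt.prompt.copy_(source_prompt.prompt)       # warm start
```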
- Efficient Reinforcement Learning in Resource Allocation Problems Through Permutation Invariant Multi-task Learning [6.247939901619901]
We show that in certain settings, the available data can be dramatically increased through a form of multi-task learning.
We provide a theoretical performance bound for the gain in sample efficiency under this setting.
This motivates a new approach to multi-task learning, which involves the design of an appropriate neural network architecture and a prioritized task-sampling strategy (a generic permutation-invariant construction is sketched below).
arXiv Detail & Related papers (2021-02-18T14:13:02Z)
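The summary does not specify the architecture, so the sketch below uses the standard Deep-Sets construction f(x_1, ..., x_n) = rho(sum_i phi(x_i)), one common way to obtain the permutation invariance mentioned above; the paper's actual design may differ.

```python
import torch
import torch.nn as nn

class PermutationInvariantNet(nn.Module):
    """Deep-Sets-style network: output is invariant to input ordering."""

    def __init__(self, in_dim: int, hidden: int = 64, out_dim: int = 1):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))   # per-entity encoder
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))  # set-level decoder

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_entities, in_dim); sum pooling discards ordering.
        return self.rho(self.phi(x).sum(dim=1))

# Permuting the entities (e.g., the resources to allocate) leaves the output
# unchanged, which is what lets data be shared across symmetric sub-problems.
net = PermutationInvariantNet(in_dim=4)
x = torch.randn(2, 5, 4)
assert torch.allclose(net(x), net(x[:, torch.randperm(5)]), atol=1e-5)
```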
- Measuring and Harnessing Transference in Multi-Task Learning [58.48659733262734]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We analyze the dynamics of information transfer, or transference, across tasks throughout training (a sketch of one such measurement follows below).
arXiv Detail & Related papers (2020-10-29T08:25:43Z)
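One concrete way to measure transference is to take a gradient step on task i and record the relative change in task j's loss, Z_{i->j} = 1 - L_j(theta') / L_j(theta); positive values mean task i's update helps task j. The sketch below assumes this reading and may differ from the paper's exact protocol.

```python
import copy
import torch

def transference(model, loss_i, loss_j, lr=1e-2):
    """Z_{i->j}: relative drop in task j's loss after an SGD step on task i."""
    probe = copy.deepcopy(model)              # leave the real model untouched
    base = loss_j(probe).item()               # L_j before the update
    loss_i(probe).backward()                  # gradients from task i only
    with torch.no_grad():
        for p in probe.parameters():          # one plain SGD step
            if p.grad is not None:
                p -= lr * p.grad
    return 1.0 - loss_j(probe).item() / base

# Hypothetical usage: a shared linear model with two regression tasks.
model = torch.nn.Linear(8, 1)
x = torch.randn(64, 8)
y1, y2 = torch.randn(64, 1), torch.randn(64, 1)
task1 = lambda m: torch.nn.functional.mse_loss(m(x), y1)
task2 = lambda m: torch.nn.functional.mse_loss(m(x), y2)
print(f"Z(1->2) = {transference(model, task1, task2):.4f}")
```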
- Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we allow overlapping features and differentiate the hard tasks (a sketch of the block-diagonal idea follows below).
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
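A generic reading of the block-diagonal regularizer above: with tasks and features each assigned to groups, penalizing task-feature coefficients outside their group's block pushes the coefficient matrix toward block-diagonal structure, so each group of tasks relies on its own group of features. The group assignments and penalty below are hypothetical, not TFCL's exact formulation.

```python
import torch

def block_diagonal_penalty(W, task_groups, feat_groups):
    """L1 penalty on task-feature weights outside their group's block.

    W: (n_tasks, n_features) coefficient matrix; task_groups / feat_groups
    map each task / feature to a group id.
    """
    off_block = task_groups[:, None] != feat_groups[None, :]  # (tasks, feats)
    return W[off_block].abs().sum()

# Hypothetical usage: 4 tasks and 6 features split into 2 groups.
W = torch.randn(4, 6, requires_grad=True)
penalty = block_diagonal_penalty(W, torch.tensor([0, 0, 1, 1]),
                                 torch.tensor([0, 0, 0, 1, 1, 1]))
penalty.backward()   # would be added to the task losses during training
```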
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.