Skill Expansion and Composition in Parameter Space
- URL: http://arxiv.org/abs/2502.05932v1
- Date: Sun, 09 Feb 2025 15:22:38 GMT
- Title: Skill Expansion and Composition in Parameter Space
- Authors: Tenglong Liu, Jianxiong Li, Yinan Zheng, Haoyi Niu, Yixing Lan, Xin Xu, Xianyuan Zhan
- Abstract summary: Parametric Skill Expansion and Composition (PSEC) is a new framework designed to iteratively evolve the agents' capabilities.
PSEC exhibits superior capacity to leverage prior knowledge to efficiently tackle new challenges.
- Score: 17.016614374151747
- Abstract: Humans excel at reusing prior knowledge to address new challenges and at developing skills while solving problems. This paradigm is becoming increasingly popular in the development of autonomous agents, as it yields systems that can self-evolve in response to new challenges like human beings. However, previous methods suffer from limited training efficiency when expanding new skills and fail to fully leverage prior knowledge to facilitate new task learning. In this paper, we propose Parametric Skill Expansion and Composition (PSEC), a new framework designed to iteratively evolve an agent's capabilities and efficiently address new challenges by maintaining a manageable skill library. This library progressively integrates skill primitives as plug-and-play Low-Rank Adaptation (LoRA) modules in parameter-efficient finetuning, facilitating efficient and flexible skill expansion. This structure also enables direct skill composition in parameter space by merging LoRA modules that encode different skills, leveraging shared information across skills to effectively program new skills. Based on this, we propose a context-aware module that dynamically activates different skills to collaboratively handle new tasks. Across diverse applications including multi-objective composition, dynamics shift, and continual policy shift, results on the D4RL and DSRL benchmarks and the DeepMind Control Suite show that PSEC exhibits a superior capacity to leverage prior knowledge to efficiently tackle new challenges, as well as to expand its skill library to evolve its capabilities. Project website: https://ltlhuuu.github.io/PSEC/.
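The abstract combines three mechanisms: a library of skills stored as LoRA modules over a frozen base policy, merging of those modules directly in parameter space, and a context-aware module that weights the skills per input. The sketch below illustrates this general pattern under simple assumptions (a single linear layer, a softmax gating network, per-sample linear merging of LoRA deltas); the class and parameter names are illustrative and are not taken from the PSEC implementation.

```python
# Minimal sketch of parameter-space skill composition with LoRA modules.
# Illustrative only; not the authors' exact architecture.
import torch
import torch.nn as nn


class LoRASkill(nn.Module):
    """One skill stored as a low-rank update (B @ A) to a frozen base weight."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))

    def delta(self) -> torch.Tensor:
        # Full-rank view of this skill's weight offset.
        return self.B @ self.A


class ComposedPolicyLayer(nn.Module):
    """Frozen base layer plus a library of plug-and-play LoRA skills,
    merged in parameter space with context-dependent weights."""

    def __init__(self, d_in: int, d_out: int, num_skills: int):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)  # prior knowledge stays fixed
        self.skills = nn.ModuleList(LoRASkill(d_in, d_out) for _ in range(num_skills))
        # Context-aware module: maps the current observation to skill weights.
        self.gate = nn.Sequential(nn.Linear(d_in, num_skills), nn.Softmax(dim=-1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.gate(x)                                          # (batch, K)
        deltas = torch.stack([s.delta() for s in self.skills])    # (K, d_out, d_in)
        merged = torch.einsum("bk,koi->boi", w, deltas)           # per-sample merged update
        return self.base(x) + torch.einsum("boi,bi->bo", merged, x)


# Expanding the library is just appending another LoRA module and resizing the gate.
layer = ComposedPolicyLayer(d_in=32, d_out=16, num_skills=3)
layer.skills.append(LoRASkill(32, 16))
layer.gate = nn.Sequential(nn.Linear(32, 4), nn.Softmax(dim=-1))
out = layer(torch.randn(5, 32))
```

In this reading, skill expansion trains only the new low-rank module while the frozen base weights preserve prior knowledge, and composition reduces to a weighted sum of LoRA deltas chosen by the gating network.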
Related papers
- Latent-Predictive Empowerment: Measuring Empowerment without a Simulator [56.53777237504011]
We present Latent-Predictive Empowerment (LPE), an algorithm that can compute empowerment in a more practical manner.
LPE learns large skillsets by maximizing an objective that is a principled replacement for the mutual information between skills and states.
arXiv Detail & Related papers (2024-10-15T00:41:18Z) - KIF: Knowledge Identification and Fusion for Language Model Continual Learning [41.28933724210434]
We introduce a novel framework for language models, named Knowledge Identification and Fusion (KIF)
KIF segregates the model into 'skill units' based on parameter dependencies, allowing for more precise control.
It employs a novel group-wise knowledge identification technique to ascertain the importance distribution of skill units for a new task.
As a result, KIF achieves an optimal balance between retaining prior knowledge and excelling in new tasks.
arXiv Detail & Related papers (2024-08-09T17:44:45Z) - Learn it or Leave it: Module Composition and Pruning for Continual Learning [48.07144492109635]
MoCL-P is a lightweight continual learning method that balances knowledge integration and computational overhead.
Our evaluation shows that MoCL-P achieves state-of-the-art performance and improves parameter efficiency by up to three times.
arXiv Detail & Related papers (2024-06-26T19:18:28Z) - Agentic Skill Discovery [19.5703917813767]
Language-conditioned robotic skills make it possible to apply the high-level reasoning of Large Language Models (LLMs) to low-level robotic control.
A remaining challenge is to acquire a diverse set of fundamental skills.
We introduce a novel framework for skill discovery that is entirely driven by LLMs.
arXiv Detail & Related papers (2024-05-23T19:44:03Z) - Customizable Combination of Parameter-Efficient Modules for Multi-Task Learning [11.260650180067278]
We introduce a novel approach that combines task-common skills and task-specific skills.
A skill assignment matrix is jointly learned; a minimal sketch of this allocation idea appears after this list.
Our findings demonstrate that C-Poly outperforms fully-shared, task-specific, and skill-indistinguishable baselines.
arXiv Detail & Related papers (2023-12-06T02:47:56Z) - Variational Curriculum Reinforcement Learning for Unsupervised Discovery of Skills [25.326624139426514]
We propose a novel approach to unsupervised skill discovery based on information theory, called Value Uncertainty Variational Curriculum (VUVC).
We prove that, under regularity conditions, VUVC accelerates the increase of entropy in the visited states compared to the uniform curriculum.
We also demonstrate that the skills discovered by our method successfully complete a real-world robot navigation task in a zero-shot setup.
arXiv Detail & Related papers (2023-10-30T10:34:25Z) - Hierarchical Empowerment: Towards Tractable Empowerment-Based Skill Learning [65.41865750258775]
General purpose agents will require large repertoires of skills.
We introduce a new framework, Hierarchical Empowerment, that makes computing empowerment more tractable.
In a popular ant navigation domain, our four-level agents are able to learn skills that cover a surface area over two orders of magnitude larger than prior work.
arXiv Detail & Related papers (2023-07-06T02:27:05Z) - Hierarchical Deep Counterfactual Regret Minimization [53.86223883060367]
In this paper, we introduce Hierarchical Deep Counterfactual Regret Minimization (HDCFR), the first hierarchical version of Deep CFR, an innovative method that boosts learning efficiency in tasks involving very large state spaces and deep game trees.
A notable advantage of HDCFR over previous works is its ability to facilitate learning with predefined (human) expertise and foster the acquisition of skills that can be transferred to similar tasks.
arXiv Detail & Related papers (2023-05-27T02:05:41Z) - ASPiRe:Adaptive Skill Priors for Reinforcement Learning [28.376277797807706]
ASPiRe (Adaptive Skill Prior for RL) is a new approach to accelerate reinforcement learning.
Our framework learns a library of distinct skill priors from a collection of specialized datasets.
Our experiments demonstrate that ASPiRe can significantly accelerate the learning of new downstream tasks.
arXiv Detail & Related papers (2022-09-30T03:22:27Z) - Combining Modular Skills in Multitask Learning [149.8001096811708]
A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks.
In this work, we assume each task is associated with a subset of latent discrete skills from a (potentially small) inventory.
We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning.
arXiv Detail & Related papers (2022-02-28T16:07:19Z) - Efficient Feature Transformations for Discriminative and Generative Continual Learning [98.10425163678082]
We propose a simple task-specific feature map transformation strategy for continual learning.
These transformations provide powerful flexibility for learning new tasks, achieved with minimal parameters added to the base architecture.
We demonstrate the efficacy and efficiency of our method with an extensive set of experiments in discriminative (CIFAR-100 and ImageNet-1K) and generative sequences of tasks.
arXiv Detail & Related papers (2021-03-25T01:48:14Z)
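The C-Poly entry ("Customizable Combination of Parameter-Efficient Modules for Multi-Task Learning") and "Combining Modular Skills in Multitask Learning" above both rest on the same mechanism: a small inventory of shared skill modules plus a jointly learned task-by-skill allocation that decides which skills each task draws on. The sketch below illustrates that mechanism under simple assumptions (linear skill modules, a softmax over per-task allocation logits); names and shapes are illustrative and not taken from either paper's code.

```python
# Minimal sketch of a jointly learned task-skill allocation matrix.
# Illustrative only; assumes linear skill modules and softmax allocation.
import torch
import torch.nn as nn


class SkillAllocationModel(nn.Module):
    def __init__(self, num_tasks: int, num_skills: int, d_in: int, d_out: int):
        super().__init__()
        # Inventory of shared skill modules (here: simple linear maps).
        self.skills = nn.ModuleList(nn.Linear(d_in, d_out) for _ in range(num_skills))
        # Task-by-skill logits; softmax over each row gives that task's skill mixture.
        self.alloc_logits = nn.Parameter(torch.zeros(num_tasks, num_skills))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        weights = torch.softmax(self.alloc_logits[task_id], dim=-1)   # (K,)
        outputs = torch.stack([skill(x) for skill in self.skills])    # (K, batch, d_out)
        return torch.einsum("k,kbo->bo", weights, outputs)


model = SkillAllocationModel(num_tasks=5, num_skills=4, d_in=16, d_out=8)
y = model(torch.randn(3, 16), task_id=2)  # task 2 uses its own learned skill mix
```

Training optimizes the skill modules and the allocation logits jointly, so tasks that share structure tend to converge to overlapping skill mixtures while dissimilar tasks specialize.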
This list is automatically generated from the titles and abstracts of the papers on this site.