Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional
MoEs
- URL: http://arxiv.org/abs/2206.04674v1
- Date: Thu, 9 Jun 2022 17:59:59 GMT
- Title: Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional
MoEs
- Authors: Jinguo Zhu, Xizhou Zhu, Wenhai Wang, Xiaohua Wang, Hongsheng Li,
Xiaogang Wang, Jifeng Dai
- Abstract summary: We find that interference among different tasks and modalities is the main factor behind the performance degradation of generalist models.
We introduce the Conditional Mixture-of-Experts (Conditional MoEs) to generalist models.
Code and pre-trained generalist models shall be released.
- Score: 63.936622239286685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To build an artificial neural network like the biological intelligence
system, recent works have unified numerous tasks into a generalist model, which
can process various tasks with shared parameters and does not require any
task-specific modules. While generalist models achieve promising results on
various benchmarks, they have performance degradation on some tasks compared
with task-specialized models. In this work, we find that interference among
different tasks and modalities is the main factor behind this phenomenon. To
mitigate such interference, we introduce the Conditional Mixture-of-Experts
(Conditional MoEs) to generalist models. Routing strategies under different
levels of conditions are proposed to take both the training/inference cost and
generalization ability into account. By incorporating the proposed Conditional
MoEs, the recently proposed generalist model Uni-Perceiver can effectively
mitigate the interference across tasks and modalities, and achieves
state-of-the-art results on a series of downstream tasks via prompt tuning on
1% of downstream data. Moreover, the introduction of Conditional MoEs still
holds the generalization ability of generalist models to conduct zero-shot
inference on new tasks, e.g., video-text retrieval and video captioning. Code and
pre-trained generalist models shall be released.
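The routing idea described above can be illustrated with a minimal sketch. Note that this is a hypothetical toy implementation, not the authors' code: the key property it shows is that the gate is conditioned on a discrete attribute of the input (e.g., its modality), so all tokens sharing that attribute are routed to the same expert, which keeps inference cost low while still separating interfering tasks. The expert functions, gate logits, and condition names here are illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

class ConditionalMoE:
    """Toy top-1 conditional MoE layer (illustrative sketch only).

    Routing depends on a discrete condition (e.g., a modality id),
    not on the token content itself, so every token under the same
    condition deterministically uses one expert.
    """
    def __init__(self, experts, gate_logits):
        self.experts = experts          # list of callables: token -> token
        self.gate_logits = gate_logits  # condition -> per-expert logits

    def __call__(self, token, condition):
        probs = softmax(self.gate_logits[condition])
        k = max(range(len(probs)), key=lambda i: probs[i])  # top-1 expert
        # Scale the expert output by its gate probability, as a real
        # implementation would to keep the gate differentiable.
        return [probs[k] * x for x in self.experts[k](token)]

# Two toy "experts": one doubles the token, one negates it.
experts = [lambda t: [2 * x for x in t], lambda t: [-x for x in t]]
gate = {"image": [2.0, 0.0], "text": [0.0, 2.0]}
moe = ConditionalMoE(experts, gate)
```

With this gate, `moe(token, "image")` always uses the first expert and `moe(token, "text")` the second, so tokens from different modalities never share expert parameters, which is the interference-mitigation effect the abstract describes.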
Related papers
- The Non-Local Model Merging Problem: Permutation Symmetries and Variance Collapse [25.002218722102505]
Model merging aims to efficiently combine the weights of multiple expert models, each trained on a specific task, into a single multi-task model.
This work explores the more challenging scenario of "non-local" merging.
Standard merging techniques often fail to generalize effectively in this non-local setting.
We propose a multi-task technique to re-scale and shift the output activations of the merged model for each task, aligning its output statistics with those of the corresponding task-specific expert models.
arXiv Detail & Related papers (2024-10-16T17:41:59Z) - Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
We propose Task Groupings Regularization, a novel approach that benefits from model heterogeneity by grouping and aligning conflicting tasks.
arXiv Detail & Related papers (2024-05-26T13:11:55Z) - Enabling Natural Zero-Shot Prompting on Encoder Models via Statement-Tuning [55.265138447400744]
Statement-Tuning is a technique that models discriminative tasks as a set of finite statements and trains an encoder model to discriminate between the potential statements to determine the label.
Experimental results demonstrate that Statement-Tuning achieves competitive performance compared to state-of-the-art LLMs with significantly fewer parameters.
The study investigates the impact of several design choices on few-shot and zero-shot generalization, revealing that Statement-Tuning can achieve strong performance with modest training data.
arXiv Detail & Related papers (2024-04-19T14:05:03Z) - Domain Generalization via Balancing Training Difficulty and Model
Capability [61.053202176230904]
Domain generalization (DG) aims to learn domain-generalizable models from one or multiple source domains that can perform well in unseen target domains.
Despite its recent progress, most existing work suffers from the misalignment between the difficulty level of training samples and the capability of contemporarily trained models.
We design MoDify, a Momentum Difficulty framework that tackles the misalignment by balancing the seesaw between the model's capability and the samples' difficulties.
arXiv Detail & Related papers (2023-09-02T07:09:23Z) - Uni-Perceiver v2: A Generalist Model for Large-Scale Vision and
Vision-Language Tasks [86.66733026149892]
We propose Uni-Perceiver v2, which is the first generalist model capable of handling major large-scale vision and vision-language tasks.
Specifically, images are encoded as general region proposals, while texts are encoded via a Transformer-based language model.
Uni-Perceiver v2 achieves competitive performance on a broad range of vision and vision-language tasks.
arXiv Detail & Related papers (2022-11-17T18:59:52Z) - Uni-Perceiver: Pre-training Unified Architecture for Generic Perception
for Zero-shot and Few-shot Tasks [73.63892022944198]
We present a generic perception architecture named Uni-Perceiver.
It processes a variety of modalities and tasks with unified modeling and shared parameters.
Results show that our pre-trained model without any tuning can achieve reasonable performance even on novel tasks.
arXiv Detail & Related papers (2021-12-02T18:59:50Z) - Generalized Hidden Parameter MDPs Transferable Model-based RL in a
Handful of Trials [13.051708608864539]
Generalized Hidden MDPs (GHP-MDPs) describe a family of MDPs where both dynamics and reward can change as a function of hidden parameters that vary across tasks.
We experimentally demonstrate state-of-the-art performance and sample-efficiency on a new challenging MuJoCo task using reward and dynamics latent spaces.
arXiv Detail & Related papers (2020-02-08T02:49:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.