Taming Sparsely Activated Transformer with Stochastic Experts
- URL: http://arxiv.org/abs/2110.04260v2
- Date: Tue, 12 Oct 2021 02:46:26 GMT
- Title: Taming Sparsely Activated Transformer with Stochastic Experts
- Authors: Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan,
Ruofei Zhang, Tuo Zhao, Jianfeng Gao
- Abstract summary: Sparsely activated models (SAMs) can easily scale to an outrageously large number of parameters without a significant increase in computational cost.
In this paper, we propose a new expert-based model, THOR (Transformer witH StOchastic ExpeRts).
Unlike classic expert-based models, such as the Switch Transformer, experts in THOR are randomly activated for each input during training and inference.
- Score: 76.0711573018493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparsely activated models (SAMs), such as Mixture-of-Experts (MoE), can
easily scale to an outrageously large number of parameters without a
significant increase in computational cost. However, SAMs are reported to be
parameter-inefficient, in that larger models do not always lead to better
performance. While most ongoing research focuses on improving SAMs by
exploring methods of routing inputs to experts, our analysis reveals that such
research might not lead to the solution we expect, i.e., the commonly-used
routing methods based on gating mechanisms do not work better than randomly
routing inputs to experts. In this paper, we propose a new expert-based model,
THOR (Transformer witH StOchastic ExpeRts). Unlike classic expert-based models,
such as the Switch Transformer, experts in THOR are randomly activated for each
input during training and inference. THOR models are trained using a
consistency-regularized loss, where experts learn not only from training data
but also from other experts as teachers, such that all the experts make
consistent predictions. We validate the effectiveness of THOR on machine
translation tasks. Results show that THOR models are more parameter efficient
in that they significantly outperform the Transformer and MoE models across
various settings. For example, in multilingual translation, THOR outperforms
the Switch Transformer by 2 BLEU points and obtains the same BLEU score as
that of a state-of-the-art MoE model that is 18 times larger. Our code is
publicly available at:
https://github.com/microsoft/Stochastic-Mixture-of-Experts.
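
A minimal PyTorch sketch of the two ideas in the abstract follows: experts that are activated uniformly at random (no learned gate) and a consistency-regularized loss in which two stochastic forward passes teach each other. The layer shapes, the symmetric KL formulation, and the weight `alpha` are illustrative assumptions rather than the paper's exact configuration; the linked repository contains the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticExpertLayer(nn.Module):
    """Feed-forward sub-layer with several experts; one expert is drawn
    uniformly at random per forward pass instead of being picked by a
    learned gating network."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        idx = torch.randint(len(self.experts), (1,)).item()  # random activation
        return self.experts[idx](x)

def consistency_regularized_loss(model: nn.Module, x: torch.Tensor,
                                 labels: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Two stochastic passes activate (generally) different experts; each is
    trained on the labels and pulled toward the other's prediction with a
    symmetric KL term, so the experts act as teachers for one another."""
    logits_a, logits_b = model(x), model(x)
    ce = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    kl = (F.kl_div(log_p_a, log_p_b, reduction="batchmean", log_target=True)
          + F.kl_div(log_p_b, log_p_a, reduction="batchmean", log_target=True))
    return ce + alpha * kl
```

Because experts are also randomly activated at inference, a stochastic pass needs no gating network; how predictions from multiple passes are combined at test time, if at all, is best taken from the paper and code above.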
Related papers
- Unchosen Experts Can Contribute Too: Unleashing MoE Models' Power by Self-Contrast [58.98411447739218]
Mixture-of-Experts (MoE) has emerged as a prominent architecture for scaling model size while maintaining computational efficiency.
We propose Self-Contrast Mixture-of-Experts (SCMoE), a training-free strategy that utilizes unchosen experts in a self-contrast manner during inference.
Our method is conceptually simple and computationally lightweight, as it incurs minimal latency compared to greedy decoding.
arXiv Detail & Related papers (2024-05-23T12:45:29Z)
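
The SCMoE summary above describes contrasting the model's usual output with one obtained from otherwise unchosen experts at inference time. The snippet below is a hypothetical illustration of such a self-contrast combination of next-token logits; the specific "strong minus weak" rule and the weight `beta` are assumptions for illustration, not the formula from the paper.

```python
import torch

def self_contrast_logits(logits_strong: torch.Tensor,
                         logits_weak: torch.Tensor,
                         beta: float = 0.5) -> torch.Tensor:
    """Hypothetical self-contrast: sharpen the prediction of the normally
    routed (strong) pass against a weak pass that activates the otherwise
    unchosen experts. Both passes reuse the same frozen model, which is why
    such a strategy is training-free."""
    return logits_strong + beta * (logits_strong - logits_weak)

# Usage sketch (assumed interface): run the MoE decoder twice per step, once
# with its normal top-k routing and once routed through the unchosen experts,
# then pick the next token from the contrasted logits:
# next_token = self_contrast_logits(strong, weak).argmax(dim=-1)
```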
- Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models [33.834215393960605]
We introduce the Dynamic Mixture of Experts (DynMoE) technique to enhance the efficiency of training and inference for Transformer-based foundational models.
DynMoE incorporates a novel gating method that enables each token to automatically determine the number of experts to activate.
Our results demonstrate the effectiveness of our approach, which achieves performance competitive with GMoE on vision and language tasks and with MoE-LLaVA on vision-language tasks.
arXiv Detail & Related papers (2024-05-23T08:18:30Z)
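
One way to let each token choose how many experts to activate, as the DynMoE summary above describes, is to activate every expert whose gate score clears a learned threshold. The gate below is an illustrative sketch in that spirit, not DynMoE's actual gating rule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenAdaptiveGate(nn.Module):
    """Illustrative gate: each token activates every expert whose score
    exceeds a learned per-expert threshold, so different tokens may use
    different numbers of experts."""
    def __init__(self, d_model: int, num_experts: int):
        super().__init__()
        self.score = nn.Linear(d_model, num_experts)
        self.threshold = nn.Parameter(torch.zeros(num_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (tokens, d_model)
        scores = torch.sigmoid(self.score(x))               # (tokens, experts)
        chosen = scores > torch.sigmoid(self.threshold)     # token-adaptive expert set
        # Guarantee at least one active expert per token via the top-scoring one.
        top1 = F.one_hot(scores.argmax(dim=-1), scores.size(-1)).bool()
        return chosen | top1                                 # boolean activation mask
```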
- Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters [11.05223262950967]
Mixture of Experts (MoE) architectures have recently gained traction thanks to their ability to scale model capacity while keeping the computational cost affordable.
This paper attempts to demystify the use of MoE for parameter-efficient fine-tuning of Audio Spectrogram Transformers to audio and speech downstream tasks.
It uses adapters as the experts and, leveraging the recent Soft MoE method, relies on a soft assignment between the input tokens and experts to keep the computation time limited.
arXiv Detail & Related papers (2024-02-01T18:16:04Z)
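
The entry above combines adapters-as-experts with Soft MoE's soft assignment between tokens and experts. Below is a sketch of that idea, assuming one slot per expert and a residual adapter update; the dimensions, initialization, and residual placement are illustrative choices rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class SoftMoEAdapters(nn.Module):
    """Illustrative Soft-MoE-style layer with adapters as experts: each
    expert processes one "slot" that is a soft (weighted) mixture of all
    input tokens, and each token receives a soft mixture of the expert
    outputs, so no hard token-to-expert routing is needed."""
    def __init__(self, d_model: int, bottleneck: int, num_experts: int):
        super().__init__()
        self.phi = nn.Parameter(torch.randn(d_model, num_experts) * 0.02)
        self.adapters = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, bottleneck), nn.ReLU(), nn.Linear(bottleneck, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:     # x: (batch, tokens, d_model)
        logits = x @ self.phi                                 # (batch, tokens, experts)
        dispatch = logits.softmax(dim=1)                      # mix tokens into each expert slot
        combine = logits.softmax(dim=-1)                      # mix expert outputs back per token
        slots = torch.einsum("bte,btd->bed", dispatch, x)     # one slot per expert
        outs = torch.stack([a(slots[:, i]) for i, a in enumerate(self.adapters)], dim=1)
        return x + torch.einsum("bte,bed->btd", combine, outs)  # residual adapter update
```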
- Soft Merging of Experts with Adaptive Routing [38.962451264172856]
We introduce Soft Merging of Experts with Adaptive Routing (SMEAR).
SMEAR avoids discrete routing by using a single "merged" expert constructed via a weighted average of all of the experts' parameters.
We empirically validate that models using SMEAR outperform models that route based on metadata or learn sparse routing through gradient estimation.
arXiv Detail & Related papers (2023-06-06T15:04:31Z)
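
A sketch of the merging idea in the SMEAR summary above: rather than routing each example to one discrete expert, the router's probabilities are used to average the experts' parameters into a single merged expert, which is then applied. The shapes, the per-example routing input, and the two-layer expert form are assumptions for illustration, not the exact SMEAR implementation.

```python
import torch
import torch.nn as nn

class SoftMergedExperts(nn.Module):
    """Illustrative SMEAR-style layer: the routing distribution weights an
    average over the experts' parameters, and the single merged expert is
    applied, so no discrete (non-differentiable) routing decision is made."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.w_in = nn.Parameter(torch.randn(num_experts, d_ff, d_model) * 0.02)
        self.w_out = nn.Parameter(torch.randn(num_experts, d_model, d_ff) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (batch, d_model)
        probs = self.router(x).softmax(dim=-1)                 # (batch, experts)
        w_in = torch.einsum("be,efd->bfd", probs, self.w_in)   # per-example merged weights
        w_out = torch.einsum("be,edf->bdf", probs, self.w_out)
        h = torch.relu(torch.einsum("bd,bfd->bf", x, w_in))
        return torch.einsum("bf,bdf->bd", h, w_out)
```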
- Adapted Multimodal BERT with Layer-wise Fusion for Sentiment Analysis [84.12658971655253]
We propose Adapted Multimodal BERT, a BERT-based architecture for multimodal tasks.
The adapter adjusts the pretrained language model for the task at hand, while the fusion layers perform task-specific, layer-wise fusion of audio-visual information with textual BERT representations.
In our ablations, we see that this approach leads to efficient models that can outperform their fine-tuned counterparts and are robust to input noise.
arXiv Detail & Related papers (2022-12-01T17:31:42Z)
- Gating Dropout: Communication-efficient Regularization for Sparsely Activated Transformers [78.77361169167149]
We propose Gating Dropout, which allows tokens to ignore the gating network and stay at their local machines.
We also show that, similar to traditional dropout, Gating Dropout has a regularization effect during training, resulting in improved generalization performance.
arXiv Detail & Related papers (2022-05-28T05:12:43Z)
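
The Gating Dropout summary above describes letting tokens occasionally ignore the gating network so they stay on their local machine, which saves cross-device communication and acts as a regularizer. A small illustrative routing rule in that spirit follows; the probability `p` and the index-based interface are assumptions, not the paper's exact mechanism.

```python
import torch

def gating_dropout_route(gate_choice: torch.Tensor,
                         local_expert: torch.Tensor,
                         p: float = 0.2,
                         training: bool = True) -> torch.Tensor:
    """Illustrative rule: with probability p each token ignores the gating
    network's expert (which may live on a remote device) and is instead sent
    to the expert hosted on its local machine, skipping all-to-all traffic.
    Both arguments hold one expert index per token."""
    if not training or p <= 0.0:
        return gate_choice
    drop = torch.rand(gate_choice.shape, device=gate_choice.device) < p
    return torch.where(drop, local_expert, gate_choice)
```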
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with distributionally robust optimization (DRO) using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Efficient pre-training objectives for Transformers [84.64393460397471]
We study several efficient pre-training objectives for Transformer-based models.
We find that eliminating the MASK token and considering the whole output during the loss computation are essential choices for improving performance.
arXiv Detail & Related papers (2021-04-20T00:09:37Z)