Learning More Generalized Experts by Merging Experts in Mixture-of-Experts
- URL: http://arxiv.org/abs/2405.11530v1
- Date: Sun, 19 May 2024 11:55:48 GMT
- Title: Learning More Generalized Experts by Merging Experts in Mixture-of-Experts
- Authors: Sejik Park
- Abstract summary: We show that incorporating a shared layer in a mixture-of-experts can lead to performance degradation.
We merge the two most frequently selected experts and update the least frequently selected expert using the combination of experts.
Our algorithm enhances transfer learning and mitigates catastrophic forgetting when applied to multi-domain task incremental learning.
- Score: 0.5221459608786241
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We observe that incorporating a shared layer in a mixture-of-experts can lead to performance degradation. This leads us to hypothesize that learning shared features poses challenges in deep learning, potentially caused by the same feature being learned as various different features. To address this issue, we track each expert's usage frequency and merge the two most frequently selected experts. We then update the least frequently selected expert using the combination of experts. This approach, combined with the subsequent learning of the router's expert selection, allows the model to determine if the most frequently selected experts have learned the same feature differently. If they have, the combined expert can be further trained to learn a more general feature. Consequently, our algorithm enhances transfer learning and mitigates catastrophic forgetting when applied to multi-domain task incremental learning.
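The merge step described in the abstract can be illustrated with a short sketch. This is a minimal illustration in PyTorch, not the authors' implementation: the top-1 routing, the two-layer MLP experts, the class name `MergingMoELayer`, and the equal-weight parameter average are all assumptions made purely for the example.

```python
# Minimal sketch (assumed details, not the paper's code) of frequency-based
# expert merging: track how often each expert is selected, average the two
# most-used experts, and overwrite the least-used expert with the result.
import torch
import torch.nn as nn


class MergingMoELayer(nn.Module):
    def __init__(self, num_experts: int, dim: int, hidden: int):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
             for _ in range(num_experts)]
        )
        self.router = nn.Linear(dim, num_experts)
        # Running count of how often the router selects each expert.
        self.register_buffer("usage_counts", torch.zeros(num_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Top-1 routing; accumulate selection frequencies.
        top1 = self.router(x).argmax(dim=-1)  # (tokens,)
        self.usage_counts += torch.bincount(top1, minlength=len(self.experts)).float()
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top1 == i
            if mask.any():
                out[mask] = expert(x[mask])
        return out

    @torch.no_grad()
    def merge_experts(self) -> None:
        # Rank experts by usage: the two most frequently selected experts are
        # averaged, and the result replaces the least frequently selected one.
        order = torch.argsort(self.usage_counts, descending=True)
        top_a, top_b, least = order[0].item(), order[1].item(), order[-1].item()
        for p_least, p_a, p_b in zip(
            self.experts[least].parameters(),
            self.experts[top_a].parameters(),
            self.experts[top_b].parameters(),
        ):
            p_least.copy_(0.5 * (p_a + p_b))
        self.usage_counts.zero_()
```

In this sketch, `merge_experts` would be called periodically during training; the router then continues to be trained so that, as the abstract describes, the model can determine whether the combined expert has learned a more general feature.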
Related papers
- HoME: Hierarchy of Multi-Gate Experts for Multi-Task Learning at Kuaishou [19.113649341888532]
We present the practical problems and lessons learned at Kuaishou's short-video services.
In industry, a widely-used multi-task framework is the Mixture-of-Experts (MoE) paradigm.
arXiv Detail & Related papers (2024-08-10T04:25:48Z)
- HyperMoE: Towards Better Mixture of Experts via Transferring Among Experts [25.504602853436047]
Mixture of Experts (MoE) for language models has been proven effective in augmenting the capacity of models by dynamically routing each input token to a specific subset of experts for processing.
We propose HyperMoE, a novel MoE framework built upon Hypernetworks.
This framework integrates the computational processes of MoE with the concept of knowledge transferring in multi-task learning.
arXiv Detail & Related papers (2024-02-20T02:09:55Z)
- Divide and not forget: Ensemble of selectively trained experts in Continual Learning [0.2886273197127056]
Class-incremental learning is becoming more popular as it helps models widen their applicability while not forgetting what they already know.
A trend in this area is to use a mixture-of-expert technique, where different models work together to solve the task.
SEED selects only one expert, the one best suited to the task at hand, and uses data from that task to fine-tune only this expert.
arXiv Detail & Related papers (2024-01-18T18:25:29Z)
- Inverse Reinforcement Learning with Sub-optimal Experts [56.553106680769474]
We study the theoretical properties of the class of reward functions that are compatible with a given set of experts.
Our results show that the presence of multiple sub-optimal experts can significantly shrink the set of compatible rewards.
We analyze a uniform sampling algorithm that is minimax optimal whenever the sub-optimal experts' performance level is sufficiently close to that of the optimal agent.
arXiv Detail & Related papers (2024-01-08T12:39:25Z)
- Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy [84.11508381847929]
Sparsely activated Mixture-of-Experts (SMoE) has shown promise to scale up the learning capacity of neural networks.
We propose M-SMoE, which leverages routing statistics to guide expert merging.
Our MC-SMoE achieves up to 80% memory savings and a 20% FLOPs reduction, with virtually no loss in performance.
arXiv Detail & Related papers (2023-10-02T16:51:32Z)
- NCL++: Nested Collaborative Learning for Long-Tailed Visual Recognition [63.90327120065928]
We propose Nested Collaborative Learning (NCL++), which tackles the long-tailed learning problem through collaborative learning.
To achieve collaborative learning in the long-tailed setting, balanced online distillation is proposed.
To improve the ability to distinguish confusing categories, we further propose Hard Category Mining.
arXiv Detail & Related papers (2023-06-29T06:10:40Z)
- Bayesian Q-learning With Imperfect Expert Demonstrations [56.55609745121237]
We propose a novel algorithm to speed up Q-learning with the help of a limited amount of imperfect expert demonstrations.
We evaluate our approach on a sparse-reward chain environment and six more complicated Atari games with delayed rewards.
arXiv Detail & Related papers (2022-10-01T17:38:19Z)
- Nested Collaborative Learning for Long-Tailed Visual Recognition [71.6074806468641]
NCL consists of two core components, namely Nested Individual Learning (NIL) and Nested Balanced Online Distillation (NBOD).
To learn representations more thoroughly, both NIL and NBOD are formulated in a nested way, in which the learning is conducted on not just all categories from a full perspective but some hard categories from a partial perspective.
In NCL, the learning from the two perspectives is nested, highly related, and complementary, helping the network capture not only global, robust features but also a meticulous distinguishing ability.
arXiv Detail & Related papers (2022-03-29T08:55:39Z)
- Online Learning with Uncertain Feedback Graphs [12.805267089186533]
The relationship among experts can be captured by a feedback graph, which can be used to assist the learner's decision making.
In practice, the nominal feedback graph often entails uncertainties, which renders it impossible to reveal the actual relationship among experts.
The present work studies various cases of potential uncertainties, and develops novel online learning algorithms to deal with them.
arXiv Detail & Related papers (2021-06-15T21:21:30Z)
- Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method is able to achieve superior performances compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.