EEML: Ensemble Embedded Meta-learning
- URL: http://arxiv.org/abs/2206.09195v1
- Date: Sat, 18 Jun 2022 12:37:17 GMT
- Title: EEML: Ensemble Embedded Meta-learning
- Authors: Geng Li, Boyuan Ren, Hongzhi Wang
- Abstract summary: We propose an ensemble embedded meta-learning algorithm (EEML) that explicitly uses a multi-model ensemble to organize prior knowledge into diverse, specialized experts.
We rely on a task embedding cluster mechanism to route diverse tasks to matching experts during training and to guide how the experts collaborate in the test phase.
Experimental results show that the proposed method easily outperforms recent state-of-the-art methods on few-shot learning problems.
- Score: 5.9514420658483935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To accelerate learning from few samples, meta-learning draws on
prior knowledge from previous tasks. However, inconsistent task distributions
and task heterogeneity are hard to handle with a single globally shared model
initialization. In this paper, building on gradient-based meta-learning, we
propose an ensemble embedded meta-learning algorithm (EEML) that explicitly
uses a multi-model ensemble to organize prior knowledge into diverse,
specialized experts. We rely on a task embedding cluster mechanism to route
diverse tasks to matching experts during training and to guide how the experts
collaborate in the test phase. As a result, each expert can focus on its own
area of expertise, and the experts cooperate on incoming tasks to resolve task
heterogeneity. Experimental results show that the proposed method easily
outperforms recent state-of-the-art methods on few-shot learning problems,
which validates the importance of differentiation and cooperation.
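To make the routing-and-collaboration mechanism above concrete, a minimal NumPy sketch follows. It is illustrative, not the paper's implementation: the mean/std task embedding, the number of experts, and the softmax collaboration weights are all assumptions standing in for EEML's learned embedding network and expert models.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_task(support_x):
    # Illustrative task embedding: per-feature mean and std of the
    # support set. The paper's actual embedding network is not shown here.
    return np.concatenate([support_x.mean(axis=0), support_x.std(axis=0)])

def assign_expert(z, centroids):
    # Training phase: route a task to the expert with the nearest centroid.
    return int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

def collaboration_weights(z, centroids, temperature=1.0):
    # Test phase: soft weights so closer experts contribute more.
    logits = -np.linalg.norm(centroids - z, axis=1) / temperature
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy setup: 3 experts over task embeddings in R^4 (2 features -> mean+std).
n_experts, emb_dim = 3, 4
centroids = rng.normal(size=(n_experts, emb_dim))   # cluster centers
experts = rng.normal(size=(n_experts, emb_dim))     # stand-in expert params

support_x = rng.normal(size=(5, 2))                 # one few-shot task
z = embed_task(support_x)

k = assign_expert(z, centroids)                     # training-phase routing
w = collaboration_weights(z, centroids)             # test-phase cooperation
ensemble = (w[:, None] * experts).sum(axis=0)       # weighted expert blend
print("routed to expert", k, "| collaboration weights", np.round(w, 3))
```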
Related papers
- Towards Multi-Objective High-Dimensional Feature Selection via Evolutionary Multitasking [63.91518180604101]
This paper develops a novel EMT framework for high-dimensional feature selection problems, namely MO-FSEMT.
A task-specific knowledge transfer mechanism is designed to leverage the advantageous information of each task, enabling the discovery and effective transmission of high-quality solutions.
arXiv Detail & Related papers (2024-01-03T06:34:39Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
- Meta-Reinforcement Learning via Exploratory Task Clustering [43.936406999765886]
We develop a dedicated exploratory policy to discover task structures via divide-and-conquer.
The knowledge of the identified clusters helps to narrow the search space of task-specific information.
Experiments on various MuJoCo tasks show that the proposed method can effectively uncover cluster structures in both rewards and state dynamics.
arXiv Detail & Related papers (2023-02-15T21:42:38Z)
- Modular Approach to Machine Reading Comprehension: Mixture of Task-Aware Experts [0.5801044612920815]
We present a Mixture of Task-Aware Experts Network for Machine Reading Comprehension on a relatively small dataset.
We focus on the issue of common-sense learning, enforcing common ground knowledge.
We take inspiration from recent advances in multitask and transfer learning.
arXiv Detail & Related papers (2022-10-04T17:13:41Z)
- Set-based Meta-Interpolation for Few-Task Meta-Learning [79.4236527774689]
We propose a novel domain-agnostic task augmentation method, Meta-Interpolation, to densify the meta-training task distribution.
We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning various domains.
arXiv Detail & Related papers (2022-05-20T06:53:03Z)
- Leveraging convergence behavior to balance conflicting tasks in multi-task learning [3.6212652499950138]
Multi-Task Learning uses correlated tasks to improve generalization performance.
Tasks often conflict with each other, which makes it challenging to define how the gradients of multiple tasks should be combined.
We propose a method that takes into account the temporal behaviour of the gradients to create a dynamic bias that adjusts the importance of each task during backpropagation (sketched after this entry).
arXiv Detail & Related papers (2022-04-14T01:52:34Z)
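The entry above describes weighting per-task gradients with a dynamic, convergence-aware bias. The sketch below shows one plausible reading under stated assumptions: an exponential moving average tracks each task's loss, and tasks whose losses stagnate receive larger weights. The weighting rule and the decay constant `beta` are hypothetical, not the paper's formula.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rule in the spirit of the summary: track each task's
# recent loss trend and up-weight tasks whose losses stagnate, giving
# their gradients more influence in the shared update.
n_tasks, dim = 3, 8
beta = 0.9                       # EMA decay (illustrative choice)
ema_loss = np.ones(n_tasks)      # exponential moving average of losses
params = rng.normal(size=dim)

for step in range(3):
    losses = rng.uniform(0.5, 1.5, size=n_tasks)  # stand-in per-task losses
    grads = rng.normal(size=(n_tasks, dim))       # stand-in per-task gradients

    ema_loss = beta * ema_loss + (1 - beta) * losses
    progress = losses / (ema_loss + 1e-8)  # ratio near 1 -> little recent progress
    weights = progress / progress.sum()    # dynamic per-task importance

    combined = (weights[:, None] * grads).sum(axis=0)  # weighted gradient blend
    params -= 0.1 * combined
    print(f"step {step}: task weights {np.round(weights, 3)}")
```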
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
By meta-learning with task interpolation (MLTI), our approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels (sketched after this entry).
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
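MLTI's core operation, sampling a pair of tasks and interpolating their features and labels, can be sketched as task-level mixup. The Beta-distributed coefficient and the one-hot label blending below are illustrative assumptions; the paper defines the exact interpolation scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_task(n=4, d=3, classes=2):
    # Stand-in few-shot task: features plus one-hot labels.
    x = rng.normal(size=(n, d))
    y = np.eye(classes)[rng.integers(0, classes, size=n)]
    return x, y

def interpolate_tasks(task_a, task_b, alpha=0.5):
    # Mixup-style interpolation of two sampled tasks: blend features
    # and one-hot labels with a Beta-distributed coefficient.
    lam = rng.beta(alpha, alpha)
    (xa, ya), (xb, yb) = task_a, task_b
    return lam * xa + (1 - lam) * xb, lam * ya + (1 - lam) * yb, lam

task_a, task_b = make_task(), make_task()
x_mix, y_mix, lam = interpolate_tasks(task_a, task_b)
print(f"lambda = {lam:.2f}, synthesized task: x{x_mix.shape}, y{y_mix.shape}")
```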
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)