Automated Relational Meta-learning
- URL: http://arxiv.org/abs/2001.00745v1
- Date: Fri, 3 Jan 2020 07:02:25 GMT
- Title: Automated Relational Meta-learning
- Authors: Huaxiu Yao, Xian Wu, Zhiqiang Tao, Yaliang Li, Bolin Ding, Ruirui Li,
Zhenhui Li
- Abstract summary: We propose an automated relational meta-learning framework that automatically extracts cross-task relations and constructs a meta-knowledge graph.
We conduct extensive experiments on 2D toy regression and few-shot image classification, and the results demonstrate the superiority of ARML over state-of-the-art baselines.
- Score: 95.02216511235191
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to learn efficiently from small amounts of data on new tasks,
meta-learning transfers knowledge learned from previous tasks to the new ones.
However, a critical challenge in meta-learning is task heterogeneity, which
traditional globally shared meta-learning methods cannot handle well. In
addition, current task-specific meta-learning methods may either suffer from
hand-crafted structure design or lack the capability to capture complex
relations between tasks. In this paper, motivated by the way knowledge is
organized in knowledge bases, we propose an automated relational meta-learning
(ARML) framework that automatically extracts cross-task relations and
constructs a meta-knowledge graph. When a new task arrives, the framework
quickly finds the most relevant structure and tailors the learned structural
knowledge to the meta-learner. As a result, the proposed framework not only
addresses the challenge of task heterogeneity via a learned meta-knowledge
graph, but also increases model interpretability. We conduct extensive
experiments on 2D toy regression and few-shot image classification, and the
results demonstrate the superiority of ARML over state-of-the-art baselines.
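The abstract describes the mechanism only at a high level. As a rough illustration, the sketch below queries a small meta-knowledge graph with a task embedding and uses the retrieved structure to modulate a globally shared initialization; the attention lookup, the gating-style modulation, and all names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def task_specific_prior(task_embedding, graph_vertices, theta0, W_modulate):
    """Query a (hypothetical) meta-knowledge graph with a task embedding and
    use the retrieved structure to tailor a shared initialization."""
    scores = graph_vertices @ task_embedding   # relevance of each vertex, (K,)
    attn = softmax(scores)                     # "find the most relevant structure"
    aggregated = attn @ graph_vertices         # aggregated structure knowledge, (d,)
    # Tailor the shared initialization to the task (assumed gating-style form).
    return theta0 * (1.0 + np.tanh(W_modulate @ aggregated))

# Toy usage with illustrative dimensions: 8-dim embeddings, 4 vertices, 16 params.
rng = np.random.default_rng(0)
prior = task_specific_prior(rng.normal(size=8),
                            rng.normal(size=(4, 8)),
                            rng.normal(size=16),
                            rng.normal(size=(16, 8)))
```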
Related papers
- Concept Discovery for Fast Adaptation [42.81705659613234]
We introduce concept discovery to the few-shot learning problem, where we achieve more effective adaptation by meta-learning the structure among the data features.
Our proposed method, Concept-Based Model-Agnostic Meta-Learning (COMAML), achieves consistent improvements on structured data for both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-01-19T02:33:58Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
By meta-learning with task interpolation (MLTI), our approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels.
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
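The interpolation step described above admits a compact sketch. The following mixup-style routine blends the features and labels of two sampled tasks; the Beta-distributed coefficient, the dict-based task format, and interpolating raw inputs rather than hidden features are simplifying assumptions of this sketch, not details from the paper.

```python
import numpy as np

def interpolate_tasks(task_a, task_b, alpha=0.5, rng=None):
    """Create a new task by interpolating two existing ones (mixup-style).

    Each task is a dict with 'X' (n, d) features and 'Y' (n, c) one-hot labels;
    both tasks are assumed to have matching shapes for simplicity.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)   # interpolation coefficient in [0, 1]
    return {
        "X": lam * task_a["X"] + (1 - lam) * task_b["X"],
        "Y": lam * task_a["Y"] + (1 - lam) * task_b["Y"],
    }

# Densify a small task distribution by sampling a pair and interpolating.
rng = np.random.default_rng(0)
tasks = [{"X": rng.normal(size=(10, 4)),
          "Y": np.eye(2)[rng.integers(0, 2, size=10)]} for _ in range(5)]
a, b = rng.choice(len(tasks), size=2, replace=False)
new_task = interpolate_tasks(tasks[a], tasks[b], rng=rng)
```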
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
- Online Structured Meta-learning [137.48138166279313]
Current online meta-learning algorithms are limited to learning a globally shared meta-learner.
We propose an online structured meta-learning (OSML) framework to overcome this limitation.
Experiments on three datasets demonstrate the effectiveness and interpretability of our proposed framework.
arXiv Detail & Related papers (2020-10-22T09:10:31Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
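A minimal sketch of the concept-dimension idea, assuming each concept is a fixed mask over feature dimensions and that per-concept prototype distances are summed into a class score; the masks and the aggregation rule here are hypothetical simplifications, not COMET's actual architecture.

```python
import numpy as np

def concept_prototype_predict(query, support, support_labels, concept_masks):
    """Classify a query by summing concept-wise prototype distances."""
    classes = np.unique(support_labels)
    scores = np.zeros(len(classes))
    for mask in concept_masks:                 # one simple metric per concept
        for i, c in enumerate(classes):
            proto = support[support_labels == c][:, mask].mean(axis=0)
            scores[i] -= np.sum((query[mask] - proto) ** 2)
    return classes[np.argmax(scores)]

# Toy usage: 6-dim features, two hypothetical concepts over dims {0,1,2}, {3,4,5}.
rng = np.random.default_rng(0)
support = rng.normal(size=(10, 6))
labels = np.array([0] * 5 + [1] * 5)
masks = [np.arange(0, 3), np.arange(3, 6)]
pred = concept_prototype_predict(rng.normal(size=6), support, labels, masks)
```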
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
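The setting analyzed here, a shared linear feature map with task-specific heads, can be sketched with generic alternating least squares; this is a stand-in for illustration under the model y = x @ B @ w_t, not the authors' provable algorithm or estimator.

```python
import numpy as np

def fit_shared_representation(tasks, r, iters=50, seed=0):
    """Alternating least squares for y = x @ B @ w_t with B (d, r) shared
    across tasks and w_t (r,) task-specific. Each task is a pair (X, y)."""
    d = tasks[0][0].shape[1]
    rng = np.random.default_rng(seed)
    B = np.linalg.qr(rng.normal(size=(d, r)))[0]       # orthonormal init
    for _ in range(iters):
        # Fix B: each task's head is an ordinary least-squares fit.
        ws = [np.linalg.lstsq(X @ B, y, rcond=None)[0] for X, y in tasks]
        # Fix heads: solve jointly for vec(B) across all tasks.
        A = np.vstack([np.kron(X, w[None, :]) for (X, _), w in zip(tasks, ws)])
        b = np.concatenate([y for _, y in tasks])
        B = np.linalg.lstsq(A, b, rcond=None)[0].reshape(d, r)
    return B, ws

# Demo: eight related tasks sharing a planted rank-2 representation.
rng = np.random.default_rng(1)
B_true = np.linalg.qr(rng.normal(size=(6, 2)))[0]
tasks = [(X, X @ B_true @ rng.normal(size=2) + 0.01 * rng.normal(size=30))
         for X in (rng.normal(size=(30, 6)) for _ in range(8))]
B_hat, _ = fit_shared_representation(tasks, r=2)
```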
arXiv Detail & Related papers (2020-02-26T18:21:34Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
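The reduction can be made concrete with a toy sketch: summarize each task's dataset as a fixed-length feature, pair it with that task's target model as a (feature, label) sample, and fit an ordinary supervised learner over tasks. The per-class-mean dataset summary and the ridge-regression learner below are arbitrary illustrative choices, not the paper's construction.

```python
import numpy as np

def dataset_feature(X, Y):
    """Set-level summary of a task's data: per-class feature means, concatenated.
    A crude stand-in for a richer dataset embedding; purely illustrative."""
    return np.concatenate([X[Y == c].mean(axis=0) for c in (0, 1)])

def make_task(rng, d=3, n=20):
    w = rng.normal(size=d)                      # the task's "target model"
    X = rng.normal(size=(n, d))
    Y = (X @ w > np.median(X @ w)).astype(int)  # balanced labels by construction
    return X, Y, w

# Meta-training set: (feature, label) = (dataset summary, target model weights).
rng = np.random.default_rng(0)
pairs = [make_task(rng) for _ in range(40)]
F = np.stack([dataset_feature(X, Y) for X, Y, _ in pairs])
M = np.stack([w for _, _, w in pairs])

# Plain supervised learning over tasks: ridge regression from datasets to models.
W = np.linalg.solve(F.T @ F + 1e-2 * np.eye(F.shape[1]), F.T @ M)

# Meta-test: map a new task's dataset directly to predicted model parameters.
X_new, Y_new, _ = make_task(rng)
w_pred = dataset_feature(X_new, Y_new) @ W
```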
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.