Knowledge-Aware Meta-learning for Low-Resource Text Classification
- URL: http://arxiv.org/abs/2109.04707v1
- Date: Fri, 10 Sep 2021 07:20:43 GMT
- Title: Knowledge-Aware Meta-learning for Low-Resource Text Classification
- Authors: Huaxiu Yao, Yingxin Wu, Maruan Al-Shedivat, Eric P. Xing
- Abstract summary: This paper studies a low-resource text classification problem and bridges the gap between meta-training and meta-testing tasks.
We propose KGML, which introduces an additional representation for each sentence, learned from an extracted sentence-specific knowledge graph.
- Score: 87.89624590579903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning has achieved great success in leveraging historically learned knowledge to facilitate the learning process of new tasks. However, merely learning from historical tasks, as current meta-learning algorithms do, may not generalize well to testing tasks that are not well supported by the training tasks. This paper studies a low-resource text classification problem and bridges the gap between meta-training and meta-testing tasks by leveraging external knowledge bases. Specifically, we propose KGML, which introduces an additional representation for each sentence, learned from an extracted sentence-specific knowledge graph. Extensive experiments on three datasets demonstrate the effectiveness of KGML under both supervised and unsupervised adaptation settings.
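As a concrete illustration of the mechanism the abstract describes, here is a minimal sketch that fuses a standard sentence encoding with a second representation built from the entities of a sentence-specific knowledge graph. Everything below (the bag-of-words stand-in encoder, the mean-pooled entity embeddings, the fusion by concatenation, and all names and dimensions) is an illustrative assumption, not the paper's exact architecture.

```python
# Hedged sketch of the KGML idea: augment each sentence representation with a
# second representation learned from a sentence-specific knowledge graph.
# The encoders, the mean-pooled graph encoding, and the fusion by
# concatenation are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class KnowledgeAwareClassifier(nn.Module):
    def __init__(self, vocab_size, entity_vocab_size, dim, num_classes):
        super().__init__()
        self.sent_encoder = nn.EmbeddingBag(vocab_size, dim)    # stand-in for a BERT-style encoder
        self.entity_emb = nn.Embedding(entity_vocab_size, dim)  # KG entity embeddings
        self.graph_proj = nn.Linear(dim, dim)                   # stand-in for a GNN layer
        self.classifier = nn.Linear(2 * dim, num_classes)       # acts on the fused representation

    def forward(self, token_ids, entity_ids):
        h_sent = self.sent_encoder(token_ids)                   # (B, dim) text representation
        # Encode the sentence-specific knowledge graph: here, simply a mean
        # over the embeddings of the entities linked to the sentence.
        h_graph = self.graph_proj(self.entity_emb(entity_ids).mean(dim=1))
        return self.classifier(torch.cat([h_sent, h_graph], dim=-1))

model = KnowledgeAwareClassifier(vocab_size=30522, entity_vocab_size=10000,
                                 dim=64, num_classes=5)
tokens = torch.randint(0, 30522, (8, 32))    # batch of 8 sentences, 32 tokens each
entities = torch.randint(0, 10000, (8, 4))   # 4 linked KG entities per sentence
logits = model(tokens, entities)             # (8, 5) class logits
```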
Related papers
- Meta-Learning and representation learner: A short theoretical note [0.0]
Meta-learning is a subfield of machine learning where the goal is to develop models and algorithms that can learn from various tasks.
Unlike traditional machine learning methods, which focus on learning a specific task, meta-learning aims to leverage experience from previous tasks to enhance future learning.
arXiv Detail & Related papers (2024-07-04T23:47:10Z)
- Informed Meta-Learning [55.2480439325792]
Meta-learning and informed ML stand out as two approaches for incorporating prior knowledge into ML pipelines.
We formalise a hybrid paradigm, informed meta-learning, facilitating the incorporation of priors from unstructured knowledge representations.
We demonstrate the potential benefits of informed meta-learning in improving data efficiency, robustness to observational noise, and robustness under task distribution shifts.
arXiv Detail & Related papers (2024-02-25T15:08:37Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks (an illustrative sketch of such an objective follows this entry).
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
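Since the entry above names the objective but not its form, here is a hedged sketch of what a multi-task contrastive objective can look like: a shared encoder produces embeddings for each task, each task contributes an in-batch InfoNCE loss, and the losses are summed so the encoder shares knowledge across tasks. The temperature, the use of in-batch negatives, and all function names below are assumptions, not SciMult's exact formulation.

```python
# Hedged sketch of a multi-task contrastive objective: one InfoNCE loss per
# task, summed across tasks. Temperature and in-batch negatives are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def info_nce(queries, keys, temperature=0.07):
    """In-batch contrastive loss: queries[i] should match keys[i]."""
    q = F.normalize(queries, dim=-1)
    k = F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(q.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

def multi_task_contrastive_loss(task_batches):
    """Sum InfoNCE over several literature-understanding tasks."""
    return sum(info_nce(q, k) for q, k in task_batches)

# Usage: in practice the embeddings would come from one shared pre-trained LM;
# random tensors stand in here for three tasks of 16 query/key pairs each.
batches = [(torch.randn(16, 128), torch.randn(16, 128)) for _ in range(3)]
loss = multi_task_contrastive_loss(batches)
```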
- Contrastive Knowledge-Augmented Meta-Learning for Few-Shot Classification [28.38744876121834]
We introduce CAML (Contrastive Knowledge-Augmented Meta Learning), a novel approach for knowledge-enhanced few-shot learning.
We evaluate the performance of CAML in different few-shot learning scenarios.
arXiv Detail & Related papers (2022-07-25T17:01:29Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
Our proposed meta-learning with task interpolation (MLTI) effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels (an illustrative sketch follows this entry).
In experiments on eight datasets from diverse domains, the proposed MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms state-of-the-art strategies.
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
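A minimal sketch of the task-interpolation idea described in the entry above: sample a pair of tasks, draw a mixing coefficient from a Beta distribution, and interpolate features and one-hot labels mixup-style to synthesize a new task. The Beta(2, 2) prior and the raw-feature mixing are assumptions for illustration; the paper also considers interpolating hidden representations.

```python
# Hedged sketch of MLTI-style task interpolation: synthesize a new
# meta-training task by mixing the features and (one-hot) labels of a
# randomly sampled pair of tasks.
import numpy as np

def interpolate_tasks(task_a, task_b, num_classes, alpha=2.0):
    xa, ya = task_a                          # features (N, D), integer labels (N,)
    xb, yb = task_b
    lam = np.random.beta(alpha, alpha)       # mixup coefficient in (0, 1)
    onehot = lambda y: np.eye(num_classes)[y]
    x_new = lam * xa + (1 - lam) * xb                   # interpolated features
    y_new = lam * onehot(ya) + (1 - lam) * onehot(yb)   # soft interpolated labels
    return x_new, y_new

# Usage: two assumed 5-way tasks with 10 examples of 64-dim features each.
rng = np.random.default_rng(0)
task_a = (rng.normal(size=(10, 64)), rng.integers(0, 5, size=10))
task_b = (rng.normal(size=(10, 64)), rng.integers(0, 5, size=10))
x_mix, y_mix = interpolate_tasks(task_a, task_b, num_classes=5)
```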
- Online Structured Meta-learning [137.48138166279313]
Current online meta-learning algorithms are limited to learning a globally shared meta-learner.
We propose an online structured meta-learning (OSML) framework to overcome this limitation.
Experiments on three datasets demonstrate the effectiveness and interpretability of our proposed framework.
arXiv Detail & Related papers (2020-10-22T09:10:31Z)
- Information-Theoretic Generalization Bounds for Meta-Learning and Applications [42.275148861039895]
A key performance measure for meta-learning is the meta-generalization gap.
This paper presents novel information-theoretic upper bounds on the meta-generalization gap (an illustrative definition follows this entry).
arXiv Detail & Related papers (2020-05-09T05:48:01Z)
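To make the quantity named above concrete, the following is an illustrative write-up of the meta-generalization gap under assumed notation (not copied from the paper): a task distribution P_T, meta-training tasks T_1, ..., T_N with datasets D_1, ..., D_N, and meta-learner output u.

```latex
% Illustrative definition under assumed notation, not the paper's exact symbols.
% L_T(u): population loss on task T of the base learner configured by u;
% \hat{L}_{T_i}(u, D_i): empirical loss on the meta-training dataset D_i.
\Delta(u) \;=\; \mathbb{E}_{T \sim P_T}\!\left[ L_T(u) \right]
\;-\; \frac{1}{N} \sum_{i=1}^{N} \hat{L}_{T_i}(u, D_i)
```

Information-theoretic bounds of this kind typically control the expected gap via mutual-information terms between the meta-learner output and the meta-training data, shrinking as the number of tasks N grows.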
- Automated Relational Meta-learning [95.02216511235191]
We propose an automated relational meta-learning (ARML) framework that automatically extracts cross-task relations and constructs a meta-knowledge graph.
We conduct extensive experiments on 2D toy regression and few-shot image classification, and the results demonstrate the superiority of ARML over state-of-the-art baselines.
arXiv Detail & Related papers (2020-01-03T07:02:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.