Contrastive Knowledge-Augmented Meta-Learning for Few-Shot
Classification
- URL: http://arxiv.org/abs/2207.12346v1
- Date: Mon, 25 Jul 2022 17:01:29 GMT
- Title: Contrastive Knowledge-Augmented Meta-Learning for Few-Shot
Classification
- Authors: Rakshith Subramanyam, Mark Heimann, Jayram Thathachar, Rushil Anirudh,
Jayaraman J. Thiagarajan
- Abstract summary: We introduce CAML (Contrastive Knowledge-Augmented Meta Learning), a novel approach for knowledge-enhanced few-shot learning.
We evaluate the performance of CAML in different few-shot learning scenarios.
- Score: 28.38744876121834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model agnostic meta-learning algorithms aim to infer priors from several
observed tasks that can then be used to adapt to a new task with few examples.
Given the inherent diversity of tasks arising in existing benchmarks, recent
methods use separate, learnable structure, such as hierarchies or graphs, for
enabling task-specific adaptation of the prior. While these approaches have
produced significantly better meta learners, our goal is to improve their
performance when the heterogeneous task distribution contains challenging
distribution shifts and semantic disparities. To this end, we introduce CAML
(Contrastive Knowledge-Augmented Meta Learning), a novel approach for
knowledge-enhanced few-shot learning that evolves a knowledge graph to
effectively encode historical experience, and employs a contrastive
distillation strategy to leverage the encoded knowledge for task-aware
modulation of the base learner. Using standard benchmarks, we evaluate the
performance of CAML in different few-shot learning scenarios. In addition to
the standard few-shot task adaptation, we also consider the more challenging
multi-domain task adaptation and few-shot dataset generalization settings in
our empirical studies. Our results show that CAML consistently outperforms the
best known approaches and achieves improved generalization.
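The abstract names two mechanisms: a knowledge graph that encodes historical experience, and a contrastive distillation strategy used for task-aware modulation of the base learner. The sketch below is only an illustration of what such a pipeline could look like, assuming FiLM-style feature modulation and an InfoNCE-style distillation loss; the module names, shapes, and loss form are assumptions for illustration, not the paper's actual design.
```python
# Hedged sketch: task-aware modulation + contrastive distillation.
# All names, shapes, and the FiLM/InfoNCE choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModulatedBaseLearner(nn.Module):
    """Base feature extractor whose features are scaled and shifted by a
    task embedding (FiLM-style modulation; assumed, not from the paper)."""

    def __init__(self, in_dim=784, feat_dim=64, task_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Maps a knowledge-derived task embedding to per-feature (gamma, beta).
        self.film = nn.Linear(task_dim, 2 * feat_dim)

    def forward(self, x, task_emb):
        h = self.encoder(x)
        gamma, beta = self.film(task_emb).chunk(2, dim=-1)
        return gamma * h + beta


def contrastive_distillation_loss(student_emb, teacher_emb, temperature=0.1):
    """InfoNCE-style loss aligning each task's student embedding with its own
    knowledge-graph (teacher) embedding against the other tasks in the batch."""
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1)
    logits = s @ t.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(s.size(0))          # positives on the diagonal
    return F.cross_entropy(logits, targets)


# Toy usage: a batch of 4 tasks, each with 5 support examples.
learner = ModulatedBaseLearner()
x = torch.randn(4, 5, 784)
task_emb = torch.randn(4, 32)                  # stand-in for KG-derived embeddings
feats = learner(x, task_emb.unsqueeze(1))      # modulated features per task
kg_emb = torch.randn(4, 32)                    # stand-in teacher embeddings
proj = nn.Linear(32, 32)                       # assumed student projection head
loss = contrastive_distillation_loss(proj(task_emb), kg_emb)
# `feats` would feed a few-shot classifier head; `loss` would be added to the
# meta-training objective.
```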
Related papers
- MetaModulation: Learning Variational Feature Hierarchies for Few-Shot
Learning with Fewer Tasks [63.016244188951696]
We propose MetaModulation, a method for few-shot learning with fewer tasks.
We modulate parameters at various batch levels to increase the number of meta-training tasks.
We also introduce learning variational feature hierarchies by incorporating variational MetaModulation.
arXiv Detail & Related papers (2023-05-17T15:47:47Z) - On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning [71.55412580325743]
We show that multi-task pretraining with fine-tuning on new tasks performs as well as, or better than, meta-pretraining with meta test-time adaptation.
This is encouraging for future research, as multi-task pretraining tends to be simpler and computationally cheaper than meta-RL.
arXiv Detail & Related papers (2022-06-07T13:24:00Z) - Meta Navigator: Search for a Good Adaptation Policy for Few-shot
Learning [113.05118113697111]
Few-shot learning aims to adapt knowledge learned from previous tasks to novel tasks with only a limited amount of labeled data.
Research literature on few-shot learning exhibits great diversity, while different algorithms often excel at different few-shot learning scenarios.
We present Meta Navigator, a framework that addresses this limitation by searching for a good adaptation policy at a higher level.
arXiv Detail & Related papers (2021-09-13T07:20:01Z) - Knowledge-Aware Meta-learning for Low-Resource Text Classification [87.89624590579903]
This paper studies a low-resource text classification problem and bridges the gap between meta-training and meta-testing tasks.
We propose KGML, which introduces an additional representation for each sentence, learned from an extracted sentence-specific knowledge graph.
arXiv Detail & Related papers (2021-09-10T07:20:43Z) - A Channel Coding Benchmark for Meta-Learning [21.2424398453955]
Several important issues in meta-learning have proven hard to study thus far.
We propose the channel coding problem as a benchmark for meta-learning.
Going forward, this benchmark provides a tool for the community to study the capabilities and limitations of meta-learning.
arXiv Detail & Related papers (2021-07-15T19:37:43Z) - Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
Our approach, meta-learning with task interpolation (MLTI), effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels (see the sketch after this list).
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
arXiv Detail & Related papers (2021-06-04T20:15:34Z) - Revisiting Unsupervised Meta-Learning: Amplifying or Compensating for
the Characteristics of Few-Shot Tasks [30.893785366366078]
We develop a practical approach towards few-shot image classification, where a visual recognition system is constructed with limited data.
We find that the base class set labels are not necessary, and discriminative embeddings could be meta-learned in an unsupervised manner.
Experiments on few-shot learning benchmarks verify that our approaches outperform previous methods by a margin of 4-10%.
arXiv Detail & Related papers (2020-11-30T10:08:35Z) - Meta-learning the Learning Trends Shared Across Tasks [123.10294801296926]
Gradient-based meta-learning algorithms excel at quick adaptation to new tasks with limited data.
Existing meta-learning approaches depend only on information from the current task during adaptation.
We propose a 'Path-aware' model-agnostic meta-learning approach.
arXiv Detail & Related papers (2020-10-19T08:06:47Z) - Structured Prediction for Conditional Meta-Learning [44.30857707980074]
We propose a new perspective on conditional meta-learning via structured prediction.
We derive task-adaptive structured meta-learning (TASML), a principled framework that yields task-specific objective functions.
Empirically, we show that TASML improves the performance of existing meta-learning models, and outperforms the state-of-the-art on benchmark datasets.
arXiv Detail & Related papers (2020-02-20T15:24:15Z)
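For the MLTI entry above, the following is a minimal sketch of what interpolating the features and labels of two sampled tasks could look like, assuming mixup-style convex combinations of features and one-hot labels; the Beta mixing prior and the shapes are illustrative assumptions, not the paper's exact procedure.
```python
# Hedged sketch of mixup-style task interpolation between two sampled tasks.
import numpy as np


def interpolate_tasks(task_a, task_b, alpha=0.5, rng=None):
    """Return a pseudo-task whose features and one-hot labels are a convex
    combination of two sampled tasks (Beta(alpha, alpha) mixing is assumed)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * task_a["x"] + (1.0 - lam) * task_b["x"]
    y = lam * task_a["y"] + (1.0 - lam) * task_b["y"]
    return {"x": x, "y": y, "lam": lam}


# Toy usage: two 5-way, 1-shot tasks with 16-dim features and one-hot labels.
rng = np.random.default_rng(0)
task_a = {"x": rng.normal(size=(5, 16)), "y": np.eye(5)}
task_b = {"x": rng.normal(size=(5, 16)), "y": np.eye(5)}
mixed = interpolate_tasks(task_a, task_b, rng=rng)
print(mixed["x"].shape, mixed["lam"])
```
The interpolated pseudo-task would simply be added to the pool of meta-training tasks; the base meta-learning algorithm is left unchanged.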
This list is automatically generated from the titles and abstracts of the papers on this site.