Meta-Learning via Classifier(-free) Guidance
- URL: http://arxiv.org/abs/2210.08942v1
- Date: Mon, 17 Oct 2022 11:09:35 GMT
- Title: Meta-Learning via Classifier(-free) Guidance
- Authors: Elvis Nava, Seijin Kobayashi, Yifei Yin, Robert K. Katzschmann,
Benjamin F. Grewe
- Abstract summary: State-of-the-art meta-learning techniques do not optimize for zero-shot adaptation to unseen tasks.
We propose meta-learning techniques that use natural language guidance to achieve higher zero-shot performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art meta-learning techniques do not optimize for zero-shot
adaptation to unseen tasks, a setting in which humans excel. On the contrary,
meta-learning algorithms learn hyperparameters and weight initializations that
explicitly optimize for few-shot learning performance. In this work, we take
inspiration from recent advances in generative modeling and
language-conditioned image synthesis to propose meta-learning techniques that
use natural language guidance to achieve higher zero-shot performance compared
to the state-of-the-art. We do so by recasting the meta-learning problem as a
multi-modal generative modeling problem: given a task, we consider its adapted
neural network weights and its natural language description as equivalent
multi-modal task representations. We first train an unconditional generative
hypernetwork model to produce neural network weights; then we train a second
"guidance" model that, given a natural language task description, traverses the
hypernetwork latent space to find high-performance task-adapted weights in a
zero-shot manner. We explore two alternative approaches for latent space
guidance: "HyperCLIP"-based classifier guidance and a conditional Hypernetwork
Latent Diffusion Model ("HyperLDM"), which we show to benefit from the
classifier-free guidance technique common in image generation. Finally, we
demonstrate that our approaches outperform existing meta-learning methods with
zero-shot learning experiments on our Meta-VQA dataset, which we specifically
constructed to reflect the multi-modal meta-learning setting.
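The classifier-free guidance rule the abstract alludes to can be sketched as follows. This is a minimal toy illustration of the sampling-time update, not the paper's HyperLDM implementation: the `denoise` function, `guidance_scale` value, and latent shapes are all assumptions standing in for the learned noise predictor over hypernetwork latents.

```python
import numpy as np

def denoise(z, t, cond=None):
    # Toy stand-in for a latent-diffusion noise predictor eps_theta.
    # cond=None corresponds to the unconditional (null-conditioning) pass.
    base = 0.1 * z + 0.01 * t
    if cond is None:
        return base
    return base + 0.05 * cond  # conditioning shifts the prediction

def cfg_eps(z, t, cond, guidance_scale=3.0):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one by a factor w (guidance_scale)."""
    eps_uncond = denoise(z, t, cond=None)
    eps_cond = denoise(z, t, cond=cond)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

z = np.zeros(4)      # hypernetwork latent (toy)
cond = np.ones(4)    # embedded natural-language task description (toy)
eps = cfg_eps(z, t=10, cond=cond, guidance_scale=3.0)
```

With `guidance_scale > 1` the combined prediction overshoots the conditional one, which in image generation trades sample diversity for fidelity to the conditioning signal.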
Related papers
- Meta-Task Prompting Elicits Embeddings from Large Language Models [54.757445048329735]
We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation.
We generate high-quality sentence embeddings from Large Language Models without the need for model fine-tuning.
Our findings suggest a new scaling law, offering a versatile and resource-efficient approach for embedding generation across diverse scenarios.
arXiv Detail & Related papers (2024-02-28T16:35:52Z)
- When Parameter-efficient Tuning Meets General-purpose Vision-language Models [65.19127815275307]
PETAL revolutionizes the training process by requiring only 0.5% of the total parameters, achieved through a unique mode approximation technique.
Our experiments reveal that PETAL not only outperforms current state-of-the-art methods in most scenarios but also surpasses full fine-tuning models in effectiveness.
arXiv Detail & Related papers (2023-12-16T17:13:08Z)
- Model-Based Reinforcement Learning with Multi-Task Offline Pretraining [59.82457030180094]
We present a model-based RL method that learns to transfer potentially useful dynamics and action demonstrations from offline data to a novel task.
The main idea is to use the world models not only as simulators for behavior learning but also as tools to measure the task relevance.
We demonstrate the advantages of our approach compared with the state-of-the-art methods in Meta-World and DeepMind Control Suite.
arXiv Detail & Related papers (2023-06-06T02:24:41Z)
- eP-ALM: Efficient Perceptual Augmentation of Language Models [70.47962271121389]
We propose directing effort toward efficient adaptation of existing models, augmenting Language Models with perception.
Existing approaches for adapting pretrained models for vision-language tasks still rely on several key components that hinder their efficiency.
We show that by freezing more than 99% of total parameters, training only one linear projection layer, and prepending only one trainable token, our approach (dubbed eP-ALM) significantly outperforms other baselines on VQA and Captioning.
arXiv Detail & Related papers (2023-03-20T19:20:34Z)
- Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning [38.37682598345653]
We introduce a multimodal meta-learning approach to bridge the gap between vision and language models.
We define a meta-mapper network, acting as a meta-learner, to efficiently bridge frozen large-scale vision and language models.
We evaluate our approach on recently proposed multimodal few-shot benchmarks, measuring how rapidly the model can bind novel visual concepts to words.
arXiv Detail & Related papers (2023-02-28T17:46:18Z)
- ZhichunRoad at Amazon KDD Cup 2022: MultiTask Pre-Training for E-Commerce Product Search [4.220439000486713]
We propose a robust multilingual model to improve the quality of search results.
In the pre-training stage, we adopt an MLM task, a classification task, and a contrastive learning task.
In the fine-tuning stage, we use confident learning, the exponential moving average method (EMA), adversarial training (FGM), and the regularized dropout strategy (R-Drop).
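Of the fine-tuning techniques listed, the exponential moving average of model weights is the simplest to illustrate. The sketch below is a generic EMA implementation, not code from the paper; the dictionary-of-arrays parameter format and the decay value are illustrative assumptions.

```python
import numpy as np

class EMA:
    """Exponential moving average of model parameters, commonly used
    to stabilize fine-tuning: evaluate with the smoothed `shadow`
    weights rather than the raw optimizer weights."""
    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.shadow = {k: v.copy() for k, v in params.items()}

    def update(self, params):
        # shadow <- decay * shadow + (1 - decay) * current
        for k, v in params.items():
            self.shadow[k] = self.decay * self.shadow[k] + (1 - self.decay) * v

params = {"w": np.array([1.0, 2.0])}
ema = EMA(params, decay=0.9)
params["w"] += 1.0        # one (mock) optimizer step
ema.update(params)
# shadow is now 0.9 * [1, 2] + 0.1 * [2, 3] = [1.1, 2.1]
```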
arXiv Detail & Related papers (2023-01-31T07:31:34Z)
- Neural Routing in Meta Learning [9.070747377130472]
We aim to improve the model performance of the current meta learning algorithms by selectively using only parts of the model conditioned on the input tasks.
In this work, we describe an approach that investigates task-dependent dynamic neuron selection in deep convolutional neural networks (CNNs) by leveraging the scaling factor in the batch normalization layer.
We find that the proposed approach, neural routing in meta learning (NRML), outperforms one of the well-known existing meta learning baselines on few-shot classification tasks.
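The core idea of gating channels by the batch-normalization scaling factor can be sketched as follows. This is a hypothetical illustration of BN-scale-based channel selection, assuming a simple top-k rule on |gamma|; the actual NRML selection criterion, `keep_ratio`, and shapes are assumptions.

```python
import numpy as np

def route_channels(x, gamma, beta, mean, var, keep_ratio=0.5, eps=1e-5):
    """Apply batch norm, then keep only the channels whose BN scale
    |gamma| falls in the top keep_ratio fraction; the rest are zeroed,
    giving a task-dependent subnetwork."""
    # x: (batch, channels); per-channel BN statistics supplied directly.
    x_hat = (x - mean) / np.sqrt(var + eps)
    y = gamma * x_hat + beta
    k = max(1, int(len(gamma) * keep_ratio))
    keep = np.argsort(-np.abs(gamma))[:k]   # channels with largest |gamma|
    mask = np.zeros_like(gamma)
    mask[keep] = 1.0
    return y * mask

x = np.ones((2, 4))
gamma = np.array([0.1, 2.0, 0.5, 1.5])
out = route_channels(x, gamma, beta=np.zeros(4),
                     mean=np.zeros(4), var=np.ones(4), keep_ratio=0.5)
```

The intuition is that a small learned gamma means the channel contributes little, so dropping it yields a cheaper, task-conditioned forward pass.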
arXiv Detail & Related papers (2022-10-14T16:31:24Z)
- Unsupervised Meta-Learning through Latent-Space Interpolation in Generative Models [11.943374020641214]
We describe an approach that generates meta-tasks using generative models.
We find that the proposed approach, LAtent Space Interpolation Unsupervised Meta-learning (LASIUM), outperforms or is competitive with current unsupervised learning baselines.
arXiv Detail & Related papers (2020-06-18T02:10:56Z)
- Pre-training Text Representations as Meta Learning [113.3361289756749]
We introduce a learning algorithm that directly optimizes a model's ability to learn text representations for effective learning of downstream tasks.
We show that there is an intrinsic connection between multi-task pre-training and model-agnostic meta-learning with a sequence of meta-train steps.
arXiv Detail & Related papers (2020-04-12T09:05:47Z)
- Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks [55.66438591090072]
We develop a better understanding of the underlying mechanics of meta-learning and the difference between models trained using meta-learning and models trained classically.
We develop a regularizer which boosts the performance of standard training routines for few-shot classification.
arXiv Detail & Related papers (2020-02-17T03:18:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.