Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning
- URL: http://arxiv.org/abs/2110.03909v1
- Date: Fri, 8 Oct 2021 06:07:21 GMT
- Title: Meta-Learning with Task-Adaptive Loss Function for Few-Shot Learning
- Authors: Sungyong Baik, Janghoon Choi, Heewon Kim, Dohee Cho, Jaesik Min,
Kyoung Mu Lee
- Abstract summary: In few-shot learning scenarios, the challenge is to generalize and perform well on new unseen examples.
We introduce a new meta-learning framework with a loss function that adapts to each task.
Our proposed framework, named Meta-Learning with Task-Adaptive Loss Function (MeTAL), demonstrates effectiveness and flexibility across various domains.
- Score: 50.59295648948287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In few-shot learning scenarios, the challenge is to generalize and perform
well on new unseen examples when only very few labeled examples are available
for each task. Model-agnostic meta-learning (MAML) has gained popularity as
one of the representative few-shot learning methods for its flexibility and
applicability to diverse problems. However, MAML and its variants often resort
to a simple loss function without any auxiliary loss function or regularization
terms that can help achieve better generalization. The problem is that
each application and task may require a different auxiliary loss function,
especially when tasks are diverse and distinct. Instead of attempting to
hand-design an auxiliary loss function for each application and task, we
introduce a new meta-learning framework with a loss function that adapts to
each task. Our proposed framework, named Meta-Learning with Task-Adaptive Loss
Function (MeTAL), demonstrates effectiveness and flexibility across
various domains, such as few-shot classification and few-shot regression.
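
The abstract describes the approach only at a high level; the following is a minimal PyTorch sketch of the general idea, not the authors' implementation. A small loss network (here called `TaskAdaptiveLoss`, a hypothetical name) produces the inner-loop loss from the model's predictions and the support labels, and the outer loop meta-learns both the model initialization and the loss network from an ordinary query-set cross-entropy. Conditioning the learned loss only on predictions and labels is a simplifying assumption; MeTAL conditions its loss function on additional task state.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskAdaptiveLoss(nn.Module):
    """Hypothetical learned loss network: maps (predictions, labels) to a
    scalar. Using only softmax outputs and one-hot labels here is a
    simplification of MeTAL's task-state conditioning."""
    def __init__(self, num_classes: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, logits, labels):
        onehot = F.one_hot(labels, logits.size(-1)).float()
        return self.net(torch.cat([logits.softmax(-1), onehot], -1)).mean()

def inner_adapt(model, loss_net, x_s, y_s, steps=1, lr=0.01):
    """MAML-style inner loop: adapt the model on the support set using the
    learned, task-adaptive loss instead of a fixed hand-designed one."""
    params = dict(model.named_parameters())
    for _ in range(steps):
        logits = torch.func.functional_call(model, params, (x_s,))
        loss = loss_net(logits, y_s)
        grads = torch.autograd.grad(loss, list(params.values()),
                                    create_graph=True)
        params = {n: p - lr * g for (n, p), g in zip(params.items(), grads)}
    return params

# Outer loop for one task: adapt with the learned loss, evaluate with plain
# cross-entropy on the query set, then update both the initialization and
# the loss network's parameters.
num_classes = 5
model = nn.Linear(16, num_classes)
loss_net = TaskAdaptiveLoss(num_classes)
meta_opt = torch.optim.Adam(
    list(model.parameters()) + list(loss_net.parameters()), lr=1e-3)

x_s, y_s = torch.randn(25, 16), torch.randint(0, num_classes, (25,))  # support
x_q, y_q = torch.randn(75, 16), torch.randint(0, num_classes, (75,))  # query

adapted = inner_adapt(model, loss_net, x_s, y_s)
query_logits = torch.func.functional_call(model, adapted, (x_q,))
meta_loss = F.cross_entropy(query_logits, y_q)
meta_opt.zero_grad()
meta_loss.backward()
meta_opt.step()
```

Because the outer objective stays a standard cross-entropy, the learned inner loss is free to take whatever form helps each task generalize.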
Related papers
- Meta-Learning with Heterogeneous Tasks [42.695853959923625]
The paper introduces Heterogeneous Tasks Robust Meta-learning (HeTRoM), together with an efficient iterative optimization algorithm based on bi-level optimization.
Results demonstrate that our method provides flexibility, enabling users to adapt to diverse task settings.
arXiv Detail & Related papers (2024-10-24T16:32:23Z)
- Perturbing the Gradient for Alleviating Meta Overfitting [6.6157730528755065]
This paper proposes a number of solutions to tackle meta-overfitting in few-shot learning settings.
Our proposed approaches demonstrate improved generalization performance compared to state-of-the-art baselines for learning in a non-mutually exclusive task setting.
arXiv Detail & Related papers (2024-05-20T18:04:59Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful even when classification tasks have little or non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
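
The entry above gives no implementation details; the following is a minimal, hypothetical sketch of one way to realize distribution matching between two classification heads on a shared backbone. The `coupler` mapping between label spaces and the KL-based matching term are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Shared backbone with one head per task; all sizes are illustrative.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
head_a = nn.Linear(64, 7)   # task A, e.g. 7 expression classes (assumed)
head_b = nn.Linear(64, 2)   # task B, e.g. a binary attribute (assumed)
coupler = nn.Linear(7, 2)   # learned map between label spaces (assumed)

x = torch.randn(8, 32)
y_a = torch.randint(0, 7, (8,))   # this batch is annotated only for task A

feats = backbone(x)
logits_a, logits_b = head_a(feats), head_b(feats)

# Supervised loss on the task that has labels.
sup = F.cross_entropy(logits_a, y_a)

# Distribution matching: pull task B's predicted distribution toward a
# distribution inferred from task A's predictions, so knowledge flows
# across tasks even with non-overlapping annotations.
p_b_from_a = F.softmax(coupler(logits_a.detach()), dim=-1)
match = F.kl_div(F.log_softmax(logits_b, dim=-1), p_b_from_a,
                 reduction="batchmean")

loss = sup + 0.1 * match   # 0.1 is an arbitrary weighting
loss.backward()
```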
- Task-Distributionally Robust Data-Free Meta-Learning [99.56612787882334]
Data-Free Meta-Learning (DFML) aims to efficiently learn new tasks by leveraging multiple pre-trained models without requiring their original training data.
For the first time, we reveal two major challenges hindering their practical deployment: Task-Distribution Shift (TDS) and Task-Distribution Corruption (TDC).
arXiv Detail & Related papers (2023-11-23T15:46:54Z)
- MetaModulation: Learning Variational Feature Hierarchies for Few-Shot Learning with Fewer Tasks [63.016244188951696]
We propose MetaModulation, a method for few-shot learning with fewer tasks.
We modify parameters at various batch levels to increase the number of meta-training tasks.
We also introduce learning variational feature hierarchies by incorporating variational modulation.
arXiv Detail & Related papers (2023-05-17T15:47:47Z)
- MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models [10.10825306582544]
We propose MEta Loss TRansformer (MELTR), a plug-in module that automatically and non-linearly combines various loss functions to aid learning the target task via auxiliary learning.
For evaluation, we apply our framework to various video foundation models (UniVL, Violet and All-in-one) and show significant performance gains on all four downstream tasks (a toy sketch of the loss-combining idea follows).
arXiv Detail & Related papers (2023-03-23T03:06:44Z)
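
As a toy illustration of a plug-in loss combiner: the sketch below embeds each raw loss value as a token and lets self-attention produce a non-linear combination. Layer sizes are assumptions, and the real MELTR is itself meta-learned via auxiliary learning and bi-level optimization, which this sketch omits.

```python
import torch
import torch.nn as nn

class LossCombiner(nn.Module):
    """Toy MELTR-style combiner: embed each loss value as a token and let
    self-attention produce a non-linear combination of the losses."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.embed = nn.Linear(1, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=2,
                                           dim_feedforward=32,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.out = nn.Linear(dim, 1)

    def forward(self, losses):                        # losses: (num_losses,)
        tokens = self.embed(losses.view(1, -1, 1))    # (1, L, dim)
        return self.out(self.encoder(tokens)).sum()   # scalar total loss

combiner = LossCombiner()
raw = torch.stack([torch.tensor(0.9), torch.tensor(1.7), torch.tensor(0.2)])
total = combiner(raw)   # differentiable w.r.t. the losses and the combiner
total.backward()
```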
- Conflict-Averse Gradient Descent for Multi-task Learning [56.379937772617]
A major challenge in optimizing a multi-task model is conflicting gradients across tasks.
We introduce Conflict-Averse Gradient descent (CAGrad), which minimizes the average loss function while regularizing the update direction to reduce conflict among task gradients.
CAGrad balances the objectives automatically and still provably converges to a minimum of the average loss (a simplified sketch of the update follows).
arXiv Detail & Related papers (2021-10-26T22:03:51Z)
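
Following the published objective, CAGrad chooses simplex weights w over the task gradients to minimize g_w·g_0 + c‖g_0‖‖g_w‖ (with g_0 the average gradient and g_w the weighted combination), then updates along d = g_0 + (c‖g_0‖/‖g_w‖)·g_w. Below is a simplified two-task sketch; the grid search over w stands in for the small constrained optimization the paper solves with a proper solver.

```python
import torch

def cagrad_two_tasks(g1, g2, c: float = 0.5, steps: int = 101):
    """Simplified two-task CAGrad: grid-search simplex weights w to
    minimize g_w . g0 + c*||g0||*||g_w||, then form the conflict-averse
    update d = g0 + (c*||g0|| / ||g_w||) * g_w."""
    g0 = 0.5 * (g1 + g2)                  # average gradient
    phi = c * g0.norm()
    best_w, best_obj = 0.0, float("inf")
    for w in torch.linspace(0, 1, steps):
        gw = w * g1 + (1 - w) * g2
        obj = torch.dot(gw, g0) + phi * gw.norm()
        if obj.item() < best_obj:
            best_obj, best_w = obj.item(), w.item()
    gw = best_w * g1 + (1 - best_w) * g2
    return g0 + (phi / (gw.norm() + 1e-8)) * gw

# Toy usage with conflicting task gradients:
g1 = torch.tensor([1.0, 0.2])
g2 = torch.tensor([-0.8, 0.3])
d = cagrad_two_tasks(g1, g2)   # update direction; apply as params -= lr * d
```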
- ST-MAML: A Stochastic-Task based Method for Task-Heterogeneous Meta-Learning [12.215288736524268]
This paper proposes a novel method, ST-MAML, that empowers model-agnostic meta-learning (MAML) to learn from multiple task distributions.
We demonstrate that ST-MAML matches or outperforms the state-of-the-art on two few-shot image classification tasks, one curve regression benchmark, one image completion problem, and a real-world temperature prediction application.
arXiv Detail & Related papers (2021-09-27T18:54:50Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
Our meta-learning with task interpolation (MLTI) approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels (a minimal sketch follows this entry).
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
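
A minimal sketch of the task-interpolation idea described in the entry above: mixup-style interpolation of the features and one-hot labels of two sampled tasks to synthesize a new task. Mixing raw feature vectors is a simplification; MLTI interpolates hidden representations inside the network.

```python
import torch
import torch.nn.functional as F

def interpolate_tasks(feat_a, y_a, feat_b, y_b, num_classes, alpha=0.5):
    """MLTI-style sketch: blend two tasks' support sets into a new task.
    Returns mixed features and soft labels for training, e.g. with a
    soft cross-entropy."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    feats = lam * feat_a + (1 - lam) * feat_b
    oh_a = F.one_hot(y_a, num_classes).float()
    oh_b = F.one_hot(y_b, num_classes).float()
    labels = lam * oh_a + (1 - lam) * oh_b
    return feats, labels

# Two sampled tasks with matching support-set shapes:
xa, ya = torch.randn(25, 64), torch.randint(0, 5, (25,))
xb, yb = torch.randn(25, 64), torch.randint(0, 5, (25,))
x_new, y_new = interpolate_tasks(xa, ya, xb, yb, num_classes=5)
```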