Unsupervised Representation Learning to Aid Semi-Supervised Meta Learning
- URL: http://arxiv.org/abs/2310.13085v1
- Date: Thu, 19 Oct 2023 18:25:22 GMT
- Title: Unsupervised Representation Learning to Aid Semi-Supervised Meta Learning
- Authors: Atik Faysal, Mohammad Rostami, Huaxia Wang, Avimanyu Sahoo, and Ryan Antle
- Abstract summary: We propose a one-shot unsupervised meta-learning method to learn the latent representation of training samples.
A temperature-scaled cross-entropy loss is used in the inner loop of meta-learning to prevent overfitting.
The proposed method is model-agnostic and can help any meta-learning model improve its accuracy.
- Score: 16.534014215010757
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning, or meta-learning, addresses the data scarcity
problem in machine learning. Traditionally, supervised learning requires a
large number of labeled training samples. To address this issue, we propose a
one-shot unsupervised meta-learning method to learn the latent representation
of the training samples. We use augmented samples as the query set during the
training phase of the unsupervised meta-learning. A temperature-scaled
cross-entropy loss is used in the inner loop of meta-learning to prevent
overfitting during unsupervised learning. The learned parameters from this
step are transferred to the targeted supervised meta-learning in a
transfer-learning fashion, providing an initialization for fast adaptation
with improved accuracy. The proposed method is model-agnostic and can help any
meta-learning model improve its accuracy. We use model-agnostic meta-learning
(MAML) and relation network (RN) on the Omniglot and mini-ImageNet datasets to
demonstrate the performance of the proposed method. Furthermore, a
meta-learning model with the proposed initialization can achieve satisfactory
accuracy with significantly fewer training samples.
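As a concrete illustration of the recipe above, the sketch below implements a
temperature-scaled cross-entropy loss inside a MAML-style inner-loop update.
The temperature, learning rate, and pseudo-labeling scheme are illustrative
assumptions, not the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def temperature_scaled_ce(logits, targets, temperature=4.0):
    # Dividing logits by a temperature > 1 softens the predicted
    # distribution, discouraging overconfident inner-loop updates
    # during unsupervised training. The value 4.0 is an assumption.
    return F.cross_entropy(logits / temperature, targets)

def inner_loop_adapt(model, support_x, pseudo_labels, lr=0.01):
    # One MAML-style inner-loop step. In the unsupervised one-shot
    # phase, each sample can act as its own class, with augmented
    # views of it serving as the query set, as the abstract describes.
    loss = temperature_scaled_ce(model(support_x), pseudo_labels)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    # Return "fast weights"; create_graph=True lets the outer
    # meta-update differentiate through this adaptation step.
    return [p - lr * g for p, g in zip(model.parameters(), grads)]
```

The parameters learned in this unsupervised phase would then initialize the
supervised meta-learner in a transfer-learning fashion.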
Related papers
- What Do Learning Dynamics Reveal About Generalization in LLM Reasoning? [83.83230167222852]
We find that a model's generalization behavior can be effectively characterized by a training metric we call pre-memorization train accuracy.
By connecting a model's learning behavior to its generalization, pre-memorization train accuracy can guide targeted improvements to training strategies.
arXiv Detail & Related papers (2024-11-12T09:52:40Z)
- Learning to Unlearn for Robust Machine Unlearning [6.488418950340473]
We introduce a novel Learning-to-Unlearn (LTU) framework to optimize the unlearning process.
LTU includes a meta-optimization scheme that helps models effectively preserve generalizable knowledge.
We also introduce a Gradient Harmonization strategy to align the optimization trajectories for remembering and forgetting (see the projection sketch after this list).
arXiv Detail & Related papers (2024-07-15T07:36:00Z)
- Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning [119.70303730341938]
We propose ePisode cUrriculum inveRsion (ECI) during data-free meta training and invErsion calibRation following inner loop (ICFIL) during meta testing.
ECI adaptively increases the difficulty level of pseudo episodes according to the real-time feedback of the meta model.
We formulate the optimization process of meta-training with ECI as an end-to-end adversarial process.
arXiv Detail & Related papers (2023-03-20T15:10:41Z)
- Meta-Learning with Self-Improving Momentum Target [72.98879709228981]
We propose Self-improving Momentum Target (SiMT) to improve the performance of a meta-learner.
SiMT generates the target model by adapting from the temporal ensemble of the meta-learner.
We show that SiMT brings a significant performance gain when combined with a wide range of meta-learning methods (see the momentum-target sketch after this list).
arXiv Detail & Related papers (2022-10-11T06:45:15Z)
- Generating meta-learning tasks to evolve parametric loss for classification learning [1.1355370218310157]
In existing meta-learning approaches, learning tasks for training meta-models are usually collected from public datasets.
We propose a meta-learning approach that uses randomly generated meta-learning tasks to obtain a parametric loss for classification learning on big data.
arXiv Detail & Related papers (2021-11-20T13:07:55Z)
- On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning [100.14809391594109]
Model-agnostic meta-learning (MAML) has emerged as one of the most successful meta-learning techniques in few-shot learning.
Despite the generalization power of the meta-model, it remains unclear how adversarial robustness can be maintained by MAML in few-shot learning.
We propose a general but easily optimized robustness-regularized meta-learning framework, which allows the use of unlabeled data augmentation, fast adversarial attack generation, and computationally light fine-tuning (see the FGSM sketch after this list).
arXiv Detail & Related papers (2021-02-20T22:03:04Z)
- A Primal-Dual Subgradient Approach for Fair Meta Learning [23.65344558042896]
Few-shot meta-learning is well known for its fast adaptation capability and its accuracy generalization to unseen tasks.
We propose a Primal-Dual Fair Meta-learning framework, namely PDFM, which learns to train fair machine learning models using only a few examples.
arXiv Detail & Related papers (2020-09-26T19:47:38Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth-order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses (see the zeroth-order sketch after this list).
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
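Illustrative sketches for some of the entries above follow. First, for the
Learning-to-Unlearn entry: aligning the optimization trajectories for
remembering and forgetting is often done by gradient projection. A minimal,
hypothetical harmonization step in that spirit (not the paper's exact
algorithm):

```python
import torch

def harmonize(g_remember: torch.Tensor, g_forget: torch.Tensor) -> torch.Tensor:
    # If the two objectives' gradients conflict (negative inner product),
    # project the forgetting gradient onto the plane orthogonal to the
    # remembering gradient before combining them.
    dot = torch.dot(g_remember, g_forget)
    if dot < 0:
        g_forget = g_forget - (dot / g_remember.norm() ** 2) * g_remember
    return g_remember + g_forget
```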
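For the SiMT entry, a "temporal ensemble of the meta-learner" is commonly
realized as an exponential moving average (EMA) of its parameters; the decay
value below is an assumption:

```python
import torch

@torch.no_grad()
def update_momentum_target(meta_model, target_model, decay=0.995):
    # EMA update: the target model slowly tracks the meta-learner,
    # forming a temporal ensemble of its past parameters.
    for p_t, p_m in zip(target_model.parameters(), meta_model.parameters()):
        p_t.mul_(decay).add_(p_m, alpha=1.0 - decay)

# Initialize once with target_model = copy.deepcopy(meta_model),
# then call update_momentum_target(...) after every meta-update.
```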
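For the robustness-regularized MAML entry, "fast adversarial attack
generation" usually means a single-step method such as FGSM. A standard FGSM
sketch; the epsilon value is an assumption and the paper's regularizer is not
reproduced here:

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=8 / 255):
    # Single-step fast gradient sign method: perturb the input in the
    # direction that maximally increases the loss, then clip to [0, 1].
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```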
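Finally, for the BAR entry, zeroth-order optimization estimates gradients from
input-output queries alone, which is what makes black-box reprogramming
possible. A one-sided random-direction estimator; the query count and
smoothing radius are assumptions:

```python
import torch

def zo_gradient(f, theta, num_queries=10, mu=0.01):
    # Estimate grad f(theta) from function evaluations only, by averaging
    # directional finite differences along random unit vectors.
    grad = torch.zeros_like(theta)
    f0 = f(theta)
    for _ in range(num_queries):
        u = torch.randn_like(theta)
        u = u / u.norm()
        grad += ((f(theta + mu * u) - f0) / mu) * u
    # Scale by the dimension, as in standard ZO gradient estimators.
    return grad * theta.numel() / num_queries
```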