DAC-MR: Data Augmentation Consistency Based Meta-Regularization for
Meta-Learning
- URL: http://arxiv.org/abs/2305.07892v1
- Date: Sat, 13 May 2023 11:01:47 GMT
- Title: DAC-MR: Data Augmentation Consistency Based Meta-Regularization for
Meta-Learning
- Authors: Jun Shu, Xiang Yuan, Deyu Meng, Zongben Xu
- Abstract summary: We propose a meta-knowledge informed meta-learning (MKIML) framework to improve meta-learning.
We first integrate meta-knowledge into the meta-objective via an appropriate meta-regularization (MR) objective.
The proposed DAC-MR is expected to learn well-performing meta-models from training tasks with noisy, sparse or unavailable meta-data.
- Score: 55.733193075728096
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meta-learning has recently been heavily researched and has helped
advance contemporary machine learning. However, achieving a well-performing
meta-learning model requires a large number of training tasks with
high-quality meta-data representing the underlying task generalization goal,
which can be difficult and expensive to obtain in real applications. Current
meta-data-driven meta-learning approaches, moreover, struggle to train
satisfactory meta-models from imperfect training tasks. To address this
issue, we propose a meta-knowledge informed meta-learning (MKIML) framework
that improves meta-learning by additionally integrating compensatory
meta-knowledge into the meta-learning process. As a first step, we integrate
meta-knowledge into the meta-objective via an appropriate meta-regularization
(MR) objective that regularizes the capacity of the meta-model function
class, facilitating better generalization on unseen tasks. As a practical
implementation, we introduce data augmentation consistency to encode
invariance as meta-knowledge for instantiating the MR objective, denoted
DAC-MR. The proposed DAC-MR is expected to learn well-performing meta-models
from training tasks with noisy, sparse or unavailable meta-data. We
theoretically demonstrate that DAC-MR can be treated as a proxy
meta-objective for evaluating meta-models without high-quality meta-data.
Moreover, a meta-data-driven meta-loss objective combined with DAC-MR
achieves better meta-level generalization. Experiments on 10 meta-learning
tasks with different network architectures and benchmarks substantiate the
capability of DAC-MR to aid meta-model learning. DAC-MR performs well across
all settings, in line with our theoretical insights. This implies that
DAC-MR is problem-agnostic and can readily be applied to a wide range of
meta-learning problems and tasks.
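For intuition, here is a minimal sketch (in PyTorch, not the authors' released code) of how an augmentation-consistency penalty of this kind can be added to an ordinary training objective; the model, the augmentation, and the weight `lam` are hypothetical placeholders standing in for the paper's meta-level setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dac_mr_penalty(model, x, augment):
    """Augmentation-consistency penalty: symmetric KL divergence between
    the model's predictions on two augmented views of x, encoding
    augmentation invariance as a (meta-)regularizer."""
    log_p = F.log_softmax(model(augment(x)), dim=-1)
    log_q = F.log_softmax(model(augment(x)), dim=-1)
    kl_pq = F.kl_div(log_q, log_p.exp(), reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q.exp(), reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)

# Hypothetical usage: Gaussian noise as the augmentation, with the
# penalty added to a supervised loss standing in for the meta-loss.
model = nn.Linear(16, 5)
x, y = torch.randn(8, 16), torch.randint(0, 5, (8,))
augment = lambda t: t + 0.1 * torch.randn_like(t)
lam = 0.5
loss = F.cross_entropy(model(x), y) + lam * dac_mr_penalty(model, x, augment)
loss.backward()
```

In the paper's setting the penalty is applied at the meta-level, across tasks; the sketch only illustrates the consistency term itself.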
Related papers
- Meta-RTL: Reinforcement-Based Meta-Transfer Learning for Low-Resource Commonsense Reasoning [61.8360232713375]
We propose a reinforcement-based multi-source meta-transfer learning framework (Meta-RTL) for low-resource commonsense reasoning.
We present a reinforcement-based approach that dynamically estimates source task weights, which measure the contribution of each source task to the target task during meta-transfer learning.
Experimental results demonstrate that Meta-RTL substantially outperforms strong baselines and previous task selection strategies.
arXiv Detail & Related papers (2024-09-27T18:22:22Z)
- Meta-Learning with Self-Improving Momentum Target [72.98879709228981]
We propose Self-improving Momentum Target (SiMT) to improve the performance of a meta-learner.
SiMT generates the target model by adapting from the temporal ensemble of the meta-learner (see the momentum-target sketch after this list).
We show that SiMT brings a significant performance gain when combined with a wide range of meta-learning methods.
arXiv Detail & Related papers (2022-10-11T06:45:15Z)
- On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning [100.14809391594109]
Model-agnostic meta-learning (MAML) has emerged as one of the most successful meta-learning techniques in few-shot learning.
Despite the generalization power of the meta-model, it remains unclear how adversarial robustness can be maintained by MAML in few-shot learning.
We propose a general but easily-optimized robustness-regularized meta-learning framework, which allows the use of unlabeled data augmentation, fast adversarial attack generation, and computationally-light fine-tuning.
arXiv Detail & Related papers (2021-02-20T22:03:04Z)
- Improving Generalization in Meta-learning via Task Augmentation [69.83677015207527]
We propose two task augmentation methods, MetaMix and Channel Shuffle.
Both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets.
arXiv Detail & Related papers (2020-07-26T01:50:42Z)
- Structured Prediction for Conditional Meta-Learning [44.30857707980074]
We propose a new perspective on conditional meta-learning via structured prediction.
We derive task-adaptive structured meta-learning (TASML), a principled framework that yields task-specific objective functions.
Empirically, we show that TASML improves the performance of existing meta-learning models, and outperforms the state-of-the-art on benchmark datasets.
arXiv Detail & Related papers (2020-02-20T15:24:15Z)
- Curriculum in Gradient-Based Meta-Reinforcement Learning [10.447238563837173]
We show that gradient-based meta-learners are sensitive to task distributions.
With the wrong curriculum, agents suffer the effects of meta-overfitting, shallow adaptation, and adaptation instability.
arXiv Detail & Related papers (2020-02-19T01:40:45Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
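To make the "temporal ensemble" in the SiMT entry above concrete, here is a generic sketch of a momentum (EMA) target; this is a common pattern rather than the paper's exact procedure, and all names are hypothetical.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def update_momentum_target(target: nn.Module, online: nn.Module, tau: float = 0.995):
    """Exponential moving average over training time (a temporal ensemble):
    target <- tau * target + (1 - tau) * online."""
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(tau).add_(p_o, alpha=1.0 - tau)

# Hypothetical usage inside a meta-training loop.
online = nn.Linear(16, 5)
target = copy.deepcopy(online)  # target starts as a copy of the online model
for step in range(3):
    # ... one meta-update of `online` would happen here ...
    update_momentum_target(target, online)
```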