Improving the Generalization of Meta-learning on Unseen Domains via
Adversarial Shift
- URL: http://arxiv.org/abs/2107.11056v1
- Date: Fri, 23 Jul 2021 07:29:30 GMT
- Title: Improving the Generalization of Meta-learning on Unseen Domains via
Adversarial Shift
- Authors: Pinzhuo Tian, Yao Gao
- Abstract summary: We propose a model-agnostic shift layer that learns to simulate domain shift and generate pseudo tasks.
Based on the pseudo tasks, the meta-learning model can learn cross-domain meta-knowledge, which generalizes well to unseen domains.
- Score: 3.1219977244201056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning provides a promising way to learn how to learn
efficiently and has achieved great success in many applications. However, most
of the meta-learning literature focuses on tasks from a single domain, making
the learned models brittle when generalizing to tasks from other, unseen
domains. In this work, we address this problem by simulating tasks from unseen
domains to improve the generalization and robustness of meta-learning methods.
Specifically, we propose a model-agnostic shift layer that learns to simulate
domain shift and generate pseudo tasks, and we develop a new adversarial
learning-to-learn mechanism to train it. Based on the pseudo tasks, the
meta-learning model can learn cross-domain meta-knowledge that generalizes
well to unseen domains. We conduct extensive experiments under the domain
generalization setting. The results demonstrate that the proposed shift layer
is applicable to various meta-learning frameworks. Moreover, our method
achieves state-of-the-art performance on several cross-domain few-shot
classification benchmarks and produces good results on cross-domain few-shot
regression.
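
The abstract describes the mechanism only at a high level, so the following is
a minimal sketch of the adversarial idea, not the authors' implementation: a
learnable shift layer is updated to maximize the task loss (simulating domain
shift and producing pseudo tasks), while the model is updated to minimize its
loss on those pseudo tasks. The per-channel affine form of the shift, the toy
encoder, and the alternating single-step updates are all illustrative
assumptions.

    import torch
    import torch.nn as nn

    class ShiftLayer(nn.Module):
        # Hypothetical model-agnostic shift layer: a learnable per-channel
        # affine transform that simulates domain shift on features.
        def __init__(self, dim):
            super().__init__()
            self.scale = nn.Parameter(torch.ones(dim))
            self.bias = nn.Parameter(torch.zeros(dim))

        def forward(self, feats):
            return feats * self.scale + self.bias

    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 5))
    shift = ShiftLayer(32)  # applied to raw inputs here; could wrap any feature map
    opt_model = torch.optim.Adam(model.parameters(), lr=1e-3)
    opt_shift = torch.optim.Adam(shift.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        x = torch.randn(16, 32)               # stand-in for a sampled task
        y = torch.randint(0, 5, (16,))

        # Adversarial step: the shift layer maximizes the task loss, pushing
        # the pseudo task away from the source distribution.
        loss_shift = -loss_fn(model(shift(x)), y)
        opt_shift.zero_grad()
        loss_shift.backward()
        opt_shift.step()

        # Learning step: the model minimizes its loss on the generated pseudo
        # task (shift detached), acquiring shift-robust meta-knowledge.
        loss_model = loss_fn(model(shift(x).detach()), y)
        opt_model.zero_grad()
        loss_model.backward()
        opt_model.step()
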
Related papers
- Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification [71.08024880298613]
We study multi-source domain generalization for text classification.
We propose a framework that uses multiple seen domains to train a model that achieves high accuracy on an unseen domain.
arXiv Detail & Related papers (2024-09-20T07:46:21Z)
- Exploiting Style Transfer-based Task Augmentation for Cross-Domain Few-Shot Learning [4.678020383205135]
In cross-domain few-shot learning, the model trained on source domains struggles to generalize to the target domain.
We propose Task Augmented Meta-Learning (TAML) to conduct style transfer-based task augmentation.
The proposed TAML increases the style diversity of training tasks and helps train a model with better domain generalization ability.
arXiv Detail & Related papers (2023-01-19T07:32:23Z)
- Learn what matters: cross-domain imitation learning with task-relevant embeddings [77.34726150561087]
We study how an autonomous agent learns to perform a task from demonstrations in a different domain, such as a different environment or different agent.
We propose a scalable framework that enables cross-domain imitation learning without access to additional demonstrations or further domain knowledge.
arXiv Detail & Related papers (2022-09-24T21:56:58Z)
- Set-based Meta-Interpolation for Few-Task Meta-Learning [79.4236527774689]
We propose a novel domain-agnostic task augmentation method, Meta-Interpolation, to densify the meta-training task distribution.
We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning various domains.
arXiv Detail & Related papers (2022-05-20T06:53:03Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
Our approach, meta-learning with task interpolation (MLTI), effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels (a toy sketch of this mixing follows this entry).
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
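
The mixing operation in the MLTI summary above is concrete enough to sketch.
The toy code below shows mixup-style interpolation of raw features and one-hot
labels between two sampled tasks; MLTI's full procedure is more involved, so
the raw-feature mixing, matching task shapes, and Beta-sampled coefficient are
assumptions for illustration only.

    import torch
    import torch.nn.functional as F

    def interpolate_tasks(x1, y1, x2, y2, num_classes, alpha=0.5):
        # Create a pseudo task as a convex combination of two sampled tasks.
        lam = torch.distributions.Beta(alpha, alpha).sample()
        x_mix = lam * x1 + (1 - lam) * x2
        y_mix = (lam * F.one_hot(y1, num_classes).float()
                 + (1 - lam) * F.one_hot(y2, num_classes).float())
        return x_mix, y_mix

    # Two stand-in tasks with matching shapes: 5-way, 16 examples, 32-dim.
    x1, y1 = torch.randn(16, 32), torch.randint(0, 5, (16,))
    x2, y2 = torch.randn(16, 32), torch.randint(0, 5, (16,))
    x_mix, y_mix = interpolate_tasks(x1, y1, x2, y2, num_classes=5)
    print(x_mix.shape, y_mix.shape)  # torch.Size([16, 32]) torch.Size([16, 5])
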
- Cross-Domain Few-Shot Classification via Adversarial Task Augmentation [16.112554109446204]
Few-shot classification aims to recognize unseen classes with few labeled samples from each class.
Many meta-learning models for few-shot classification elaborately design various task-shared inductive biases (meta-knowledge) to solve such tasks.
In this work, we aim to improve the robustness of the inductive bias through task augmentation.
arXiv Detail & Related papers (2021-04-29T14:51:53Z)
- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [59.326456778057384]
We propose the Memory-based Multi-Source Meta-Learning framework to train a generalizable model for unseen domains.
We also present a meta batch normalization layer (MetaBN) to diversify meta-test features (a rough sketch follows this entry).
Experiments demonstrate that our M^3L framework can effectively enhance the generalization ability of the model on unseen domains.
arXiv Detail & Related papers (2020-12-01T11:38:16Z)
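
MetaBN is described in only one line above, so the following is a rough,
assumption-laden sketch of one way to diversify meta-test features: normalize
them with a random convex mixture of the current batch statistics and
statistics from a sampled source domain. Where the mixing happens and how
domain statistics are maintained are guesses, not the paper's specification.

    import torch

    def metabn_forward(feats, domain_mean, domain_var, eps=1e-5):
        # Mix current batch statistics with stored source-domain statistics,
        # then normalize, yielding a diversified feature distribution.
        lam = torch.rand(1)                       # random mixing coefficient
        batch_mean = feats.mean(dim=0)
        batch_var = feats.var(dim=0, unbiased=False)
        mean = lam * batch_mean + (1 - lam) * domain_mean
        var = lam * batch_var + (1 - lam) * domain_var
        return (feats - mean) / torch.sqrt(var + eps)

    feats = torch.randn(16, 64)                   # meta-test batch features
    # Stats from one sampled meta-train domain (stand-in values).
    mixed = metabn_forward(feats, torch.zeros(64), torch.ones(64))
    print(mixed.shape)  # torch.Size([16, 64])
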
- MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures [61.73533544385352]
We propose a transferable perturbation, MetaPerturb, which is meta-learned to improve generalization performance on unseen data.
As MetaPerturb is a set-function trained over diverse distributions across layers and tasks, it can generalize to heterogeneous tasks and architectures.
arXiv Detail & Related papers (2020-06-13T02:54:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.