Invariant Meta Learning for Out-of-Distribution Generalization
- URL: http://arxiv.org/abs/2301.11779v1
- Date: Thu, 26 Jan 2023 12:53:21 GMT
- Title: Invariant Meta Learning for Out-of-Distribution Generalization
- Authors: Penghao Jiang, Ke Xin, Zifeng Wang, Chunxi Li
- Abstract summary: In this paper, we propose invariant meta learning for out-of-distribution tasks.
Specifically, invariant meta learning finds an invariant optimal meta-initialization and fast adapts to out-of-distribution tasks with a regularization penalty.
- Score: 1.1718589131017048
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern deep learning techniques have demonstrated excellent capabilities
in many areas, but they rely on large amounts of training data. Optimization-based
meta-learning trains a model on a variety of tasks so that it can solve new
learning tasks using only a small number of training samples. However, these
methods assume that training and test data are independently and identically
distributed. To overcome this limitation, in this paper, we propose invariant
meta learning for out-of-distribution tasks. Specifically, invariant meta
learning finds an invariant optimal meta-initialization and fast adapts to
out-of-distribution tasks with a regularization penalty. Extensive experiments
demonstrate the effectiveness of the proposed invariant meta learning on
out-of-distribution few-shot tasks.
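The recipe in the abstract is optimization-based meta-learning (a shared meta-initialization that is fast-adapted to each task) plus a penalty that encourages the learned initialization to behave consistently across tasks. The sketch below is a minimal PyTorch illustration of that recipe; the small functional MLP, the variance-of-task-losses penalty, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch: MAML-style meta-initialization with a cross-task invariance penalty.
# The model, the penalty form, and the hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def forward(params, x):
    """Two-layer MLP applied with explicit parameters, so adapted weights can be plugged in."""
    w1, b1, w2, b2 = params
    return F.linear(F.relu(F.linear(x, w1, b1)), w2, b2)

def inner_adapt(params, support, inner_lr=0.01):
    """Fast adaptation: one gradient step away from the shared meta-initialization."""
    x, y = support
    loss = F.cross_entropy(forward(params, x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - inner_lr * g for p, g in zip(params, grads)]

def meta_step(params, meta_opt, task_batch, penalty_weight=1.0):
    """Outer update: mean query loss plus a penalty on how much risk varies across tasks.

    task_batch is a list of ((x_support, y_support), (x_query, y_query)) pairs.
    """
    query_losses = []
    for support, query in task_batch:
        adapted = inner_adapt(params, support)
        xq, yq = query
        query_losses.append(F.cross_entropy(forward(adapted, xq), yq))
    query_losses = torch.stack(query_losses)
    # Penalizing the variance of task risks pushes the meta-initialization toward
    # features that stay predictive when the task distribution shifts.
    meta_loss = query_losses.mean() + penalty_weight * query_losses.var()
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    return meta_loss.item()

# Example meta-initialization (to be learned) and meta-optimizer:
params = [torch.nn.Parameter(0.1 * torch.randn(64, 32)), torch.nn.Parameter(torch.zeros(64)),
          torch.nn.Parameter(0.1 * torch.randn(5, 64)), torch.nn.Parameter(torch.zeros(5))]
meta_opt = torch.optim.Adam(params, lr=1e-3)
```

The inner loop performs the fast adaptation, while the outer loop minimizes the average query loss and penalizes disagreement in risk across tasks, which is the role the abstract assigns to the regularization penalty.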
Related papers
- Towards Task Sampler Learning for Meta-Learning [37.02030832662183]
Meta-learning aims to learn general knowledge from diverse training tasks constructed from limited data, and then transfer it to new tasks.
It is commonly believed that increasing task diversity will enhance the generalization ability of meta-learning models.
This paper challenges this view through empirical and theoretical analysis.
arXiv Detail & Related papers (2023-07-18T01:53:18Z) - MetaModulation: Learning Variational Feature Hierarchies for Few-Shot
Learning with Fewer Tasks [63.016244188951696]
We propose a method for few-shot learning with fewer tasks, based on meta modulation.
We modify parameters at various batch levels to increase the number of meta-training tasks.
We also introduce variational feature hierarchies by incorporating variational modulation.
arXiv Detail & Related papers (2023-05-17T15:47:47Z) - Sufficient Invariant Learning for Distribution Shift [20.88069274935592]
We introduce a novel learning principle called the Sufficient Invariant Learning (SIL) framework.
SIL focuses on learning a sufficient subset of invariant features rather than relying on a single feature.
We propose a new algorithm, Adaptive Sharpness-aware Group Distributionally Robust Optimization (ASGDRO), to learn diverse invariant features by seeking common flat minima (an illustrative sketch of this combination appears after the related papers list).
arXiv Detail & Related papers (2022-10-24T18:34:24Z) - Uncertainty-Aware Meta-Learning for Multimodal Task Distributions [3.7470451129384825]
We present UnLiMiTD (uncertainty-aware meta-learning for multimodal task distributions).
We take a probabilistic perspective and train a parametric, tuneable distribution over tasks on the meta-dataset.
We demonstrate that UnLiMiTD's predictions compare favorably to, and outperform in most cases, the standard baselines.
arXiv Detail & Related papers (2022-10-04T20:02:25Z) - Effective Adaptation in Multi-Task Co-Training for Unified Autonomous
Driving [103.745551954983]
In this paper, we investigate the transfer performance of various types of self-supervised methods, including MoCo and SimCLR, on three downstream tasks.
We find that their performances are sub-optimal or even lag far behind the single-task baseline.
We propose a simple yet effective pretrain-adapt-finetune paradigm for general multi-task training.
arXiv Detail & Related papers (2022-09-19T12:15:31Z) - The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z) - A Representation Learning Perspective on the Importance of
Train-Validation Splitting in Meta-Learning [14.720411598827365]
A common practice in meta-learning is splitting the data from each task into train and validation sets during meta-training.
We argue that the train-validation split encourages the learned representation to be low-rank without compromising on expressivity.
Since sample efficiency benefits from low-rankness, the splitting strategy will require very few samples to solve unseen test tasks.
arXiv Detail & Related papers (2021-06-29T17:59:33Z) - Exploring Complementary Strengths of Invariant and Equivariant
Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z) - Meta-learning One-class Classifiers with Eigenvalue Solvers for
Supervised Anomaly Detection [55.888835686183995]
We propose a neural network-based meta-learning method for supervised anomaly detection.
We experimentally demonstrate that the proposed method achieves better performance than existing anomaly detection and few-shot learning methods.
arXiv Detail & Related papers (2021-03-01T01:43:04Z) - Dynamic Scale Training for Object Detection [111.33112051962514]
We propose a Dynamic Scale Training paradigm (abbreviated as DST) to mitigate the challenge of scale variation in object detection.
Experimental results demonstrate the efficacy of our proposed DST towards scale variation handling.
It does not introduce inference overhead and could serve as a free lunch for general detection configurations.
arXiv Detail & Related papers (2020-04-26T16:48:17Z)
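The ASGDRO entry above names a combination of sharpness-aware optimization and group distributionally robust optimization to reach common flat minima. The sketch below is a generic, illustrative combination of those two ingredients (a standard SAM-style two-pass step on a group-DRO-weighted loss) in PyTorch; it is not the authors' ASGDRO implementation, and the function names, update rules, and hyperparameters are assumptions for the example.

```python
# Illustrative sketch only: a sharpness-aware (SAM-style) update on a group-DRO-weighted loss.
# `model`, `loss_fn`, `group_batches` (one (x, y) batch per group/environment), and all
# hyperparameters are assumptions for the example, not ASGDRO's released code.
import torch

def sam_group_dro_step(model, loss_fn, group_batches, opt, q, rho=0.05, eta=0.01):
    """One update: up-weight worse groups (group DRO), then take a sharpness-aware step."""
    def weighted_loss(weights):
        losses = torch.stack([loss_fn(model(x), y) for x, y in group_batches])
        return losses, (weights.to(losses.device) * losses).sum()

    # Group DRO: exponentiated-gradient update of the group weights q.
    losses, loss = weighted_loss(q)
    q = q * torch.exp(eta * losses.detach().cpu())
    q = q / q.sum()

    # Ascent pass: perturb parameters toward higher loss within an L2 ball of radius rho.
    opt.zero_grad()
    loss.backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                perturbations.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append(e)

    # Descent pass: gradients taken at the perturbed point, applied to the restored weights.
    opt.zero_grad()
    _, perturbed_loss = weighted_loss(q)
    perturbed_loss.backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), perturbations):
            if e is not None:
                p.sub_(e)
    opt.step()
    return q  # caller keeps q across steps; initialize as torch.ones(n_groups) / n_groups
```

The exponentiated-gradient step up-weights the currently worst groups, and the two-pass update applies gradients computed at a perturbed point to the restored weights, which biases training toward flatter regions of the worst-group loss.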