Meta Learning on a Sequence of Imbalanced Domains with Difficulty Awareness
- URL: http://arxiv.org/abs/2109.14120v1
- Date: Wed, 29 Sep 2021 00:53:09 GMT
- Title: Meta Learning on a Sequence of Imbalanced Domains with Difficulty Awareness
- Authors: Zhenyi Wang, Tiehang Duan, Le Fang, Qiuling Suo and Mingchen Gao
- Abstract summary: A typical setting across current meta learning algorithms assumes a stationary task distribution during meta training.
We consider realistic scenarios where the task distribution is highly imbalanced and domain labels are unavailable.
We propose a kernel-based method for domain change detection and a difficulty-aware memory management mechanism.
- Score: 6.648670454325191
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recognizing new objects by learning from a few labeled examples in an
evolving environment is crucial for real-world machine learning systems to
generalize well. A typical setting across current meta learning algorithms
assumes a stationary task distribution during meta training. In this paper, we
explore a more practical and challenging setting where the task distribution
changes over time with domain shift. In particular, we consider realistic
scenarios where the task distribution is highly imbalanced and domain labels
are unavailable in practice. We propose a kernel-based method for domain change
detection and a difficulty-aware memory management mechanism that jointly
considers the imbalanced domain sizes and domain importance to learn across
domains continuously. Furthermore, we introduce an efficient adaptive task
sampling method for meta training, which significantly reduces task gradient
variance with theoretical guarantees. Finally, we propose a challenging
benchmark with imbalanced domain sequences and varied domain difficulty, and
our extensive evaluations on this benchmark demonstrate the effectiveness of
our method. We have made our code publicly available.
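The abstract does not detail the detector; a common kernel-based choice for distribution change detection is a two-sample statistic such as maximum mean discrepancy (MMD) computed between a reference window and a recent window of task embeddings. The sketch below is a minimal illustration of that generic idea, not the paper's exact formulation; the function names, the Gaussian kernel, and the fixed threshold are all assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # RBF kernel matrix between the rows of x (n, d) and y (m, d).
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # Biased estimate of the squared maximum mean discrepancy.
    k_xx = gaussian_kernel(x, x, bandwidth).mean()
    k_yy = gaussian_kernel(y, y, bandwidth).mean()
    k_xy = gaussian_kernel(x, y, bandwidth).mean()
    return k_xx + k_yy - 2.0 * k_xy

def domain_change_detected(reference, recent, threshold=0.05, bandwidth=1.0):
    # Flag a domain change when the two-sample statistic between a
    # reference window and the most recent window of task embeddings
    # exceeds a fixed threshold (illustrative; the paper's criterion
    # and its calibration may differ).
    return mmd2(np.asarray(reference), np.asarray(recent), bandwidth) > threshold
```

In practice the windows could hold, say, penultimate-layer embeddings of recent tasks' samples, and the threshold would be calibrated rather than fixed, e.g., via a permutation test.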
Related papers
- Meta-TTT: A Meta-learning Minimax Framework For Test-Time Training [5.9631503543049895]
Test-time domain adaptation is a challenging task that aims to adapt a pre-trained model to limited, unlabeled target data during inference.
This paper introduces a meta-learning minimax framework for test-time training on batch normalization layers.
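The summary does not reproduce the minimax objective; the sketch below only illustrates the generic ingredient the paper builds on, namely updating batch normalization parameters on an unlabeled test batch, using entropy minimization as a stand-in loss. All names and the choice of objective here are assumptions.

```python
import torch
import torch.nn as nn

def collect_bn_params(model: nn.Module):
    # Gather only the affine parameters of batch normalization layers;
    # assumes the model contains at least one BN layer with affine=True.
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            params += [p for p in (m.weight, m.bias) if p is not None]
    return params

def adapt_bn_at_test_time(model, test_batch, steps=1, lr=1e-3):
    # Generic test-time training on BN layers: minimize prediction
    # entropy on an unlabeled batch, updating BN parameters only.
    # (Stand-in objective; Meta-TTT's actual minimax loss differs.)
    opt = torch.optim.SGD(collect_bn_params(model), lr=lr)
    model.train()  # BN uses statistics of the current test batch
    for _ in range(steps):
        probs = model(test_batch).softmax(dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
        opt.zero_grad()
        entropy.backward()
        opt.step()
    return model
```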
arXiv Detail & Related papers (2024-10-02T16:16:05Z)
- Learning with Style: Continual Semantic Segmentation Across Tasks and Domains [25.137859989323537]
Domain adaptation and class-incremental learning deal with domain and task variability separately, while a unified solution remains an open problem.
We tackle both facets of the problem together, taking into account the semantic shift within both input and label spaces.
We show how the proposed method outperforms existing approaches, which prove to be ill-equipped to deal with continual semantic segmentation under both task and domain shift.
arXiv Detail & Related papers (2022-10-13T13:24:34Z)
- Distributionally Adaptive Meta Reinforcement Learning [85.17284589483536]
We develop a framework for meta-RL algorithms that behave appropriately under test-time distribution shifts.
Our framework centers on an adaptive approach to distributional robustness that trains a population of meta-policies to be robust to varying levels of distribution shift.
We show how our framework yields improved regret under distribution shift and empirically demonstrate its efficacy on simulated robotics problems.
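At the level of detail given here, the recipe is: meta-train one policy per assumed level of distribution shift, then select among them at test time. The sketch below captures only that outer structure; `sample_task`, `meta_train`, and `evaluate` are hypothetical placeholders, not the paper's API.

```python
def train_population(shift_levels, sample_task, meta_train, tasks_per_level=100):
    # Meta-train one policy per assumed level of distribution shift.
    # `sample_task(eps)` and `meta_train(tasks)` are hypothetical
    # placeholders for a task generator and a meta-RL trainer.
    population = {}
    for eps in shift_levels:
        tasks = [sample_task(eps) for _ in range(tasks_per_level)]
        population[eps] = meta_train(tasks)
    return population

def select_policy(population, evaluate, test_tasks):
    # At test time, keep the member with the best empirical return on
    # a few adaptation episodes (`evaluate` is also a placeholder).
    scores = {eps: evaluate(policy, test_tasks) for eps, policy in population.items()}
    best = max(scores, key=scores.get)
    return population[best]
```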
arXiv Detail & Related papers (2022-10-06T17:55:09Z)
- Set-based Meta-Interpolation for Few-Task Meta-Learning [79.4236527774689]
We propose a novel domain-agnostic task augmentation method, Meta-Interpolation, to densify the meta-training task distribution.
We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning various domains.
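As a loose illustration of task augmentation by interpolation, the snippet below blends the embedded support sets of two sampled tasks to synthesize a new meta-training task; the actual method learns a set function for this step rather than using the plain mixup shown here.

```python
import torch

def interpolate_tasks(support_a, support_b, alpha=0.5):
    # Mixup-style stand-in for set-based task interpolation: convexly
    # combine the embedded support sets of two tasks (same shape) to
    # densify the meta-training task distribution.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    mixed_support = lam * support_a + (1.0 - lam) * support_b
    return mixed_support, lam
```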
arXiv Detail & Related papers (2022-05-20T06:53:03Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts of up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
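A standard way to implement such cross-domain feature alignment is an InfoNCE-style contrastive loss over matched patch features, sketched below; how patches are matched across domains is the paper's contribution and is not reproduced, so the row-wise pairing here is an assumption.

```python
import torch
import torch.nn.functional as F

def patch_info_nce(source_feats, target_feats, temperature=0.1):
    # InfoNCE-style loss over patch features of shape (num_patches, dim):
    # row i of each tensor is assumed to be a matched source/target
    # patch pair; matched pairs are pulled together and all other
    # cross-domain pairs are pushed apart.
    s = F.normalize(source_feats, dim=-1)
    t = F.normalize(target_feats, dim=-1)
    logits = s @ t.T / temperature                      # pairwise similarities
    labels = torch.arange(s.size(0), device=s.device)   # positives on diagonal
    return F.cross_entropy(logits, labels)
```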
arXiv Detail & Related papers (2021-04-22T13:39:12Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
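In outline, ARM's meta-training signal is the loss obtained after adapting on a domain's batch, so that the adaptation procedure itself is optimized end to end. The sketch below shows that loop with `adapt` as a placeholder for a differentiable adaptation step (e.g., a context network or batch-norm recomputation); it is an illustration, not the authors' code.

```python
import torch

def arm_meta_training_step(model, adapt, optimizer, domain_batches, loss_fn):
    # One meta-training step in the spirit of ARM: adapt on each training
    # domain's batch, then treat the post-adaptation loss as the training
    # signal so the adaptation procedure itself is learned.
    optimizer.zero_grad()
    total = 0.0
    for x, y in domain_batches:
        adapted_model = adapt(model, x)   # adaptation uses only inputs x
        total = total + loss_fn(adapted_model(x), y)
    total = total / len(domain_batches)
    total.backward()
    optimizer.step()
    return float(total)
```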
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
- Continuous Domain Adaptation with Variational Domain-Agnostic Feature Replay [78.7472257594881]
Learning in non-stationary environments is one of the biggest challenges in machine learning.
Non-stationarity can be caused by either task drift or domain drift.
We propose variational domain-agnostic feature replay, an approach that is composed of three components.
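The three components are not spelled out here; as generic background, feature replay typically maintains a buffer of past features to rehearse alongside new-domain data. The reservoir buffer below is a plain stand-in for that idea, not the paper's variational design.

```python
import random
import torch

class FeatureReplayBuffer:
    # Reservoir-sampled buffer of past features for rehearsal; a generic
    # stand-in for feature replay, not the paper's variational approach.
    def __init__(self, capacity=10000):
        self.capacity, self.buffer, self.seen = capacity, [], 0

    def add(self, feature, label):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((feature.detach().cpu(), label))
        else:
            # Reservoir sampling keeps a uniform sample over the stream.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = (feature.detach().cpu(), label)

    def sample(self, batch_size):
        batch = random.sample(self.buffer, min(batch_size, len(self.buffer)))
        feats, labels = zip(*batch)
        return torch.stack(feats), torch.tensor(labels)
```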
arXiv Detail & Related papers (2020-03-09T19:50:24Z)