Exploiting Style Transfer-based Task Augmentation for Cross-Domain Few-Shot Learning
- URL: http://arxiv.org/abs/2301.07927v2
- Date: Wed, 26 Apr 2023 01:44:28 GMT
- Title: Exploiting Style Transfer-based Task Augmentation for Cross-Domain Few-Shot Learning
- Authors: Shuzhen Rao, Jun Huang, Zengming Tang
- Abstract summary: In cross-domain few-shot learning, the model trained on source domains struggles to generalize to the target domain.
We propose Task Augmented Meta-Learning (TAML) to conduct style transfer-based task augmentation.
The proposed TAML increases the style diversity of training tasks and helps train a model with better domain generalization ability.
- Score: 4.678020383205135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In cross-domain few-shot learning, the core issue is that the model trained
on source domains struggles to generalize to the target domain, especially when
the domain shift is large. Motivated by the observation that the domain shift
between training tasks and target tasks is usually reflected in their style
variation, we propose Task Augmented Meta-Learning (TAML) to conduct style
transfer-based task augmentation to improve the domain generalization ability.
Firstly, Multi-task Interpolation (MTI) is introduced to fuse features from
multiple tasks with different styles, which makes more diverse styles
available. Furthermore, a novel task-augmentation strategy called Multi-Task
Style Transfer (MTST) is proposed to perform style transfer on existing tasks
to learn discriminative style-independent features. We also introduce a Feature
Modulation module (FM) to add random styles and improve the generalization of
the model. The proposed TAML increases the style diversity of training tasks
and helps train a model with better domain generalization ability.
Its effectiveness is demonstrated through theoretical analysis and thorough
experiments on two popular cross-domain few-shot benchmarks.
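Since this summary names the three components (MTI, MTST, FM) without reproducing their formulas, here is a minimal PyTorch sketch of one plausible reading, following the common convention in the style-transfer literature that channel-wise feature statistics act as style. All function names and exact forms below are illustrative assumptions, not the authors' code.

```python
import torch


def feature_stats(x, eps=1e-6):
    # per-channel mean/std of a (B, C, H, W) feature map: the usual style proxy
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mu, sigma


def multi_task_interpolation(feat_a, feat_b, alpha=None):
    # MTI (assumed form): mixup-style fusion of features from two tasks,
    # exposing styles that lie between the tasks' own styles
    if alpha is None:
        alpha = torch.distributions.Beta(0.5, 0.5).sample()
    return alpha * feat_a + (1.0 - alpha) * feat_b


def multi_task_style_transfer(content_feat, style_feat):
    # MTST (assumed form): AdaIN-style swap -- re-normalize one task's
    # features with another task's channel statistics
    mu_c, sig_c = feature_stats(content_feat)
    mu_s, sig_s = feature_stats(style_feat)
    return sig_s * (content_feat - mu_c) / sig_c + mu_s


def feature_modulation(feat, noise_scale=0.1):
    # FM (assumed form): inject a random style by jittering channel statistics
    mu, sig = feature_stats(feat)
    b, c = feat.shape[:2]
    gamma = 1.0 + noise_scale * torch.randn(b, c, 1, 1, device=feat.device)
    beta = noise_scale * torch.randn(b, c, 1, 1, device=feat.device)
    return (sig * gamma) * (feat - mu) / sig + (mu + beta)
```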
Related papers
- HCVP: Leveraging Hierarchical Contrastive Visual Prompt for Domain Generalization [69.33162366130887] (2024-01-18T04:23:21Z)
Domain Generalization (DG) endeavors to create machine learning models that excel in unseen scenarios by learning invariant features.
We introduce a novel method designed to supplement the model with domain-level and task-specific characteristics.
This approach aims to guide the model in more effectively separating invariant features from specific characteristics, thereby boosting the generalization.
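As a rough illustration of how this kind of prompt-based conditioning can look in code, the hypothetical sketch below prepends learnable domain-level and task-specific prompt tokens to a vision transformer's patch tokens; HCVP's actual hierarchical and contrastive machinery is not reproduced.

```python
import torch
import torch.nn as nn


class HierarchicalPrompt(nn.Module):
    # Hypothetical reading: learnable domain-level and task-specific prompt
    # tokens prepended to a ViT's patch tokens before the encoder blocks.
    def __init__(self, embed_dim=768, n_domain_tokens=4, n_task_tokens=4):
        super().__init__()
        self.domain_prompt = nn.Parameter(0.02 * torch.randn(1, n_domain_tokens, embed_dim))
        self.task_prompt = nn.Parameter(0.02 * torch.randn(1, n_task_tokens, embed_dim))

    def forward(self, patch_tokens):  # patch_tokens: (B, N, D)
        b = patch_tokens.size(0)
        prompts = torch.cat([self.domain_prompt, self.task_prompt], dim=1)
        return torch.cat([prompts.expand(b, -1, -1), patch_tokens], dim=1)
```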
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346] (2023-07-25T13:35:45Z)
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
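A loose sketch of the main-path/auxiliary-path idea with the test-time ensemble might look as follows; the real NormAUG builds its auxiliary path from normalization over different domain groupings, which is simplified away here.

```python
import torch.nn as nn


class NormAugWrapper(nn.Module):
    # Loose sketch: a main path plus an auxiliary path that sees differently
    # normalized features; at test time the two predictions are averaged.
    def __init__(self, main_path, aux_path, classifier):
        super().__init__()
        self.main_path, self.aux_path, self.classifier = main_path, aux_path, classifier

    def forward(self, x):
        main_logits = self.classifier(self.main_path(x))
        aux_logits = self.classifier(self.aux_path(x))
        if self.training:
            return main_logits, aux_logits  # both paths supervised during training
        return 0.5 * (main_logits + aux_logits)  # test-time ensemble
```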
- Multi-Domain Learning with Modulation Adapters [33.54630534228469] (2023-07-17T14:40:16Z)
Multi-domain learning aims to handle related tasks, such as image classification across multiple domains, simultaneously.
Modulation Adapters update the convolutional weights of the model in a multiplicative manner for each task.
Our approach yields excellent results, with accuracies that are comparable to or better than those of existing state-of-the-art approaches.
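The multiplicative update lends itself to a compact sketch: a shared convolution whose weights are rescaled by a small per-task modulation tensor. The parameterization below is an assumption for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModulatedConv2d(nn.Module):
    # Sketch: shared conv weights rescaled multiplicatively per task.
    def __init__(self, in_ch, out_ch, kernel_size, n_tasks):
        super().__init__()
        self.weight = nn.Parameter(0.02 * torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        # one per-channel multiplicative adapter per task (simplified shape)
        self.modulation = nn.Parameter(torch.ones(n_tasks, out_ch, in_ch, 1, 1))

    def forward(self, x, task_id):
        w = self.weight * self.modulation[task_id]  # task-specific weights from a shared base
        return F.conv2d(x, w, padding=self.weight.size(-1) // 2)
```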
- Learning to Augment via Implicit Differentiation for Domain Generalization [107.9666735637355] (2022-10-25T18:51:51Z)
Domain generalization (DG) aims to overcome the domain-shift problem by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks, PACS, Office-Home and Digits-DG.
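AugLearn's implicit differentiation avoids storing an unrolled computation graph; the sketch below instead shows the simpler explicit one-step meta-gradient that such methods refine, with `model`, `aug`, and `loss_fn` as assumed callables.

```python
import torch

# Assumed pieces: model(x, weights) is a functional forward pass over a list
# of weight tensors, aug(x, phi) a differentiable augmentation with
# parameters phi, and loss_fn a criterion such as cross-entropy.


def augmentation_meta_gradient(weights, phi, train_batch, val_batch,
                               model, aug, loss_fn, inner_lr=0.01):
    x_tr, y_tr = train_batch
    x_val, y_val = val_batch
    # inner step: adapt the model on augmented source data
    inner_loss = loss_fn(model(aug(x_tr, phi), weights), y_tr)
    grads = torch.autograd.grad(inner_loss, weights, create_graph=True)
    adapted = [w - inner_lr * g for w, g in zip(weights, grads)]
    # outer step: score the augmentation on held-out data, then differentiate
    # that score back into the augmentation parameters
    outer_loss = loss_fn(model(x_val, adapted), y_val)
    return torch.autograd.grad(outer_loss, phi)
```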
- Multiple Modes for Continual Learning [8.782809316491948] (2022-09-29T17:55:32Z)
Adapting model parameters to incoming streams of data is a crucial factor in deep learning scalability.
We formulate a trade-off between constructing multiple parameter modes and allocating tasks per mode.
We empirically demonstrate improvements over baseline continual learning strategies.
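One hypothetical way to realize the allocation side of this trade-off is a registry that routes each incoming task to the nearest parameter mode; the routing rule below is purely illustrative.

```python
import torch
import torch.nn.functional as F


class ModeRegistry:
    # Illustrative scheme: keep several parameter modes and allocate each
    # incoming task to the mode with the closest task embedding.
    def __init__(self, make_params, n_modes, embed_dim):
        self.modes = [make_params() for _ in range(n_modes)]
        self.centroids = torch.randn(n_modes, embed_dim)

    def allocate(self, task_embedding):  # task_embedding: (embed_dim,)
        sims = F.cosine_similarity(self.centroids, task_embedding.unsqueeze(0), dim=1)
        mode_id = int(sims.argmax())
        return mode_id, self.modes[mode_id]
```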
- Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637] (2022-07-07T07:41:32Z)
We propose a novel style interleaved learning (IL) framework for DG ReID training.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
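The two-forwards/one-backward schedule can be sketched as below; `swap_styles` is an assumed stand-in for the paper's style interleaving, which in the original also controls which statistics each forward pass updates.

```python
def interleaved_iteration(model, swap_styles, batch_a, batch_b, criterion, optimizer):
    # two forward propagations, one backward propagation per iteration
    x_a, y_a = batch_a
    x_b, _ = batch_b
    logits_plain = model(x_a)                    # forward 1: original styles
    logits_mixed = model(swap_styles(x_a, x_b))  # forward 2: interleaved styles
    loss = criterion(logits_plain, y_a) + criterion(logits_mixed, y_a)
    optimizer.zero_grad()
    loss.backward()                              # single backward for both forwards
    optimizer.step()
    return loss.item()
```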
- Set-based Meta-Interpolation for Few-Task Meta-Learning [79.4236527774689] (2022-05-20T06:53:03Z)
We propose a novel domain-agnostic task augmentation method, Meta-Interpolation, to densify the meta-training task distribution.
We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning across various domains.
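A minimal sketch of a set-based interpolator, assuming a small permutation-invariant network that produces the mixing coefficient; Meta-Interpolation's actual bilevel formulation is not shown.

```python
import torch
import torch.nn as nn


class SetInterpolator(nn.Module):
    # Assumed shape of the idea: a permutation-invariant set function yields a
    # mixing weight that blends two tasks' support features into a pseudo-task.
    def __init__(self, dim):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, support_a, support_b):  # each: (n_shot, dim)
        pooled = self.phi(torch.cat([support_a, support_b], dim=0)).mean(dim=0)
        lam = torch.sigmoid(pooled.mean())  # set-conditioned mixing weight
        return lam * support_a + (1.0 - lam) * support_b
```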
- TAL: Two-stream Adaptive Learning for Generalizable Person Re-identification [115.31432027711202] (2021-11-29T01:27:42Z)
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose two-stream adaptive learning (TAL) to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
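A bare-bones reading of the two-stream idea, with a learned gate fusing an invariant branch and a domain-specific branch; TAL's adaptive aggregation modules are considerably richer than this sketch.

```python
import torch.nn as nn


class TwoStreamHead(nn.Module):
    # Rough sketch: one stream for domain-invariant features, one for
    # domain-specific ones, fused by a learned gate.
    def __init__(self, backbone, dim):
        super().__init__()
        self.backbone = backbone
        self.invariant = nn.Linear(dim, dim)  # shared across all domains
        self.specific = nn.Linear(dim, dim)   # carries domain-specific cues
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, x):
        feat = self.backbone(x)
        g = self.gate(feat)
        return g * self.invariant(feat) + (1 - g) * self.specific(feat)
```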
- Improving the Generalization of Meta-learning on Unseen Domains via Adversarial Shift [3.1219977244201056] (2021-07-23T07:29:30Z)
We propose a model-agnostic shift layer to learn how to simulate the domain shift and generate pseudo tasks.
Based on the pseudo tasks, the meta-learning model can learn cross-domain meta-knowledge, which can generalize well on unseen domains.
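The shift layer can be sketched as a learned affine perturbation of intermediate features, trained by gradient ascent on the task loss so that it simulates a harmful domain shift; the parameterization and update rule below are assumptions.

```python
import torch
import torch.nn as nn


class ShiftLayer(nn.Module):
    # Sketch: a learned affine perturbation of features; features passed
    # through it define pseudo tasks with a simulated domain shift.
    def __init__(self, n_channels):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, n_channels, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, n_channels, 1, 1))

    def forward(self, feat):  # feat: (B, C, H, W)
        return self.scale * feat + self.shift


def adversarial_step(task_loss, shift_layer, lr=0.01):
    params = list(shift_layer.parameters())
    grads = torch.autograd.grad(task_loss, params, retain_graph=True)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(lr * g)  # gradient *ascent*: make the simulated shift harder
```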
This list is automatically generated from the titles and abstracts of the papers on this site.