Feature-based Style Randomization for Domain Generalization
- URL: http://arxiv.org/abs/2106.03171v1
- Date: Sun, 6 Jun 2021 16:34:44 GMT
- Title: Feature-based Style Randomization for Domain Generalization
- Authors: Yue Wang, Lei Qi, Yinghuan Shi, Yang Gao
- Abstract summary: Domain generalization (DG) aims to first learn a generic model on multiple source domains and then generalize directly to an arbitrary unseen target domain without any additional adaptation.
This paper develops a simple yet effective feature-based style randomization module to achieve feature-level augmentation.
Compared with existing image-level augmentation, our feature-level augmentation is more goal-oriented and produces more diverse samples.
- Score: 27.15070576861912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a recently noticeable topic, domain generalization (DG) aims to first learn
a generic model on multiple source domains and then generalize directly to an
arbitrary unseen target domain without any additional adaptation. Among previous DG
models, data augmentation based methods, which generate virtual data to supplement
the observed source domains, have shown their effectiveness. To simulate
possible unseen domains, most of them enrich the diversity of the original data via
image-level style transformation. However, we argue that the potential styles
are hard to exhaustively enumerate and fully augment given the limited
reference styles, so diversity cannot always be guaranteed. Unlike
image-level augmentation, in this paper we develop a simple yet effective
feature-based style randomization module to achieve feature-level augmentation,
which can produce random styles by integrating random noise into the original
style. Compared with existing image-level augmentation, our feature-level
augmentation is more goal-oriented and yields more diverse samples. Furthermore,
to sufficiently explore the efficacy of the proposed module, we design a novel
progressive training strategy that enables all parameters of the network to be
fully trained. Extensive experiments on three standard benchmark datasets,
i.e., PACS, VLCS and Office-Home, highlight the superiority of our method
over state-of-the-art methods.
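The core idea above, treating a feature map's channel-wise statistics as its "style" and injecting random noise into them, can be sketched as follows. This is a minimal illustration of feature-level style randomization in general, not the paper's exact formulation: the multiplicative Gaussian perturbation and the `noise_scale` parameter are assumptions for demonstration.

```python
import numpy as np

def feature_style_randomization(feats, noise_scale=0.5, eps=1e-6, rng=None):
    """Randomize the style (channel statistics) of a feature map.

    feats: array of shape (N, C, H, W). The per-sample, per-channel mean
    and standard deviation are treated as the style; random noise is mixed
    into these statistics to synthesize a new style while the normalized
    content is left intact.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu = feats.mean(axis=(2, 3), keepdims=True)          # style mean, (N, C, 1, 1)
    sigma = feats.std(axis=(2, 3), keepdims=True) + eps  # style std,  (N, C, 1, 1)
    content = (feats - mu) / sigma                       # style-normalized content
    # Perturb the statistics with random noise (assumed multiplicative form);
    # exp() keeps the new standard deviation strictly positive.
    mu_new = mu * (1.0 + noise_scale * rng.standard_normal(mu.shape))
    sigma_new = sigma * np.exp(noise_scale * rng.standard_normal(sigma.shape))
    return content * sigma_new + mu_new
```

Because only the statistics are perturbed, the normalized content of each sample is preserved, which is what makes this augmentation goal-oriented: it varies style while leaving the semantic signal untouched.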
Related papers
- Causality-inspired Latent Feature Augmentation for Single Domain Generalization [13.735443005394773]
Single domain generalization (Single-DG) aims to develop a generalizable model from only a single training domain that performs well on other unknown target domains.
Under the domain-hungry configuration, how to expand the coverage of source domain and find intrinsic causal features across different distributions is the key to enhancing the models' generalization ability.
We propose a novel causality-inspired latent feature augmentation method for Single-DG by learning the meta-knowledge of feature-level transformation based on causal learning and interventions.
arXiv Detail & Related papers (2024-06-10T02:42:25Z) - Diverse Intra- and Inter-Domain Activity Style Fusion for Cross-Person Generalization in Activity Recognition [8.850516669999292]
Existing domain generalization methods often face challenges in capturing intra- and inter-domain style diversity.
We propose a process conceptualized as domain padding to enrich the domain diversity.
We introduce a style-fused sampling strategy to enhance data generation diversity.
Our approach outperforms state-of-the-art DG methods in all human activity recognition tasks.
arXiv Detail & Related papers (2024-06-07T03:37:30Z) - StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z) - Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - Learning to Augment via Implicit Differentiation for Domain Generalization [107.9666735637355]
Domain generalization (DG) aims to overcome the problem by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks, PACS, Office-Home and Digits-DG.
arXiv Detail & Related papers (2022-10-25T18:51:51Z) - Federated Domain Generalization for Image Recognition via Cross-Client Style Transfer [60.70102634957392]
Domain generalization (DG) has been a hot topic in image recognition, with a goal to train a general model that can perform well on unseen domains.
In this paper, we propose a novel domain generalization method for image recognition through cross-client style transfer (CCST) without exchanging data samples.
Our method outperforms recent SOTA DG methods on two DG benchmarks (PACS, OfficeHome) and a large-scale medical image dataset (Camelyon17) in the FL setting.
arXiv Detail & Related papers (2022-10-03T13:15:55Z) - Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637]
We propose a novel style interleaved learning (IL) framework for DG ReID training.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
arXiv Detail & Related papers (2022-07-07T07:41:32Z) - Improving Diversity with Adversarially Learned Transformations for Domain Generalization [81.26960899663601]
We present a novel framework that uses adversarially learned transformations (ALT) using a neural network to model plausible, yet hard image transformations.
We show that ALT can naturally work with existing diversity modules to produce highly distinct, and large transformations of the source domain leading to state-of-the-art performance.
arXiv Detail & Related papers (2022-06-15T18:05:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.