Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization
- URL: http://arxiv.org/abs/2108.08596v1
- Date: Thu, 19 Aug 2021 10:04:01 GMT
- Authors: Seogkyu Jeon, Kibeom Hong, Pilhyeon Lee, Jewook Lee, and Hyeran Byun
- Abstract summary: Domain generalization aims to enhance model robustness against domain shift without accessing the target domain.
We propose a novel framework where feature statistics are utilized for stylizing original features to ones with novel domain properties.
We achieve the feature consistency with the proposed domain-aware supervised contrastive loss.
- Score: 10.027279853737511
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization aims to enhance the model robustness against domain
shift without accessing the target domain. Since the available source domains
for training are limited, recent approaches focus on generating samples of
novel domains. Nevertheless, they either struggle with the optimization problem
when synthesizing abundant domains or cause the distortion of class semantics.
To these ends, we propose a novel domain generalization framework where feature
statistics are utilized for stylizing original features to ones with novel
domain properties. To preserve class information during stylization, we first
decompose features into high and low frequency components. Afterward, we
stylize the low frequency components with the novel domain styles sampled from
the manipulated statistics, while preserving the shape cues in high frequency
ones. As the final step, we re-merge both components to synthesize novel domain
features. To enhance domain robustness, we utilize the stylized features to
maintain the model consistency in terms of features as well as outputs. We
achieve the feature consistency with the proposed domain-aware supervised
contrastive loss, which ensures domain invariance while increasing class
discriminability. Experimental results demonstrate the effectiveness of the
proposed feature stylization and the domain-aware contrastive loss. Through
quantitative comparisons, we verify the lead of our method upon existing
state-of-the-art methods on two benchmarks, PACS and Office-Home.
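The pipeline described in the abstract (decompose features into low/high frequency, stylize the low-frequency part with manipulated feature statistics, re-merge, and train with a supervised contrastive objective) can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the box-blur low-pass filter, the noise-based statistics manipulation, and all function names are assumptions, and only the plain supervised contrastive core is shown, since the abstract does not specify the domain-aware weighting.

```python
import numpy as np

def stylize_features(feat, rng, noise_scale=0.5, kernel=3):
    """Sketch of the feature stylization step for one (C, H, W) feature map.

    The low-pass filter and the way novel statistics are sampled are
    assumptions; the paper only states that manipulated feature statistics
    provide the novel domain styles.
    """
    # 1) Decompose into low- and high-frequency components.
    #    A simple box blur stands in for the low-pass filter.
    pad = kernel // 2
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    low = np.zeros_like(feat)
    for dy in range(kernel):
        for dx in range(kernel):
            low += padded[:, dy:dy + feat.shape[1], dx:dx + feat.shape[2]]
    low /= kernel * kernel
    high = feat - low  # shape cues are preserved here

    # 2) Sample a novel style by perturbing the per-channel statistics
    #    of the low-frequency component (AdaIN-style re-normalization).
    mu = low.mean(axis=(1, 2), keepdims=True)
    sigma = low.std(axis=(1, 2), keepdims=True) + 1e-6
    novel_mu = mu * (1 + noise_scale * rng.standard_normal(mu.shape))
    novel_sigma = sigma * (1 + noise_scale * rng.standard_normal(sigma.shape))
    stylized_low = (low - mu) / sigma * novel_sigma + novel_mu

    # 3) Re-merge: novel-domain style with the original shape content.
    return stylized_low + high

def supcon_loss(z, labels, temperature=0.1):
    """Plain supervised contrastive loss on embeddings z of shape (N, D);
    same-class samples are positives. The paper's domain-aware variant
    additionally uses domain labels, whose exact form is not given in the
    abstract, so only the SupCon core is shown here.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(z), dtype=bool)
    # Average log-probability over each anchor's positives.
    per_anchor = np.where(pos, log_prob, 0.0).sum(1) / np.maximum(pos.sum(1), 1)
    return -per_anchor[pos.sum(1) > 0].mean()
```

In training, the stylized features would be fed through the same network so that feature- and output-level consistency can be enforced between original and stylized views.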
Related papers
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- TACIT: A Target-Agnostic Feature Disentanglement Framework for Cross-Domain Text Classification [17.19214732926589]
Cross-domain text classification aims to transfer models from label-rich source domains to label-poor target domains.
This paper proposes TACIT, a target domain feature disentanglement framework which adaptively decouples robust and unrobust features.
Our framework achieves comparable results to state-of-the-art baselines while utilizing only source domain data.
arXiv Detail & Related papers (2023-12-25T02:52:36Z)
- Cross Contrasting Feature Perturbation for Domain Generalization [11.863319505696184]
Domain generalization aims to learn a robust model from source domains that generalize well on unseen target domains.
Recent studies focus on generating novel domain samples or features to diversify distributions complementary to source domains.
We propose an online one-stage Cross Contrasting Feature Perturbation framework to simulate domain shift.
arXiv Detail & Related papers (2023-07-24T03:27:41Z)
- Cyclically Disentangled Feature Translation for Face Anti-spoofing [61.70377630461084]
We propose a novel domain adaptation method called the cyclically disentangled feature translation network (CDFTN).
CDFTN generates pseudo-labeled samples that possess: 1) source domain-invariant liveness features and 2) target domain-specific content features, which are disentangled through domain adversarial training.
A robust classifier is trained based on the synthetic pseudo-labeled images under the supervision of source domain labels.
arXiv Detail & Related papers (2022-12-07T14:12:34Z)
- Unsupervised Domain Adaptation via Style-Aware Self-intermediate Domain [52.783709712318405]
Unsupervised domain adaptation (UDA) has attracted considerable attention, which transfers knowledge from a label-rich source domain to a related but unlabeled target domain.
We propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information.
arXiv Detail & Related papers (2022-09-05T10:06:03Z)
- Adapting Segmentation Networks to New Domains by Disentangling Latent Representations [14.050836886292869]
Domain adaptation approaches have come into play to transfer knowledge acquired on a label-abundant source domain to a related label-scarce target domain.
We propose a novel performance metric to capture the relative efficacy of an adaptation strategy compared to supervised training.
arXiv Detail & Related papers (2021-08-06T09:43:07Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain than in the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, which provides different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation facilitates the unlabeled target domain relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space destroy the intrinsic structure of the data.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.