Test-time Fourier Style Calibration for Domain Generalization
- URL: http://arxiv.org/abs/2205.06427v1
- Date: Fri, 13 May 2022 02:43:03 GMT
- Title: Test-time Fourier Style Calibration for Domain Generalization
- Authors: Xingchen Zhao, Chang Liu, Anthony Sicilia, Seong Jae Hwang, Yun Fu
- Abstract summary: We argue that reducing the gap between source and target styles can boost models' generalizability.
To solve the dilemma of having no access to the target domain during training, we introduce Test-time Fourier Style Calibration (TF-Cal).
We present an effective technique to Augment Amplitude Features (AAF) to complement TF-Cal.
- Score: 47.314071215317995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The topic of generalizing machine learning models learned on a collection of
source domains to unknown target domains is challenging. While many domain
generalization (DG) methods have achieved promising results, they primarily
rely on the source domains at train-time without manipulating the target
domains at test-time. Thus, it is still possible that those methods can overfit
to source domains and perform poorly on target domains. Driven by the
observation that domains are strongly related to styles, we argue that reducing
the gap between source and target styles can boost models' generalizability. To
solve the dilemma of having no access to the target domain during training, we
introduce Test-time Fourier Style Calibration (TF-Cal) for calibrating the
target domain style on the fly during testing. To access styles, we utilize
Fourier transformation to decompose features into amplitude (style) features
and phase (semantic) features. Furthermore, we present an effective technique
to Augment Amplitude Features (AAF) to complement TF-Cal. Extensive experiments
on several popular DG benchmarks and a segmentation dataset for medical images
demonstrate that our method outperforms state-of-the-art methods.
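The abstract's core mechanism, decomposing features with a Fourier transform into amplitude (style) and phase (semantics), then adjusting only the amplitude, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`decompose`, `calibrate`, `augment_amplitude`), the interpolation weight `alpha`, the use of a mean source amplitude as the style prototype, and the Gaussian amplitude jitter standing in for AAF are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def decompose(feat):
    """Split a feature map (C, H, W) into amplitude (style) and phase (semantics)."""
    f = np.fft.fft2(feat, axes=(-2, -1))
    return np.abs(f), np.angle(f)

def compose(amp, pha):
    """Recombine amplitude and phase back into a real-valued feature map."""
    return np.real(np.fft.ifft2(amp * np.exp(1j * pha), axes=(-2, -1)))

def calibrate(feat, source_amp, alpha=0.5):
    """TF-Cal-like sketch: pull a test feature's amplitude toward a source-style
    prototype while leaving its phase (semantic content) untouched."""
    amp, pha = decompose(feat)
    amp_cal = (1 - alpha) * amp + alpha * source_amp
    return compose(amp_cal, pha)

def augment_amplitude(feat, std=0.1):
    """AAF-like sketch: perturb only the amplitude with multiplicative noise,
    synthesizing new styles while preserving semantics."""
    amp, pha = decompose(feat)
    return compose(amp * (1.0 + std * rng.standard_normal(amp.shape)), pha)

# Sanity check: with alpha = 0 the calibration is the identity (up to float error).
x = rng.standard_normal((3, 8, 8))
proto = np.abs(np.fft.fft2(rng.standard_normal((3, 8, 8)), axes=(-2, -1)))
assert np.allclose(calibrate(x, proto, alpha=0.0), x)
```

In this reading, style transfer at test time is just amplitude interpolation: phase is never modified, which is why the semantic content of the prediction is preserved.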
Related papers
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains during data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- Phrase Grounding-based Style Transfer for Single-Domain Generalized Object Detection [109.58348694132091]
Single-domain generalized object detection aims to enhance a model's generalizability to multiple unseen target domains.
This is a practical yet challenging task as it requires the model to address domain shift without incorporating target domain data into training.
We propose a novel phrase grounding-based style transfer approach for the task.
arXiv Detail & Related papers (2024-02-02T10:48:43Z)
- TACIT: A Target-Agnostic Feature Disentanglement Framework for Cross-Domain Text Classification [17.19214732926589]
Cross-domain text classification aims to transfer models from label-rich source domains to label-poor target domains.
This paper proposes TACIT, a target domain feature disentanglement framework which adaptively decouples robust and unrobust features.
Our framework achieves comparable results to state-of-the-art baselines while utilizing only source domain data.
arXiv Detail & Related papers (2023-12-25T02:52:36Z)
- Normalization Perturbation: A Simple Domain Generalization Method for Real-World Domain Shifts [133.99270341855728]
Real-world domain styles can vary substantially due to environment changes and sensor noises.
Deep models only know the training domain style.
We propose Normalization Perturbation to overcome this domain style overfitting problem.
arXiv Detail & Related papers (2022-11-08T17:36:49Z)
- Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637]
We propose a novel style interleaved learning (IL) framework for DG ReID training.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
arXiv Detail & Related papers (2022-07-07T07:41:32Z)
- Robust Domain-Free Domain Generalization with Class-aware Alignment [4.442096198968069]
Domain-Free Domain Generalization (DFDG) is a model-agnostic method to achieve better generalization performance on the unseen test domain.
DFDG uses novel strategies to learn domain-invariant class-discriminative features.
It obtains competitive performance on both time series sensor and image classification public datasets.
arXiv Detail & Related papers (2021-02-17T17:46:06Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.