Test-Time Style Shifting: Handling Arbitrary Styles in Domain
Generalization
- URL: http://arxiv.org/abs/2306.04911v2
- Date: Tue, 13 Jun 2023 00:37:33 GMT
- Title: Test-Time Style Shifting: Handling Arbitrary Styles in Domain
Generalization
- Authors: Jungwuk Park, Dong-Jun Han, Soyeong Kim, Jaekyun Moon
- Abstract summary: In domain generalization (DG), the target domain is unknown when the model is being trained.
We propose test-time style shifting, which shifts the style of the test sample to the nearest source domain.
We also propose style balancing, which handles DG-specific imbalance issues and maximizes the benefit of test-time style shifting.
- Score: 22.099003320482392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In domain generalization (DG), the target domain is unknown when the model is
being trained, and the trained model should successfully work on an arbitrary
(and possibly unseen) target domain during inference. This is a difficult
problem, and despite active studies in recent years, it remains a great
challenge. In this paper, we take a simple yet effective approach to tackle
this issue. We propose test-time style shifting, which shifts the style of the
test sample (that has a large style gap with the source domains) to the nearest
source domain that the model is already familiar with, before making the
prediction. This strategy enables the model to handle any target domain with
arbitrary style statistics, without any additional model update at test time.
Additionally, we propose style balancing, which handles DG-specific imbalance
issues and thereby maximizes the benefit of test-time style shifting. The
proposed ideas are easy to implement and
successfully work in conjunction with various other DG schemes. Experimental
results on different datasets show the effectiveness of our methods.
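The abstract leaves the exact formulation to the paper, but the core idea can be illustrated with instance-level feature statistics. The sketch below is a minimal interpretation, assuming a sample's "style" is the channel-wise mean and standard deviation of an intermediate feature map (as in AdaIN) and that one style prototype per source domain has been stored from training; the function names and the L2 style distance are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): treat a feature map's
# channel-wise mean/std as its "style", pick the nearest stored source-domain
# style, and re-normalize the test features to that style before classification.
import torch

def instance_stats(feat, eps=1e-5):
    """Channel-wise mean/std of a feature map of shape (N, C, H, W)."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mu, sigma

def shift_style_to_nearest_source(feat, source_styles):
    """source_styles: list of (mu_d, sigma_d) pairs, each of shape (1, C, 1, 1),
    e.g. per-domain running statistics collected during training.
    Intended for a single test sample (N = 1)."""
    mu, sigma = instance_stats(feat)
    # Distance between the test sample's style and each source-domain style.
    dists = [
        (torch.norm(mu - mu_d) + torch.norm(sigma - sigma_d)).item()
        for mu_d, sigma_d in source_styles
    ]
    nearest = min(range(len(dists)), key=dists.__getitem__)
    mu_d, sigma_d = source_styles[nearest]
    # AdaIN-style re-normalization: strip the sample's own style, apply the
    # nearest source style, and leave the content untouched.
    return (feat - mu) / sigma * sigma_d + mu_d
```

In practice such a shift would be applied at one or more chosen layers of a frozen backbone, so no parameters are updated at test time, which is consistent with the claim above.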
Related papers
- Test-Time Domain Generalization for Face Anti-Spoofing [60.94384914275116]
Face Anti-Spoofing (FAS) is pivotal in safeguarding facial recognition systems against presentation attacks.
We introduce a novel Test-Time Domain Generalization framework for FAS, which leverages the testing data to boost the model's generalizability.
Our method, consisting of Test-Time Style Projection (TTSP) and Diverse Style Shifts Simulation (DSSS), effectively projects the unseen data to the seen domain space.
arXiv Detail & Related papers (2024-03-28T11:50:23Z)
- Mitigating the Bias in the Model for Continual Test-Time Adaptation [32.33057968481597]
Continual Test-Time Adaptation (CTA) is a challenging task that aims to adapt a source pre-trained model to continually changing target domains.
We find that a model shows highly biased predictions as it constantly adapts to the changing distribution of the target data.
This paper mitigates this issue to improve performance in the CTA scenario.
arXiv Detail & Related papers (2024-03-02T23:37:16Z)
- DG-TTA: Out-of-domain medical image segmentation through Domain Generalization and Test-Time Adaptation [43.842694540544194]
We propose to combine domain generalization and test-time adaptation to create a highly effective approach for reusing pre-trained models in unseen target domains.
We demonstrate that our method, combined with pre-trained whole-body CT models, can effectively segment MR images with high accuracy.
arXiv Detail & Related papers (2023-12-11T10:26:21Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study a practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
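The summary names consistency regularization without specifying its form. As a hedged illustration (not necessarily this paper's exact loss), a common instantiation enforces agreement between a model's predictions on two augmented views of the same unlabelled target batch:

```python
# Hedged sketch of a generic consistency-regularization term for unlabelled
# target data; the actual SFDA method in the paper may differ in detail.
import torch
import torch.nn.functional as F

def consistency_loss(model, weak_view, strong_view, temperature=1.0):
    """KL divergence between predictions on two augmented views of the same batch."""
    with torch.no_grad():
        # Treat the weak-view prediction as a (soft) target.
        target = F.softmax(model(weak_view) / temperature, dim=1)
    log_pred = F.log_softmax(model(strong_view) / temperature, dim=1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```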
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- To Adapt or to Annotate: Challenges and Interventions for Domain Adaptation in Open-Domain Question Answering [46.403929561360485]
We study the end-to-end performance of open-domain question answering (ODQA) models.
We find that not only do models fail to generalize, but high retrieval scores often still yield poor answer prediction accuracy.
We propose and evaluate several intervention methods which improve end-to-end answer F1 score by up to 24 points.
arXiv Detail & Related papers (2022-12-20T16:06:09Z)
- Distributional Shift Adaptation using Domain-Specific Features [41.91388601229745]
In open-world scenarios, streaming big data can be Out-Of-Distribution (OOD).
We propose a simple yet effective approach that relies on correlations in general, regardless of whether the features are invariant or not.
Our approach uses the most confidently predicted samples identified by an OOD base model to train a new model that effectively adapts to the target domain.
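The confidence-based selection described above can be sketched roughly as follows; the softmax-confidence threshold and the simple filtering loop are illustrative assumptions rather than details taken from the paper:

```python
# Rough sketch: keep only target samples whose base-model prediction is
# confident, then use them (with their predicted labels) to train a new model.
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_confident(base_model, target_loader, threshold=0.9):
    """Collect confidently predicted target samples and their pseudo-labels."""
    base_model.eval()
    samples, labels = [], []
    for x in target_loader:                      # unlabelled target batches
        probs = F.softmax(base_model(x), dim=1)
        conf, pred = probs.max(dim=1)
        keep = conf >= threshold
        if keep.any():
            samples.append(x[keep])
            labels.append(pred[keep])
    return torch.cat(samples), torch.cat(labels)  # pseudo-labelled training set
```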
arXiv Detail & Related papers (2022-11-09T04:16:21Z)
- Style Interleaved Learning for Generalizable Person Re-identification [69.03539634477637]
We propose a novel style interleaved learning (IL) framework for DG ReID training.
Unlike conventional learning strategies, IL incorporates two forward propagations and one backward propagation for each iteration.
We show that our model consistently outperforms state-of-the-art methods on large-scale benchmarks for DG ReID.
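The two-forward, one-backward structure mentioned above can be sketched as below. The use of a gradient-free second pass on style-perturbed inputs to refresh an auxiliary feature memory is an assumed reading of the interleaved scheme, not the paper's exact algorithm; `style_augment` and the memory update rule are hypothetical.

```python
# Hedged sketch of one interleaved iteration: one forward/backward pass updates
# the weights, and a second, gradient-free forward pass on style-perturbed
# inputs updates auxiliary state (here, a simple per-class feature memory).
import torch

def interleaved_step(model, criterion, optimizer, images, labels,
                     style_augment, feature_memory, momentum=0.9):
    # Forward pass 1 + the single backward pass: ordinary supervised update.
    model.train()
    logits, feats = model(images)        # assumption: model returns (logits, features)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Forward pass 2 (no gradient): style-perturbed views refresh the memory.
    with torch.no_grad():
        _, aug_feats = model(style_augment(images))
        for f, y in zip(aug_feats, labels):
            feature_memory[y.item()] = (
                momentum * feature_memory.get(y.item(), f) + (1 - momentum) * f
            )
    return loss.item()
```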
arXiv Detail & Related papers (2022-07-07T07:41:32Z)
- BMD: A General Class-balanced Multicentric Dynamic Prototype Strategy for Source-free Domain Adaptation [74.93176783541332]
Source-free Domain Adaptation (SFDA) aims to adapt a pre-trained source model to the unlabeled target domain without accessing the well-labeled source data.
To make up for the absence of source data, most existing methods introduced feature prototype based pseudo-labeling strategies.
We propose a general class-Balanced Multicentric Dynamic prototype strategy for the SFDA task.
arXiv Detail & Related papers (2022-04-06T13:23:02Z)