More is Better: A Novel Multi-view Framework for Domain Generalization
- URL: http://arxiv.org/abs/2112.12329v1
- Date: Thu, 23 Dec 2021 02:51:35 GMT
- Title: More is Better: A Novel Multi-view Framework for Domain Generalization
- Authors: Jian Zhang, Lei Qi, Yinghuan Shi, Yang Gao
- Abstract summary: The key issue of domain generalization (DG) is how to prevent overfitting to the observed source domains.
By treating tasks and images as different views, we propose a novel multi-view DG framework.
In the test stage, to alleviate unstable predictions, we utilize multiple augmented images to yield a multi-view prediction.
- Score: 28.12350681444117
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aiming to generalize a model trained on source domains to unseen target domains, domain generalization (DG) has attracted a lot of attention recently. The key issue of DG is how to prevent overfitting to the observed source domains, because the target domain is unavailable during training. We find that overfitting not only causes inferior generalization ability to unseen target domains but also leads to unstable predictions in the test stage. In this paper, we observe that both sampling multiple tasks in the training stage and generating augmented images in the test stage largely benefit generalization performance. Thus, by treating tasks and images as different views, we propose a novel multi-view DG framework. Specifically, in the training stage, to enhance generalization ability, we develop a multi-view regularized meta-learning algorithm that employs multiple tasks to produce a suitable optimization direction while updating the model. In the test stage, to alleviate unstable predictions, we utilize multiple augmented images to yield a multi-view prediction, which significantly improves model reliability by fusing the results of different views of a test image. Extensive experiments on three benchmark datasets validate that our method outperforms several state-of-the-art approaches.
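To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch, not the authors' implementation: the training step uses a simple Reptile-style average over per-task solutions as a rough stand-in for the multi-view regularized meta-learning, and the test-time function fuses predictions over augmented views; helper names such as `make_task_batches` and `augment` are hypothetical.

```python
# Minimal sketch, assuming a standard PyTorch image classifier.
# Not the authors' code: the training step below averages per-task solutions
# (Reptile-style) as a stand-in for the paper's multi-view regularized
# meta-learning; `make_task_batches` and `augment` are hypothetical helpers.
import copy
import torch
import torch.nn.functional as F


def multi_task_meta_update(model, make_task_batches, n_tasks=3, inner_lr=1e-3):
    """Training stage: adapt the model on several sampled tasks and fuse the
    resulting weights, so no single task dictates the update direction."""
    base_state = copy.deepcopy(model.state_dict())
    task_states = []
    for _ in range(n_tasks):
        model.load_state_dict(base_state)              # restart from shared weights
        inner_opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for images, labels in make_task_batches():     # batches of one sampled task
            loss = F.cross_entropy(model(images), labels)
            inner_opt.zero_grad()
            loss.backward()
            inner_opt.step()
        task_states.append(copy.deepcopy(model.state_dict()))
    # Fuse the per-task solutions by averaging every parameter and buffer.
    fused = {k: torch.stack([s[k].float() for s in task_states]).mean(0).to(base_state[k].dtype)
             for k in base_state}
    model.load_state_dict(fused)


@torch.no_grad()
def multi_view_predict(model, image, augment, n_views=8):
    """Test stage: average the softmax outputs of several augmented views of
    one test image to obtain a more stable, fused prediction."""
    model.eval()
    views = torch.stack([augment(image) for _ in range(n_views)])  # (n_views, C, H, W)
    probs = F.softmax(model(views), dim=1)                         # per-view class probabilities
    return probs.mean(dim=0).argmax().item()                       # fused class decision
```

In practice one would plug in the actual task sampler and augmentation pipeline; the point of the sketch is simply that training fuses update directions from several tasks, while inference fuses predictions from several views of the same image.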
Related papers
- LFME: A Simple Framework for Learning from Multiple Experts in Domain Generalization [61.16890890570814]
Domain generalization (DG) methods aim to maintain good performance in an unseen target domain by using training data from multiple source domains.
This work introduces a simple yet effective framework, dubbed learning from multiple experts (LFME), that aims to make the target model an expert in all source domains in order to improve DG.
arXiv Detail & Related papers (2024-10-22T13:44:10Z) - DG-TTA: Out-of-domain medical image segmentation through Domain Generalization and Test-Time Adaptation [43.842694540544194]
We propose to combine domain generalization and test-time adaptation to create a highly effective approach for reusing pre-trained models in unseen target domains.
We demonstrate that our method, combined with pre-trained whole-body CT models, can effectively segment MR images with high accuracy.
arXiv Detail & Related papers (2023-12-11T10:26:21Z) - Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation [72.70876977882882]
Domain shift is a common problem in clinical applications, where the training images (source domain) and the test images (target domain) are under different distributions.
We propose a novel method for Few-Shot Unsupervised Domain Adaptation (FSUDA), where only a limited number of unlabeled target domain samples are available for training.
arXiv Detail & Related papers (2023-09-03T16:02:01Z) - Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z) - Domain Adaptive and Generalizable Network Architectures and Training Strategies for Semantic Image Segmentation [108.33885637197614]
Unsupervised domain adaptation (UDA) and domain generalization (DG) enable machine learning models trained on a source domain to perform well on unlabeled or unseen target domains.
We propose HRDA, a multi-resolution framework for UDA and DG, which combines the strengths of small high-resolution crops to preserve fine segmentation details and large low-resolution crops to capture long-range context dependencies, using a learned scale attention.
arXiv Detail & Related papers (2023-04-26T15:18:45Z) - Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an effective deep learning model requires large amounts of data with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style-generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z) - Generalizable Model-agnostic Semantic Segmentation via Target-specific Normalization [24.14272032117714]
We propose a novel domain generalization framework for the generalizable semantic segmentation task.
We exploit model-agnostic learning to simulate the domain-shift problem.
Considering the data-distribution discrepancy between seen source and unseen target domains, we develop a target-specific normalization scheme.
arXiv Detail & Related papers (2020-03-27T09:25:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.