Domain Generalization Emerges from Dreaming
- URL: http://arxiv.org/abs/2302.00980v1
- Date: Thu, 2 Feb 2023 09:59:55 GMT
- Title: Domain Generalization Emerges from Dreaming
- Authors: Hwan Heo, Youngjin Oh, Jaewon Lee, Hyunwoo J. Kim
- Abstract summary: We propose a new framework to reduce the texture bias of a model by a novel optimization-based data augmentation, dubbed Stylized Dream.
Our framework utilizes adaptive instance normalization (AdaIN) to augment the style of an original image yet preserve the content.
We then adopt a regularization loss to predict consistent outputs between Stylized Dream and original images, which encourages the model to learn shape-based representations.
- Score: 10.066261691282016
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent studies have proven that DNNs, unlike human vision, tend to exploit
texture information rather than shape. Such texture bias is one of the factors
for the poor generalization performance of DNNs. We observe that the texture
bias negatively affects not only in-domain generalization but also
out-of-distribution generalization, i.e., Domain Generalization. Motivated by
the observation, we propose a new framework to reduce the texture bias of a
model by a novel optimization-based data augmentation, dubbed Stylized Dream.
Our framework utilizes adaptive instance normalization (AdaIN) to augment the
style of an original image yet preserve the content. We then adopt a
regularization loss to predict consistent outputs between Stylized Dream and
original images, which encourages the model to learn shape-based
representations. Extensive experiments show that the proposed method achieves
state-of-the-art performance in out-of-distribution settings on public
benchmark datasets: PACS, VLCS, OfficeHome, TerraIncognita, and DomainNet.
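To make the two ingredients named in the abstract concrete, below is a minimal PyTorch sketch of (i) AdaIN, which re-normalizes content features with the per-channel statistics of a style image, and (ii) a consistency term that penalizes disagreement between predictions on the original and stylized inputs. This is an illustrative sketch under stated assumptions, not the authors' Stylized Dream implementation; the function names, the choice of KL divergence, and the omission of the optimization step are assumptions made for illustration.

```python
# Illustrative sketch only: AdaIN-style feature-statistics transfer plus a
# prediction-consistency penalty. The optimization-based "Stylized Dream"
# augmentation itself (dreaming the stylized image from the input) is omitted.
import torch
import torch.nn.functional as F


def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y),
    computed per channel over the spatial dimensions (inputs: N x C x H x W)."""
    c_mu = content.mean(dim=(2, 3), keepdim=True)
    c_sigma = content.std(dim=(2, 3), keepdim=True) + eps
    s_mu = style.mean(dim=(2, 3), keepdim=True)
    s_sigma = style.std(dim=(2, 3), keepdim=True) + eps
    return s_sigma * (content - c_mu) / c_sigma + s_mu


def consistency_loss(logits_orig: torch.Tensor, logits_stylized: torch.Tensor) -> torch.Tensor:
    """Encourage the same class distribution for an image and its stylized
    counterpart (here a KL divergence; the paper's exact loss may differ)."""
    log_p = F.log_softmax(logits_stylized, dim=1)
    q = F.softmax(logits_orig.detach(), dim=1)
    return F.kl_div(log_p, q, reduction="batchmean")
```

In the paper's framework the stylized image is obtained by optimizing the input so that its features match the AdaIN-transferred statistics (hence "dream"); the sketch above only shows the statistic swap and the consistency regularizer that the abstract describes.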
Related papers
- GeoWizard: Unleashing the Diffusion Priors for 3D Geometry Estimation from a Single Image [94.56927147492738]
We introduce GeoWizard, a new generative foundation model designed for estimating geometric attributes from single images.
We show that leveraging diffusion priors can markedly improve generalization, detail preservation, and efficiency in resource usage.
We propose a simple yet effective strategy to segregate the complex data distribution of various scenes into distinct sub-distributions.
arXiv Detail & Related papers (2024-03-18T17:50:41Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- CNN Feature Map Augmentation for Single-Source Domain Generalization [6.053629733936548]
Domain Generalization (DG) has gained significant traction during the past few years.
The goal in DG is to produce models which continue to perform well when presented with data distributions different from the ones available during training.
We propose an alternative regularization technique for convolutional neural network architectures in the single-source DG image classification setting.
arXiv Detail & Related papers (2023-05-26T08:48:17Z)
- Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models (see the projection sketch after this list).
arXiv Detail & Related papers (2023-01-31T20:09:33Z)
- When Neural Networks Fail to Generalize? A Model Sensitivity Perspective [82.36758565781153]
Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions.
This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG).
We empirically ascertain a property of a model that correlates strongly with its generalization, which we coin "model sensitivity".
We propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies.
arXiv Detail & Related papers (2022-12-01T20:15:15Z)
- Normalization Perturbation: A Simple Domain Generalization Method for Real-World Domain Shifts [133.99270341855728]
Real-world domain styles can vary substantially due to environment changes and sensor noises.
Deep models only know the training domain style.
We propose Normalization Perturbation to overcome this domain style overfitting problem.
arXiv Detail & Related papers (2022-11-08T17:36:49Z)
- Combining Discrete Choice Models and Neural Networks through Embeddings: Formulation, Interpretability and Performance [10.57079240576682]
This study proposes a novel approach that combines theory-based and data-driven choice models using Artificial Neural Networks (ANNs).
In particular, we use continuous vector representations, called embeddings, for encoding categorical or discrete explanatory variables.
Our models deliver state-of-the-art predictive performance, outperforming existing ANN-based models while drastically reducing the number of required network parameters.
arXiv Detail & Related papers (2021-09-24T15:55:31Z)
- Frustratingly Simple Domain Generalization via Image Stylization [27.239024949033496]
Convolutional Neural Networks (CNNs) show impressive performance in the standard classification setting.
CNNs do not readily generalize to new domains with different statistics.
We demonstrate an extremely simple yet effective method, namely correcting this bias by augmenting the dataset with stylized images.
arXiv Detail & Related papers (2020-06-19T16:20:40Z)
- Generalizable Model-agnostic Semantic Segmentation via Target-specific Normalization [24.14272032117714]
We propose a novel domain generalization framework for the generalizable semantic segmentation task.
We exploit model-agnostic learning to simulate the domain shift problem.
Considering the data-distribution discrepancy between seen source and unseen target domains, we develop the target-specific normalization scheme.
arXiv Detail & Related papers (2020-03-27T09:25:19Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that the distributions of the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
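As referenced in the "Debiasing Vision-Language Models via Biased Prompts" entry above, the core operation there is projecting biased directions out of the text embeddings. Below is a generic, uncalibrated sketch of such a projection step; the bias directions B are assumed to be given, and the calibration described in that paper is not reproduced.

```python
# Generic sketch of removing "biased" directions from embeddings via an
# orthogonal projection; the calibrated projection matrix from the paper is
# replaced here by a plain projector built from assumed bias directions
# (columns of bias_dirs).
import torch


def project_out(embeddings: torch.Tensor, bias_dirs: torch.Tensor) -> torch.Tensor:
    """embeddings: (n, d); bias_dirs: (d, k). Returns the embeddings with the
    span of bias_dirs removed from every row."""
    q, _ = torch.linalg.qr(bias_dirs)  # orthonormal basis of the bias subspace
    projector = torch.eye(bias_dirs.shape[0], dtype=bias_dirs.dtype) - q @ q.T  # P = I - Q Q^T
    return embeddings @ projector  # P is symmetric, so right-multiplication suffices
```

One plausible use is applying project_out to class-name text embeddings before zero-shot classification; the calibrated projection in that paper is more involved than this sketch.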