Frustratingly Simple Domain Generalization via Image Stylization
- URL: http://arxiv.org/abs/2006.11207v2
- Date: Fri, 10 Jul 2020 15:13:11 GMT
- Title: Frustratingly Simple Domain Generalization via Image Stylization
- Authors: Nathan Somavarapu and Chih-Yao Ma and Zsolt Kira
- Abstract summary: Convolutional Neural Networks (CNNs) show impressive performance in the standard classification setting.
CNNs do not readily generalize to new domains with different statistics.
We demonstrate an extremely simple yet effective method, namely correcting this bias by augmenting the dataset with stylized images.
- Score: 27.239024949033496
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional Neural Networks (CNNs) show impressive performance in the
standard classification setting where training and testing data are drawn
i.i.d. from a given domain. However, CNNs do not readily generalize to new
domains with different statistics, a setting that is simple for humans. In this
work, we address the Domain Generalization problem, where the classifier must
generalize to an unknown target domain. Inspired by recent works that have
shown a difference in biases between CNNs and humans, we demonstrate an
extremely simple yet effective method, namely correcting this bias by
augmenting the dataset with stylized images. In contrast with existing
stylization works, which use external data sources such as art, we further
introduce a method that is entirely in-domain using no such extra sources of
data. We provide a detailed analysis as to the mechanism by which the method
works, verifying our claim that it changes the shape/texture bias, and
demonstrate results surpassing or comparable to state-of-the-art approaches
that rely on much more complex methods.
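The augmentation itself is compact enough to sketch. The snippet below is a minimal illustration of the idea, not the authors' released code: `adain` re-normalizes content features with the channel-wise statistics of a style image, and `stylize_batch` borrows styles from other images in the same training batch (the in-domain variant). The names `encoder` and `decoder` are placeholders for a pretrained AdaIN-style encoder/decoder pair and are assumed here.
```python
# Minimal sketch of stylization-based augmentation (illustrative, not the authors' code).
# `encoder` and `decoder` are placeholders for a pretrained AdaIN-style encoder/decoder pair.
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Re-normalize content features with the channel-wise mean/std of the style features."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

def stylize_batch(images: torch.Tensor, encoder, decoder) -> torch.Tensor:
    """In-domain stylization: each image borrows the style of another image drawn
    from the same training batch, so no external art dataset is needed."""
    perm = torch.randperm(images.size(0))
    content = encoder(images)
    style = encoder(images[perm])
    return decoder(adain(content, style))

# Training then mixes original and stylized copies, which keep their original labels:
# images_aug = torch.cat([images, stylize_batch(images, encoder, decoder)], dim=0)
# labels_aug = torch.cat([labels, labels], dim=0)
```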
Related papers
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z) - Domain Generalization Emerges from Dreaming [10.066261691282016]
We propose a new framework to reduce the texture bias of a model by a novel optimization-based data augmentation, dubbed Stylized Dream.
Our framework utilizes adaptive instance normalization (AdaIN) to augment the style of an original image while preserving its content.
We then adopt a regularization loss to predict consistent outputs between Stylized Dream and original images, which encourages the model to learn shape-based representations.
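As an illustration of the consistency regularization described above (assumed details, not the paper's exact loss), the prediction on the stylized view can be pulled toward the prediction on the original image:
```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_orig: torch.Tensor, logits_stylized: torch.Tensor) -> torch.Tensor:
    """KL divergence pulling predictions on the stylized view toward those on the original,
    which discourages reliance on texture cues that stylization destroys."""
    p_orig = F.softmax(logits_orig.detach(), dim=1)      # original view acts as the target
    log_p_sty = F.log_softmax(logits_stylized, dim=1)
    return F.kl_div(log_p_sty, p_orig, reduction="batchmean")

# total = F.cross_entropy(logits_orig, labels) + lam * consistency_loss(logits_orig, logits_sty)
```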
arXiv Detail & Related papers (2023-02-02T09:59:55Z) - Adversarial Style Augmentation for Domain Generalization [41.72506801753435]
We introduce a novel Adversarial Style Augmentation (ASA) method, which explores broader style spaces by generating more effective statistics perturbations.
To facilitate the application of ASA, we design a simple yet effective module, namely AdvStyle, which instantiates the ASA method in a plug-and-play manner.
Our method significantly outperforms its competitors on the PACS dataset under the single source generalization setting.
arXiv Detail & Related papers (2023-01-30T03:52:16Z) - Domain Generalization with MixStyle [120.52367818581608]
Domain generalization aims to address this problem by learning from a set of source domains a model that is generalizable to any unseen domain.
Our method, termed MixStyle, is motivated by the observation that the visual domain is closely related to image style.
MixStyle fits into mini-batch training perfectly and is extremely easy to implement.
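Written out from that description, a MixStyle-like layer is only a few lines; the sketch below is illustrative rather than the official implementation. It mixes the instance-level mean and standard deviation of each sample with those of a randomly paired sample in the mini-batch:
```python
import torch
import torch.nn as nn

class MixStyleLayer(nn.Module):
    """MixStyle-like statistics mixing, inserted after early conv blocks (training only)."""
    def __init__(self, p: float = 0.5, alpha: float = 0.1, eps: float = 1e-6):
        super().__init__()
        self.p, self.eps = p, eps
        self.beta = torch.distributions.Beta(alpha, alpha)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or torch.rand(1).item() > self.p:
            return x
        mu = x.mean(dim=(2, 3), keepdim=True)
        sig = (x.var(dim=(2, 3), keepdim=True) + self.eps).sqrt()
        x_norm = (x - mu) / sig
        # Mix each sample's style statistics with those of a random partner in the batch.
        lam = self.beta.sample((x.size(0), 1, 1, 1)).to(x.device)
        perm = torch.randperm(x.size(0), device=x.device)
        mu_mix = lam * mu + (1 - lam) * mu[perm]
        sig_mix = lam * sig + (1 - lam) * sig[perm]
        return x_norm * sig_mix + mu_mix
```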
arXiv Detail & Related papers (2021-04-05T16:58:09Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z) - Keep it Simple: Image Statistics Matching for Domain Adaptation [0.0]
Domain Adaptation (DA) is a technique to maintain detection accuracy when only unlabeled images of the target domain are available.
Recent state-of-the-art methods try to reduce the domain gap using an adversarial training strategy.
We propose to align either the color histograms or the mean and covariance of the source images with those of the target domain.
In comparison to recent methods, we achieve state-of-the-art performance with a much simpler training procedure.
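A simplified sketch of the mean-and-covariance variant (not the paper's exact procedure): whiten the channel statistics of a source image and re-color them with those of a target image.
```python
import numpy as np

def match_mean_cov(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Align the channel mean and covariance of `source` (H x W x 3, float in [0, 1])
    with those of `target` via a whitening/re-coloring transform."""
    s = source.reshape(-1, 3)
    t = target.reshape(-1, 3)
    mu_s, mu_t = s.mean(axis=0), t.mean(axis=0)
    cov_s = np.cov(s, rowvar=False) + 1e-6 * np.eye(3)
    cov_t = np.cov(t, rowvar=False) + 1e-6 * np.eye(3)
    L_s = np.linalg.cholesky(cov_s)   # cov_s = L_s @ L_s.T
    L_t = np.linalg.cholesky(cov_t)
    # Whiten the source channels, then re-color them with the target factor.
    matched = (s - mu_s) @ np.linalg.inv(L_s).T @ L_t.T + mu_t
    return np.clip(matched.reshape(source.shape), 0.0, 1.0)
```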
arXiv Detail & Related papers (2020-05-26T07:32:09Z) - Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision [73.76277367528657]
Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation.
Since pixel-level annotation is costly, automatically annotated data generated from graphics engines are used to train segmentation models.
We propose a two-step self-supervised domain adaptation approach to minimize the inter-domain and intra-domain gap together.
arXiv Detail & Related papers (2020-04-16T15:24:11Z) - FDA: Fourier Domain Adaptation for Semantic Segmentation [82.4963423086097]
We describe a simple method for unsupervised domain adaptation, whereby the discrepancy between the source and target distributions is reduced by swapping the low-frequency spectrum of one with the other.
We illustrate the method in semantic segmentation, where densely annotated images are plentiful in one domain but difficult to obtain in another.
Our results indicate that even simple procedures can discount nuisance variability in the data that more sophisticated methods struggle to learn away.
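The swap can be written in a few lines of NumPy; the sketch below is reconstructed from the description above, with `beta` controlling the size of the low-frequency window that is exchanged:
```python
import numpy as np

def fda_swap(source: np.ndarray, target: np.ndarray, beta: float = 0.01) -> np.ndarray:
    """Replace the low-frequency amplitude of `source` (H x W x C, float) with that of
    `target`, keeping the source phase, then invert the FFT."""
    fft_src = np.fft.fft2(source, axes=(0, 1))
    fft_trg = np.fft.fft2(target, axes=(0, 1))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)

    # Shift so low frequencies sit at the centre, swap a small central square, shift back.
    amp_src = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_trg = np.fft.fftshift(amp_trg, axes=(0, 1))
    h, w = source.shape[:2]
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = amp_trg[ch - b:ch + b + 1, cw - b:cw + b + 1]
    amp_src = np.fft.ifftshift(amp_src, axes=(0, 1))

    mixed = amp_src * np.exp(1j * pha_src)
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))
```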
arXiv Detail & Related papers (2020-04-11T22:20:48Z) - Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that the distributions of the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z) - A simple baseline for domain adaptation using rotation prediction [17.539027866457673]
The goal is to adapt a model trained in one domain to another domain with scarce annotated data.
We propose a simple yet effective method based on self-supervised learning.
Our simple method achieves state-of-the-art results for semi-supervised domain adaptation on the DomainNet dataset.
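The rotation-prediction pretext task behind such a baseline is easy to sketch (illustrative only; the paper's full recipe also trains a classifier on labeled source data and applies the rotation loss to both domains):
```python
import torch

def make_rotation_batch(images: torch.Tensor):
    """Build a 4-way rotation-prediction batch: every image is rotated by
    0/90/180/270 degrees and labeled with its rotation index."""
    rotations = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    rot_images = torch.cat(rotations, dim=0)
    rot_labels = torch.arange(4).repeat_interleave(images.size(0))
    return rot_images, rot_labels

# The shared backbone minimizes classification loss on labeled source images plus a
# rotation-prediction loss computed on both source and unlabeled target images.
```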
arXiv Detail & Related papers (2019-12-26T17:32:04Z)