Augmentation-based Domain Generalization for Semantic Segmentation
- URL: http://arxiv.org/abs/2304.12122v1
- Date: Mon, 24 Apr 2023 14:26:53 GMT
- Title: Augmentation-based Domain Generalization for Semantic Segmentation
- Authors: Manuel Schwonberg, Fadoua El Bouazati, Nico M. Schmidt, Hanno
Gottschalk
- Abstract summary: Unsupervised Domain Adaptation (UDA) and domain generalization (DG) aim to tackle the lack of generalization of Deep Neural Networks (DNNs) towards unseen domains.
We study the in- and out-of-domain generalization capabilities of simple, rule-based image augmentations like blur, noise, color jitter and many more.
Our experiments confirm the common scientific standard that a combination of multiple different augmentations outperforms single augmentations.
- Score: 2.179313476241343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) and domain generalization (DG) are two
research areas that aim to tackle the lack of generalization of Deep Neural
Networks (DNNs) towards unseen domains. While UDA methods have access to
unlabeled target images, domain generalization does not involve any target data
and only learns generalized features from a source domain. Image-style
randomization or augmentation is a popular approach to improve network
generalization without access to the target domain. Complex methods are often
proposed that disregard the potential of simple image augmentations for
out-of-domain generalization. For this reason, we systematically study the in-
and out-of-domain generalization capabilities of simple, rule-based image
augmentations like blur, noise, color jitter and many more. Based on a full
factorial design of experiment design we provide a systematic statistical
evaluation of augmentations and their interactions. Our analysis provides both
expected and unexpected outcomes. Expected, because our experiments confirm
the common scientific standard that a combination of multiple different
augmentations outperforms single augmentations. Unexpected, because combined
augmentations perform competitively with state-of-the-art domain generalization
approaches, while being significantly simpler and having no training overhead. On
the challenging synthetic-to-real domain shift between Synthia and Cityscapes
we reach 39.5% mIoU compared to 40.9% mIoU of the best previous work. When
additionally employing the recent vision transformer architecture DAFormer, we
outperform these benchmarks with a performance of 44.2% mIoU.
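The abstract's core recipe is straightforward: sample from a pool of simple, rule-based augmentations and apply several of them per image. Below is a minimal sketch of that idea in NumPy; the specific augmentations (box blur as a stand-in for Gaussian blur, additive noise, brightness/contrast jitter), their parameter ranges, and the application probability are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: combining simple rule-based augmentations, applied each
# with some probability. Parameter choices here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(img, sigma=10.0):
    """Additive Gaussian noise, clipped to the valid intensity range."""
    noisy = img + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0, 255)

def color_jitter(img, brightness=0.2, contrast=0.2):
    """Random brightness and contrast scaling per image."""
    b = 1.0 + rng.uniform(-brightness, brightness)
    c = 1.0 + rng.uniform(-contrast, contrast)
    mean = img.mean()
    return np.clip(((img - mean) * c + mean) * b, 0, 255)

def box_blur(img, k=3):
    """Simple k x k box blur per channel (stand-in for Gaussian blur)."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def augment(img, p=0.5):
    """Apply each augmentation independently with probability p."""
    for fn in (box_blur, gaussian_noise, color_jitter):
        if rng.random() < p:
            img = fn(img)
    return img

image = rng.uniform(0, 255, size=(64, 64, 3))
augmented = augment(image)
```

For semantic segmentation, note that all three of these augmentations are photometric, so the label map stays untouched; geometric augmentations would require transforming the labels as well.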
Related papers
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but is still far from well-studied.
Disentangled Masked Autoencoders (DisMAE) aims to discover disentangled representations that faithfully reveal intrinsic features.
DisMAE co-trains the asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z)
- Unified Domain Adaptive Semantic Segmentation [96.74199626935294]
Unsupervised Domain Adaptive Semantic Segmentation (UDA-SS) aims to transfer the supervision from a labeled source domain to an unlabeled target domain.
We propose a Quad-directional Mixup (QuadMix) method, characterized by tackling distinct point attributes and feature inconsistencies.
Our method outperforms the state-of-the-art works by large margins on four challenging UDA-SS benchmarks.
arXiv Detail & Related papers (2023-11-22T09:18:49Z)
- Improving Generalization with Domain Convex Game [32.07275105040802]
Domain generalization aims to alleviate the poor generalization capability of deep neural networks by learning a model from multiple source domains.
A classical solution to DG is domain augmentation, the common belief of which is that diversifying source domains will be conducive to the out-of-distribution generalization.
Our explorations reveal that the correlation between model generalization and the diversity of domains may not be strictly positive, which limits the effectiveness of domain augmentation.
arXiv Detail & Related papers (2023-03-23T14:27:49Z)
- When Neural Networks Fail to Generalize? A Model Sensitivity Perspective [82.36758565781153]
Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions.
This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG).
We empirically ascertain a property of a model that correlates strongly with its generalization, which we coin "model sensitivity".
We propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies.
arXiv Detail & Related papers (2022-12-01T20:15:15Z)
- Rethinking Data Augmentation for Single-source Domain Generalization in Medical Image Segmentation [19.823497430391413]
We rethink the data augmentation strategy for single-source domain generalization in medical image segmentation.
Motivated by the class-level representation invariance and style mutability of medical images, we hypothesize that unseen target data can be sampled from a linear combination of $C$ random variables.
We implement such a strategy with constrained Bézier transformation on both global and local (i.e. class-level) regions.
As an important contribution, we prove theoretically that our proposed augmentation leads to an upper bound of the generalization risk on the unseen target domain.
arXiv Detail & Related papers (2022-11-27T12:05:33Z)
- AADG: Automatic Augmentation for Domain Generalization on Retinal Image Segmentation [1.0452185327816181]
We propose a data manipulation-based domain generalization method, called Automated Augmentation for Domain Generalization (AADG).
Our AADG framework can effectively sample data augmentation policies that generate novel domains.
Our proposed AADG exhibits state-of-the-art generalization performance and outperforms existing approaches.
arXiv Detail & Related papers (2022-07-27T02:26:01Z)
- Improving Diversity with Adversarially Learned Transformations for Domain Generalization [81.26960899663601]
We present a novel framework that uses adversarially learned transformations (ALT) using a neural network to model plausible, yet hard image transformations.
We show that ALT can naturally work with existing diversity modules to produce highly distinct and large transformations of the source domain, leading to state-of-the-art performance.
arXiv Detail & Related papers (2022-06-15T18:05:24Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Reappraising Domain Generalization in Neural Networks [8.06370138649329]
Domain generalization (DG) of machine learning algorithms is defined as their ability to learn a domain agnostic hypothesis from multiple training distributions.
We find that a straightforward Empirical Risk Minimization (ERM) baseline consistently outperforms existing DG methods.
We propose a classwise-DG formulation, where for each class, we randomly select one of the domains and keep it aside for testing.
arXiv Detail & Related papers (2021-10-15T10:06:40Z)
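The classwise-DG protocol described in the last entry (for each class, randomly hold out one domain for testing) can be sketched in a few lines. This is a hedged illustration of the splitting idea only; the sample layout and function name below are assumptions, not the paper's code.

```python
# Hedged sketch of a classwise-DG split: for each class, one randomly
# chosen domain is held out for testing; the rest go to training.
import random

def classwise_dg_split(samples, seed=0):
    """samples: list of (domain, label, payload) tuples.
    Returns (train, test) lists under the classwise-DG protocol."""
    rng = random.Random(seed)
    # Collect the set of domains in which each class appears.
    domains_per_class = {}
    for domain, label, _ in samples:
        domains_per_class.setdefault(label, set()).add(domain)
    # Hold out one random domain per class (sorted for determinism).
    held_out = {label: rng.choice(sorted(domains))
                for label, domains in domains_per_class.items()}
    train, test = [], []
    for sample in samples:
        domain, label, _ = sample
        (test if held_out[label] == domain else train).append(sample)
    return train, test

data = [("photo", "dog", 0), ("sketch", "dog", 1),
        ("photo", "cat", 2), ("sketch", "cat", 3)]
train, test = classwise_dg_split(data)
```

Because the held-out domain differs per class, the test set probes whether the model generalizes to unseen domain-class combinations rather than to a single unseen domain.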
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.