Cross Domain Generative Augmentation: Domain Generalization with Latent
Diffusion Models
- URL: http://arxiv.org/abs/2312.05387v1
- Date: Fri, 8 Dec 2023 21:52:00 GMT
- Authors: Sobhan Hemati, Mahdi Beitollahi, Amir Hossein Estiri, Bassel Al Omari,
Xi Chen, Guojun Zhang
- Abstract summary: Cross Domain Generative Augmentation (CDGA) generates synthetic images to fill the gap between all domains.
We show that CDGA outperforms SOTA DG methods on the DomainBed benchmark.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the huge effort in developing novel regularizers for Domain
Generalization (DG), adding simple data augmentation to vanilla ERM, a
practical implementation of the Vicinal Risk Minimization (VRM) principle
(Chapelle et al., 2000), outperforms or stays competitive with many of the
proposed regularizers. VRM reduces the estimation error of ERM by replacing
the point-wise kernel estimates with a more precise estimate of the true data
distribution, which reduces the gap between data points within each
domain. However, in the DG setting, the estimation error of the true data
distribution by ERM is mainly caused by the distribution shift between
domains, which cannot be fully addressed by simple data augmentation techniques
within each domain. Inspired by this limitation of VRM, we propose a novel data
augmentation named Cross Domain Generative Augmentation (CDGA) that replaces
the point-wise kernel estimates in ERM with new density estimates in the
vicinity of domain pairs so that the gap between domains is further
reduced. To this end, CDGA, which is built upon latent diffusion models (LDM),
generates synthetic images to fill the gap between all domains and, as a result,
reduces the non-IIDness. We show that CDGA outperforms SOTA DG methods on
the DomainBed benchmark. To explain the effectiveness of CDGA, we generate more
than 5 million synthetic images and perform extensive ablation studies,
including data scaling laws, distribution visualization, domain shift
quantification, adversarial robustness, and loss landscape analysis.
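The vicinal-risk view above can be illustrated with a toy NumPy sketch: ERM's point-wise Dirac estimates are replaced with samples drawn in the vicinity of a domain pair. Here mixup-style linear interpolation stands in for the LDM-generated images of CDGA; the arrays, function name, and lambda sampling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "domains": same label space, shifted feature distributions.
domain_a = rng.normal(loc=0.0, scale=1.0, size=(8, 4))
domain_b = rng.normal(loc=3.0, scale=1.0, size=(8, 4))

def cross_domain_vicinal_samples(xa, xb, n_samples, rng):
    """Draw synthetic points in the vicinity of a domain pair by
    interpolating a random example of one domain toward the other."""
    ia = rng.integers(0, len(xa), size=n_samples)
    ib = rng.integers(0, len(xb), size=n_samples)
    lam = rng.uniform(0.0, 1.0, size=(n_samples, 1))
    return lam * xa[ia] + (1.0 - lam) * xb[ib]

synthetic = cross_domain_vicinal_samples(domain_a, domain_b, 32, rng)
print(synthetic.shape)  # (32, 4)
# The synthetic points lie between the two domain distributions,
# shrinking the gap an ERM-trained model has to bridge.
```

In CDGA proper, this interpolation step is replaced by a latent diffusion model that renders an image from one domain in the style of another, so the filled vicinity lives in image space rather than in a linear feature space.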
Related papers
- Gradually Vanishing Gap in Prototypical Network for Unsupervised Domain Adaptation [32.58201185195226]
We propose an efficient UDA framework named Gradually Vanishing Gap in Prototypical Network (GVG-PN).
Our model achieves transfer learning from both global and local perspectives.
Experiments on several UDA benchmarks validated that the proposed GVG-PN can clearly outperform the SOTA models.
arXiv Detail & Related papers (2024-05-28T03:03:32Z) - Gradual Domain Adaptation: Theory and Algorithms [15.278170387810409]
Unsupervised domain adaptation (UDA) adapts a model from a labeled source domain to an unlabeled target domain in a one-off way.
In this work, we first theoretically analyze gradual self-training, a popular GDA algorithm, and provide a significantly improved generalization bound.
We propose Generative Gradual Domain Adaptation with Optimal Transport (GOAT).
arXiv Detail & Related papers (2023-10-20T23:02:08Z) - Unsupervised Domain Adaptation via Domain-Adaptive Diffusion [31.802163238282343]
Unsupervised Domain Adaptation (UDA) is quite challenging due to the large distribution discrepancy between the source domain and the target domain.
Inspired by diffusion models, which can gradually convert data distributions across a large gap, we explore the diffusion technique to handle the challenging UDA task.
Our method outperforms the current state of the art by a large margin on three widely used UDA datasets.
arXiv Detail & Related papers (2023-08-26T14:28:18Z) - Domain Re-Modulation for Few-Shot Generative Domain Adaptation [71.47730150327818]
Generative Domain Adaptation (GDA) involves transferring a pre-trained generator from one domain to a new domain using only a few reference images.
Inspired by the way human brains acquire knowledge in new domains, we present an innovative generator structure called Domain Re-Modulation (DoRM).
DoRM not only meets the criteria of high quality, large synthesis diversity, and cross-domain consistency, but also incorporates memory and domain association.
arXiv Detail & Related papers (2023-02-06T03:55:35Z) - DAPDAG: Domain Adaptation via Perturbed DAG Reconstruction [78.76115370275733]
We learn an auto-encoder that performs inference on population statistics given features and reconstructs a directed acyclic graph (DAG) as an auxiliary task.
The underlying DAG structure is assumed invariant among observed variables whose conditional distributions are allowed to vary across domains, driven by a latent environmental variable $E$.
We train the encoder and decoder jointly in an end-to-end manner and conduct experiments on synthetic and real datasets with mixed variables.
arXiv Detail & Related papers (2022-08-02T11:43:03Z) - On Certifying and Improving Generalization to Unseen Domains [87.00662852876177]
Domain Generalization aims to learn models whose performance remains high on unseen domains encountered at test-time.
It is challenging to evaluate DG algorithms comprehensively using a few benchmark datasets.
We propose a universal certification framework that can efficiently certify the worst-case performance of any DG method.
arXiv Detail & Related papers (2022-06-24T16:29:43Z) - Unsupervised Domain Adaptation for Cardiac Segmentation: Towards
Structure Mutual Information Maximization [0.8959391124399926]
Unsupervised domain adaptation approaches have succeeded in various medical image segmentation tasks.
UDA-VAE++ is an unsupervised domain adaptation framework for cardiac segmentation with a compact loss function lower bound.
Our model outperforms previous state-of-the-art qualitatively and quantitatively.
arXiv Detail & Related papers (2022-04-20T09:10:18Z) - META: Mimicking Embedding via oThers' Aggregation for Generalizable
Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z) - Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy
Minimisation for Multi-modal Cardiac Image Segmentation [10.417009344120917]
We present a novel UDA method for multi-modal cardiac image segmentation.
The proposed method is based on adversarial learning and adapts network features between source and target domain in different spaces.
We validated our method on two cardiac datasets by adapting from the annotated source domain to the unannotated target domain.
arXiv Detail & Related papers (2021-03-15T08:59:44Z) - Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.