Domain Generalisation via Domain Adaptation: An Adversarial Fourier
Amplitude Approach
- URL: http://arxiv.org/abs/2302.12047v1
- Date: Thu, 23 Feb 2023 14:19:07 GMT
- Title: Domain Generalisation via Domain Adaptation: An Adversarial Fourier
Amplitude Approach
- Authors: Minyoung Kim, Da Li, Timothy Hospedales
- Abstract summary: We adversarially synthesise the worst-case target domain and adapt a model to that worst-case domain.
On the DomainBed benchmark, which includes the large-scale DomainNet dataset, the proposed approach yields significantly improved domain generalisation performance.
- Score: 13.642506915023871
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We tackle the domain generalisation (DG) problem by posing it as a domain
adaptation (DA) task where we adversarially synthesise the worst-case target
domain and adapt a model to that worst-case domain, thereby improving the
model's robustness. To synthesise data that is challenging yet
semantics-preserving, we generate Fourier amplitude images and combine them
with source domain phase images, exploiting the widely believed conjecture from
signal processing that amplitude spectra mainly determine image style, while
phase data mainly captures image semantics. To synthesise a worst-case domain
for adaptation, we train the classifier and the amplitude generator
adversarially. Specifically, we exploit the maximum classifier discrepancy
(MCD) principle from DA that relates the target domain performance to the
discrepancy of classifiers in the model hypothesis space. By Bayesian
hypothesis modeling, we express the model hypothesis space effectively as a
posterior distribution over classifiers given the source domains, making
adversarial MCD minimisation feasible. On the DomainBed benchmark including the
large-scale DomainNet dataset, the proposed approach yields significantly
improved domain generalisation performance over the state-of-the-art.
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment
Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments demonstrates the strong performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z) - Unsupervised Domain Adaptation via Domain-Adaptive Diffusion [31.802163238282343]
Unsupervised Domain Adaptation (UDA) is quite challenging due to the large distribution discrepancy between the source domain and the target domain.
Inspired by diffusion models, which have a strong capability to gradually convert data distributions across a large gap, we explore the diffusion technique to handle the challenging UDA task.
Our method outperforms the current state of the art by a large margin on three widely used UDA datasets.
arXiv Detail & Related papers (2023-08-26T14:28:18Z) - Cross Contrasting Feature Perturbation for Domain Generalization [11.863319505696184]
Domain generalization aims to learn a robust model from source domains that generalizes well on unseen target domains.
Recent studies focus on generating novel domain samples or features to diversify distributions complementary to source domains.
We propose an online one-stage Cross Contrasting Feature Perturbation framework to simulate domain shift.
arXiv Detail & Related papers (2023-07-24T03:27:41Z) - Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
arXiv Detail & Related papers (2022-10-09T03:41:02Z) - Amplitude Spectrum Transformation for Open Compound Domain Adaptive
Semantic Segmentation [62.68759523116924]
Open compound domain adaptation (OCDA) has emerged as a practical adaptation setting.
We propose a novel feature-space Amplitude Spectrum Transformation (AST).
arXiv Detail & Related papers (2022-02-09T05:40:34Z) - A Fourier-based Framework for Domain Generalization [82.54650565298418]
Domain generalization aims at tackling this problem by learning transferable knowledge from multiple source domains in order to generalize to unseen target domains.
This paper introduces a novel Fourier-based perspective for domain generalization.
Experiments on three benchmarks demonstrate that the proposed method achieves state-of-the-art performance for domain generalization.
arXiv Detail & Related papers (2021-05-24T06:50:30Z) - Generalizable Representation Learning for Mixture Domain Face
Anti-Spoofing [53.82826073959756]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness for unseen scenarios.
We propose domain dynamic adjustment meta-learning (D2AM), which does not require domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z) - Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the recently proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z) - Cross-Domain Latent Modulation for Variational Transfer Learning [1.9212368803706577]
We propose a cross-domain latent modulation mechanism within a variational autoencoder (VAE) framework to enable improved transfer learning.
We apply the proposed model to a number of transfer learning tasks including unsupervised domain adaptation and image-to-image translation.
arXiv Detail & Related papers (2020-12-21T22:45:00Z) - Discrepancy Minimization in Domain Generalization with Generative
Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches attempt to solve domain generalization by learning domain-invariant representations across the source domains, but such representations do not guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method that provides a theoretical guarantee: the target error is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z) - Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation handles the unlabeled target domain by relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)