A Fourier-based Framework for Domain Generalization
- URL: http://arxiv.org/abs/2105.11120v1
- Date: Mon, 24 May 2021 06:50:30 GMT
- Title: A Fourier-based Framework for Domain Generalization
- Authors: Qinwei Xu, Ruipeng Zhang, Ya Zhang, Yanfeng Wang, Qi Tian
- Abstract summary: Domain generalization aims at tackling this problem by learning transferable knowledge from multiple source domains in order to generalize to unseen target domains.
This paper introduces a novel Fourier-based perspective for domain generalization.
Experiments on three benchmarks have demonstrated that the proposed method is able to achieve state-of-the-art performance for domain generalization.
- Score: 82.54650565298418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern deep neural networks suffer from performance degradation when
evaluated on test data whose distribution differs from that of the training data.
Domain generalization aims at tackling this problem by learning transferable
knowledge from multiple source domains in order to generalize to unseen target
domains. This paper introduces a novel Fourier-based perspective for domain
generalization. The main assumption is that the Fourier phase information
contains high-level semantics and is not easily affected by domain shifts. To
force the model to capture phase information, we develop a novel Fourier-based
data augmentation strategy called amplitude mix which linearly interpolates
between the amplitude spectrums of two images. A dual-formed consistency loss
called co-teacher regularization is further introduced between the predictions
induced from original and augmented images. Extensive experiments on three
benchmarks have demonstrated that the proposed method is able to achieve
state-of-the-art performance for domain generalization.
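The amplitude mix augmentation described above can be sketched with numpy's FFT routines. This is a minimal illustration, not the authors' implementation: the function name, the single-channel input, and the choice to keep the first image's phase are assumptions for clarity.

```python
import numpy as np

def amplitude_mix(img_a, img_b, lam=0.5):
    """Mix the Fourier amplitude spectrums of two images while keeping
    img_a's phase spectrum (a sketch of the amplitude mix idea)."""
    fft_a = np.fft.fft2(img_a)
    fft_b = np.fft.fft2(img_b)
    amp_a, pha_a = np.abs(fft_a), np.angle(fft_a)
    amp_b = np.abs(fft_b)
    # Linearly interpolate the two amplitude spectrums.
    amp_mix = lam * amp_a + (1.0 - lam) * amp_b
    # Recombine the mixed amplitude with the original phase.
    mixed = amp_mix * np.exp(1j * pha_a)
    return np.real(np.fft.ifft2(mixed))
```

Because the phase of `img_a` is preserved, the augmented image keeps its high-level semantics while its amplitude (style) statistics drift toward `img_b`, which is exactly what forces the model to rely on phase information.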
Related papers
- Domain Generalization with Fourier Transform and Soft Thresholding [10.50210846364862]
Domain generalization aims to train models on multiple source domains so that they can generalize well to unseen target domains.
To overcome this limitation, we introduce a soft-thresholding function in the Fourier domain.
The innovative nature of the soft thresholding fused with Fourier-transform-based domain generalization improves neural network models' performance.
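Soft thresholding in the Fourier domain, as this paper combines with Fourier-based domain generalization, can be sketched as shrinking the amplitude spectrum toward zero. The function name and the scalar threshold `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fourier_soft_threshold(img, tau):
    """Apply soft thresholding to the Fourier amplitude of an image,
    leaving the phase untouched. tau is a threshold hyperparameter."""
    spec = np.fft.fft2(img)
    amp, pha = np.abs(spec), np.angle(spec)
    # Soft thresholding: shrink each amplitude by tau, clipping at zero.
    amp_st = np.maximum(amp - tau, 0.0)
    return np.real(np.fft.ifft2(amp_st * np.exp(1j * pha)))
```

Shrinking small amplitude coefficients suppresses weak, domain-specific frequency components while preserving the phase that carries semantics.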
arXiv Detail & Related papers (2023-09-18T15:28:09Z)
- FAN-Net: Fourier-Based Adaptive Normalization For Cross-Domain Stroke Lesion Segmentation [17.150527504559594]
We propose a novel FAN-Net, a U-Net-based segmentation network incorporated with a Fourier-based adaptive normalization (FAN)
The experimental results on the ATLAS dataset, which consists of MR images from 9 sites, show the superior performance of the proposed FAN-Net compared with baseline methods.
arXiv Detail & Related papers (2023-04-23T06:58:21Z)
- Domain Generalisation via Domain Adaptation: An Adversarial Fourier Amplitude Approach [13.642506915023871]
We adversarially synthesise the worst-case target domain and adapt a model to that worst-case domain.
On the DomainBedNet dataset, the proposed approach yields significantly improved domain generalisation performance.
arXiv Detail & Related papers (2023-02-23T14:19:07Z)
- Synthetic-to-Real Domain Generalized Semantic Segmentation for 3D Indoor Point Clouds [69.64240235315864]
This paper introduces the synthetic-to-real domain generalization setting to this task.
The domain gap between synthetic and real-world point cloud data mainly lies in the different layouts and point patterns.
Experiments on the synthetic-to-real benchmark demonstrate that both CINMix and multi-prototypes can narrow the distribution gap.
arXiv Detail & Related papers (2022-12-09T05:07:43Z)
- Deep Fourier Up-Sampling [100.59885545206744]
Unlike spatial up-sampling, up-sampling in the Fourier domain is more challenging as it does not follow such a local property.
We propose a theoretically sound Deep Fourier Up-Sampling (FourierUp) to solve these issues.
arXiv Detail & Related papers (2022-10-11T06:17:31Z)
- Explicit Use of Fourier Spectrum in Generative Adversarial Networks [0.0]
We show that there is a dissimilarity between the spectrum of authentic images and fake ones.
We propose a new model to reduce the discrepancies between the spectrum of the actual and fake images.
We experimentally show promising improvements in the quality of the generated images.
arXiv Detail & Related papers (2022-08-02T06:26:44Z)
- Domain Generalization via Frequency-based Feature Disentanglement and Interaction [23.61154228837516]
Domain generalization aims at mining domain-irrelevant knowledge from multiple source domains.
We introduce (i) an encoder-decoder structure for high-frequency and low-frequency feature disentangling, and (ii) an information interaction mechanism that ensures helpful knowledge from both parts can cooperate effectively.
The proposed method obtains state-of-the-art results on three widely used domain generalization benchmarks.
arXiv Detail & Related papers (2022-01-20T07:42:12Z)
- Frequency Spectrum Augmentation Consistency for Domain Adaptive Object Detection [107.52026281057343]
We introduce a Frequency Spectrum Augmentation Consistency (FSAC) framework with four different low-frequency filter operations.
In the first stage, we utilize all the original and augmented source data to train an object detector.
In the second stage, augmented source and target data with pseudo labels are adopted to perform the self-training for prediction consistency.
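A low-frequency filter of the kind FSAC builds its augmentations from can be sketched as a circular low-pass mask on the shifted spectrum. The function name and the hard circular mask are assumptions; the paper uses four different filter operations whose exact forms are not given here.

```python
import numpy as np

def low_frequency_filter(img, radius):
    """Keep only frequency components within `radius` of the spectrum
    center (one plausible low-frequency filter, for illustration)."""
    h, w = img.shape
    # Shift the zero-frequency component to the center of the spectrum.
    spec = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    mask = dist <= radius
    # Zero out high frequencies, then invert the transform.
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))
```

Varying `radius` (or the mask shape) yields a family of low-frequency augmentations on which prediction consistency can be enforced.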
arXiv Detail & Related papers (2021-12-16T04:07:01Z)
- Self-balanced Learning For Domain Generalization [64.99791119112503]
Domain generalization aims to learn a prediction model on multi-domain source data such that the model can generalize to a target domain with unknown statistics.
Most existing approaches have been developed under the assumption that the source data is well-balanced in terms of both domain and class.
We propose a self-balanced domain generalization framework that adaptively learns the weights of losses to alleviate the bias caused by different distributions of the multi-domain source data.
arXiv Detail & Related papers (2021-08-31T03:17:54Z)
- Learning to Learn with Variational Information Bottleneck for Domain Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations by the proposed principle of meta variational information bottleneck, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.