Single Domain Generalization via Normalised Cross-correlation Based
Convolutions
- URL: http://arxiv.org/abs/2307.05901v1
- Date: Wed, 12 Jul 2023 04:15:36 GMT
- Authors: WeiQin Chuah, Ruwan Tennakoon, Reza Hoseinnezhad, David Suter, Alireza
Bab-Hadiashar
- Abstract summary: Single Domain Generalization aims to train robust models using data from a single source.
We propose a novel operator called XCNorm that computes the normalized cross-correlation between weights and an input feature patch.
We show that deep neural networks composed of this operator are robust to common semantic distribution shifts.
- Score: 14.306250516592304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning techniques often perform poorly in the presence of domain
shift, where the test data follows a different distribution than the training
data. The most practically desirable approach to address this issue is Single
Domain Generalization (S-DG), which aims to train robust models using data from
a single source. Prior work on S-DG has primarily focused on using data
augmentation techniques to generate diverse training data. In this paper, we
explore an alternative approach by investigating the robustness of linear
operators, such as convolution and dense layers commonly used in deep learning.
We propose a novel operator called XCNorm that computes the normalized
cross-correlation between weights and an input feature patch. This approach is
invariant to both affine shifts and changes in energy within a local feature
patch and eliminates the need for commonly used non-linear activation
functions. We show that deep neural networks composed of this operator are
robust to common semantic distribution shifts. Furthermore, our empirical
results on single-domain generalization benchmarks demonstrate that our
proposed technique performs comparably to the state-of-the-art methods.
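The abstract describes XCNorm only at a high level: for a filter w and a local feature patch x, the layer outputs the normalized cross-correlation ⟨x − x̄, w − w̄⟩ / (‖x − x̄‖ ‖w − w̄‖), which is unchanged when a constant is added to the patch or when the patch is rescaled. Below is a minimal PyTorch sketch reconstructed from that description alone; the class name XCNorm2d, the epsilon guard, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class XCNorm2d(nn.Module):
    """Sketch of a normalised cross-correlation layer (not the authors' code).

    Each response is the cosine similarity between a mean-centred input
    patch and a mean-centred filter, so it is invariant to adding a
    constant to the patch (affine shift) and to rescaling its energy.
    """
    def __init__(self, in_channels, out_channels, kernel_size,
                 stride=1, padding=0, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size))
        self.kernel_size, self.stride, self.padding, self.eps = \
            kernel_size, stride, padding, eps

    def forward(self, x):
        b, _, h, w = x.shape
        # Flatten every local patch into a column: (B, C*k*k, L)
        patches = F.unfold(x, self.kernel_size,
                           padding=self.padding, stride=self.stride)
        patches = patches - patches.mean(dim=1, keepdim=True)  # centre patches
        patches = patches / (patches.norm(dim=1, keepdim=True) + self.eps)
        filt = self.weight.flatten(1)                           # (O, C*k*k)
        filt = filt - filt.mean(dim=1, keepdim=True)            # centre filters
        filt = filt / (filt.norm(dim=1, keepdim=True) + self.eps)
        out = filt @ patches                                    # (B, O, L)
        h_out = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        w_out = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        return out.reshape(b, -1, h_out, w_out)

# Quick check of the claimed invariances: shifting and rescaling the
# input should leave the response (nearly) unchanged. Padding is left
# at 0 so that every patch is shifted uniformly.
layer = XCNorm2d(3, 8, kernel_size=3)
x = torch.randn(2, 3, 32, 32)
print(torch.allclose(layer(x), layer(2.0 * x + 5.0), atol=1e-4))  # True
```

Because each response is a cosine similarity, outputs are already bounded in [-1, 1], which is consistent with the abstract's claim that the commonly used non-linear activation functions can be dropped.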
Related papers
- First-Order Manifold Data Augmentation for Regression Learning [4.910937238451485]
We introduce FOMA: a new data-driven, domain-independent data augmentation method.
We evaluate FOMA on in-distribution generalization and out-of-distribution benchmarks, and we show that it improves the generalization of several neural architectures.
arXiv Detail & Related papers (2024-06-16T12:35:05Z)
- Out of the Ordinary: Spectrally Adapting Regression for Covariate Shift [12.770658031721435]
We propose a method for adapting the weights of the last layer of a pre-trained neural regression model to perform better on input data originating from a different distribution.
We demonstrate how this lightweight spectral adaptation procedure can improve out-of-distribution performance on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-12-29T04:15:58Z)
- Open Domain Generalization with a Single Network by Regularization Exploiting Pre-trained Features [37.518025833882334]
Open Domain Generalization (ODG) is a challenging task, as it deals with both distribution shifts and category shifts.
Previous work has used multiple source-specific networks, which incur a high cost.
This paper proposes a method that can handle ODG using only a single network.
arXiv Detail & Related papers (2023-12-08T16:22:10Z)
- Geometrically Aligned Transfer Encoder for Inductive Transfer in Regression Tasks [5.038936775643437]
We propose a novel transfer technique based on differential geometry, namely the Geometrically Aligned Transfer Encoder (GATE).
We find a proper diffeomorphism between pairs of tasks to ensure that every arbitrary point maps to a locally flat coordinate in the overlapping region, allowing the transfer of knowledge from the source to the target data.
GATE outperforms conventional methods and exhibits stable behavior in both the latent space and extrapolation regions for various molecular graph datasets.
arXiv Detail & Related papers (2023-10-10T07:11:25Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study the practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG work, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
arXiv Detail & Related papers (2023-07-25T13:35:45Z)
- CNN Feature Map Augmentation for Single-Source Domain Generalization [6.053629733936548]
Domain Generalization (DG) has gained significant traction during the past few years.
The goal in DG is to produce models that continue to perform well when presented with data distributions different from those available during training.
We propose an alternative regularization technique for convolutional neural network architectures in the single-source DG image classification setting.
arXiv Detail & Related papers (2023-05-26T08:48:17Z)
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when a model is applied to new domains that differ from the ones it was trained on.
We propose a new approach called D$^3$G to learn domain-specific models.
Our results show that D$^3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as a constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).
Based on this formulation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the setting of fully decentralized computation.
We theoretically analyze the method's convergence rate in the strongly monotone, monotone, and non-monotone settings.
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information shown here and is not responsible for any consequences arising from its use.