ADRMX: Additive Disentanglement of Domain Features with Remix Loss
- URL: http://arxiv.org/abs/2308.06624v1
- Date: Sat, 12 Aug 2023 17:52:21 GMT
- Title: ADRMX: Additive Disentanglement of Domain Features with Remix Loss
- Authors: Berker Demirel, Erchan Aptoula and Huseyin Ozkan
- Abstract summary: Domain generalization aims to create robust models capable of generalizing to new unseen domains.
In this work, a novel architecture named Additive Disentanglement of Domain Features with Remix Loss is presented.
- Score: 7.206800397427553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The common assumption that train and test sets follow similar distributions
is often violated in deployment settings. Given multiple source domains, domain
generalization aims to create robust models capable of generalizing to new
unseen domains. To this end, most existing studies focus on extracting
domain invariant features across the available source domains in order to
mitigate the effects of inter-domain distributional changes. However, this
approach may limit the model's generalization capacity by relying solely on
finding common features among the source domains. It overlooks domain-specific
characteristics that may be prevalent in a subset of domains and may carry
valuable information. In this work, a novel
architecture named Additive Disentanglement of Domain Features with Remix Loss
(ADRMX) is presented, which addresses this limitation by incorporating domain
variant features together with the domain invariant ones using an original
additive disentanglement strategy. Moreover, a new data augmentation technique
is introduced to further support the generalization capacity of ADRMX, where
samples from different domains are mixed within the latent space. Through
extensive experiments conducted on DomainBed under fair conditions, ADRMX is
shown to achieve state-of-the-art performance. Code will be made available at
GitHub after the revision process.
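Since the code is not yet released, the following is only a minimal PyTorch sketch of the two ideas the abstract describes: an additive split of each sample's representation into a domain-invariant part plus a domain-variant remainder, and a "remix" augmentation that recombines those parts across domains in the latent space. All module names, shapes, and the exact form of the remix term are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdditiveDisentangler(nn.Module):
    """Sketch: split a backbone feature into a domain-invariant part and a
    domain-variant remainder that sum back to the full representation."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                      # any feature extractor (assumed)
        self.invariant_head = nn.Linear(feat_dim, feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.backbone(x)                          # full latent feature
        z_inv = self.invariant_head(z)                # domain-invariant component
        z_var = z - z_inv                             # additive remainder = domain-variant component
        return z, z_inv, z_var


def remix_loss(model, z_inv, z_var, labels, domains):
    """Sketch of a latent-space remix: pair each sample's invariant part with
    the domain-variant part of a sample from another domain and require the
    label classifier to still predict the original label."""
    perm = torch.randperm(z_var.size(0), device=z_var.device)
    mask = domains != domains[perm]                   # keep only cross-domain pairs (assumed choice)
    if not mask.any():
        return z_inv.new_zeros(())
    z_mix = z_inv[mask] + z_var[perm][mask]           # remix in the latent space
    return F.cross_entropy(model.classifier(z_mix), labels[mask])
```

In training, this term would presumably be added to a standard classification loss on the disentangled features (and possibly a domain-prediction term on the variant part), but the actual loss composition and weighting used by ADRMX are not specified in the abstract.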
Related papers
- Aggregation of Disentanglement: Reconsidering Domain Variations in
Domain Generalization [9.577254317971933]
We argue that domain variations also contain useful information, i.e., classification-aware information, for downstream tasks.
We propose a novel paradigm called Domain Disentanglement Network (DDN) to disentangle the domain expert features from the source domain images.
We also propose a new contrastive learning method to guide the domain expert features to form a more balanced and separable feature space.
arXiv Detail & Related papers (2023-02-05T09:48:57Z) - Domain generalization Person Re-identification on Attention-aware
multi-operation strategery [8.90472129039969]
Domain generalization person re-identification (DG Re-ID) aims to directly deploy a model trained on the source domain to the unseen target domain with good generalization.
In the existing DG Re-ID methods, invariant operations are effective in extracting domain generalization features.
An Attention-aware Multi-operation Strategery (AMS) for DG Re-ID is proposed to extract more generalized features.
arXiv Detail & Related papers (2022-10-19T09:18:46Z) - Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z) - Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels that generate instance-adaptive residuals, adapting domain-agnostic deep features to each individual instance; a rough sketch of this idea appears after this list.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z) - META: Mimicking Embedding via oThers' Aggregation for Generalizable
Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z) - Exploiting Domain-Specific Features to Enhance Domain Generalization [10.774902700296249]
Domain Generalization (DG) aims to train a model, from multiple observed source domains, in order to perform well on unseen target domains.
Prior DG approaches have focused on extracting domain-invariant information across sources to generalize on target domains.
We propose meta-Domain Specific-Domain Invariant (mD) - a novel theoretically sound framework.
arXiv Detail & Related papers (2021-10-18T15:42:39Z) - Adaptive Domain-Specific Normalization for Generalizable Person
Re-Identification [81.30327016286009]
We propose a novel adaptive domain-specific normalization approach (AdsNorm) for generalizable person Re-ID.
arXiv Detail & Related papers (2021-05-07T02:54:55Z) - Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study a novel and practical problem of Open Domain Generalization (OpenDG)
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z) - Generalized Domain Conditioned Adaptation Network [33.13337928537281]
Domain Adaptation (DA) attempts to transfer knowledge learned in labeled source domain to the unlabeled but related target domain.
Recent advances in DA mainly proceed by aligning the source and target distributions.
We develop Generalized Domain Conditioned Adaptation Network (GDCAN) to automatically determine whether domain channel activations should be separately modeled in each attention module.
arXiv Detail & Related papers (2021-03-23T06:24:26Z) - Dual Distribution Alignment Network for Generalizable Person
Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to handle person Re-Identification (Re-ID)
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z) - Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism; a hedged sketch of such a block appears after this list.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
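As flagged in the Dynamic Instance Domain Adaptation entry above, the notion of adaptive convolutional kernels that produce instance-adaptive residuals can be illustrated with a small per-sample dynamic depthwise convolution. The PyTorch module below is only a sketch under assumed shapes and kernel size, not the DIDA-Net architecture.

```python
import torch.nn as nn
import torch.nn.functional as F


class InstanceAdaptiveResidual(nn.Module):
    """Illustrative module: predict a depthwise kernel per sample from its own
    pooled features and add the resulting response as a residual."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        # small controller mapping pooled features to one depthwise kernel per sample
        self.controller = nn.Linear(channels, channels * kernel_size * kernel_size)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)    # (B, C) per-instance statistics
        kernels = self.controller(pooled).view(b * c, 1, self.kernel_size, self.kernel_size)
        # grouped-conv trick: fold the batch into channels so each sample is
        # convolved with its own predicted depthwise kernel
        out = F.conv2d(x.reshape(1, b * c, h, w), kernels,
                       padding=self.kernel_size // 2, groups=b * c)
        return x + out.view(b, c, h, w)                    # instance-adaptive residual
```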
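Similarly, the Domain Conditioned Adaptation Network entries (DCAN and GDCAN) refer to exciting convolutional channels with a domain-conditioned channel attention mechanism. The block below is a rough squeeze-and-excitation-style illustration with one excitation branch per source domain; the branch routing, reduction ratio, and use of hard domain labels are assumptions rather than the published design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainConditionedChannelAttention(nn.Module):
    """Rough sketch: squeeze-and-excitation with one excitation branch per
    domain, so channel weights depend on which domain a sample comes from."""

    def __init__(self, channels: int, num_domains: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 4)
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True),
                          nn.Linear(hidden, channels), nn.Sigmoid())
            for _ in range(num_domains)
        ])

    def forward(self, x, domain_ids):                      # x: (B, C, H, W); domain_ids: (B,)
        squeezed = F.adaptive_avg_pool2d(x, 1).flatten(1)  # (B, C) channel statistics
        weights = torch.stack([self.branches[int(d)](s)    # route each sample to its domain's branch
                               for s, d in zip(squeezed, domain_ids)])
        return x * weights.unsqueeze(-1).unsqueeze(-1)     # channel-wise excitation
```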
This list is automatically generated from the titles and abstracts of the papers in this site.