Efficient Domain Generalization via Common-Specific Low-Rank
Decomposition
- URL: http://arxiv.org/abs/2003.12815v2
- Date: Tue, 7 Apr 2020 04:28:06 GMT
- Title: Efficient Domain Generalization via Common-Specific Low-Rank
Decomposition
- Authors: Vihari Piratla, Praneeth Netrapalli, Sunita Sarawagi
- Abstract summary: Domain generalization refers to the task of training a model which generalizes to new domains that are not seen during training.
We present CSD (Common Specific Decomposition), which jointly learns a common component (which generalizes to new domains) and a domain-specific component (which overfits on training domains).
The algorithm is extremely simple and involves only modifying the final linear classification layer of any given neural network architecture.
- Score: 40.98883072715791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization refers to the task of training a model which
generalizes to new domains that are not seen during training. We present CSD
(Common Specific Decomposition) for this setting, which jointly learns a common
component (which generalizes to new domains) and a domain specific component
(which overfits on training domains). The domain specific components are
discarded after training and only the common component is retained. The
algorithm is extremely simple and involves only modifying the final linear
classification layer of any given neural network architecture. We present a
principled analysis to understand existing approaches, provide identifiability
results of CSD, and study the effect of low rank on domain generalization. We show
that CSD either matches or beats state of the art approaches for domain
generalization based on domain erasure, domain perturbed data augmentation, and
meta-learning. Further diagnostics on rotated MNIST, where domains are
interpretable, confirm the hypothesis that CSD successfully disentangles common
and domain specific components and hence leads to better domain generalization.
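The decomposition described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the dimensions, variable names, and the rank-k mixing scheme (W_d = W_common + sum_k gamma[d, k] * W_specific[k]) are assumptions made for illustration, based only on the abstract's description that the final linear layer is split into a shared common part and low-rank domain-specific parts that are discarded at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_classes = 16, 4   # penultimate feature and label dimensions (illustrative)
n_domains, rank = 3, 2          # number of training domains and low-rank size k

# Common final-layer weights, shared across all training domains.
W_common = rng.normal(size=(n_features, n_classes))
# Low-rank basis of k domain-specific classifier directions.
W_specific = rng.normal(size=(rank, n_features, n_classes))
# Per-domain mixing coefficients over the specific basis.
gamma = rng.normal(size=(n_domains, rank))

def domain_weights(d):
    """Effective final-layer weights for training domain d:
    W_d = W_common + sum_k gamma[d, k] * W_specific[k]."""
    return W_common + np.tensordot(gamma[d], W_specific, axes=1)

def logits(x, W):
    return x @ W

x = rng.normal(size=(5, n_features))  # a batch of penultimate-layer features

# During training, examples from domain d are scored with W_d; the specific
# part is discarded afterwards, so unseen domains are scored with W_common.
train_logits = logits(x, domain_weights(0))
test_logits = logits(x, W_common)

# With all mixing coefficients zeroed, the per-domain and common
# classifiers coincide, as the decomposition implies.
gamma[:] = 0.0
assert np.allclose(logits(x, domain_weights(0)), test_logits)
```

The sketch only shows the parameterization of the final layer; the paper's training objective (and its identifiability analysis) determines how W_common and W_specific are actually learned.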
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment
Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments shows the strong performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z) - Domain-aware Triplet loss in Domain Generalization [0.0]
Domain shift is caused by discrepancies in the distributions of the testing and training data.
We design a domain-aware triplet loss for domain generalization to help the model cluster similar semantic features.
Our algorithm is designed to disperse domain information in the embedding space.
arXiv Detail & Related papers (2023-03-01T14:02:01Z) - Domain-General Crowd Counting in Unseen Scenarios [25.171343652312974]
Domain shift across crowd data severely hinders crowd counting models from generalizing to unseen scenarios.
We introduce a dynamic sub-domain division scheme which divides the source domain into multiple sub-domains.
In order to disentangle domain-invariant information from domain-specific information in image features, we design the domain-invariant and -specific crowd memory modules.
arXiv Detail & Related papers (2022-12-05T19:52:28Z) - Single-domain Generalization in Medical Image Segmentation via Test-time
Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates semantic shape prior information of segmentation that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z) - Domain Generalization via Selective Consistency Regularization for Time
Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z) - Localized Adversarial Domain Generalization [83.4195658745378]
Adversarial domain generalization is a popular approach to domain generalization.
We propose localized adversarial domain generalization with space compactness maintenance (LADG).
We conduct comprehensive experiments on the Wilds DG benchmark to validate our approach.
arXiv Detail & Related papers (2022-05-09T08:30:31Z) - Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z) - Structured Latent Embeddings for Recognizing Unseen Classes in Unseen
Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z) - Domain Generalization in Biosignal Classification [37.70077538403524]
This study is the first to investigate domain generalization for biosignal data.
Our proposed method achieves accuracy gains of up to 16% for four completely unseen domains.
arXiv Detail & Related papers (2020-11-12T05:15:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.