Unsupervised Domain Generalization by Learning a Bridge Across Domains
- URL: http://arxiv.org/abs/2112.02300v1
- Date: Sat, 4 Dec 2021 10:25:45 GMT
- Title: Unsupervised Domain Generalization by Learning a Bridge Across Domains
- Authors: Sivan Harary, Eli Schwartz, Assaf Arbelle, Peter Staar, Shady
Abu-Hussein, Elad Amrani, Roei Herzig, Amit Alfassy, Raja Giryes, Hilde
Kuehne, Dina Katabi, Kate Saenko, Rogerio Feris, Leonid Karlinsky
- Abstract summary: The Unsupervised Domain Generalization (UDG) setup has no training supervision in either the source or target domains.
Our approach is based on self-supervised learning of a Bridge Across Domains (BrAD) - an auxiliary bridge domain accompanied by a set of semantics-preserving visual (image-to-image) mappings to BrAD from each of the training domains.
We show how, using an edge-regularized BrAD, our approach achieves significant gains across multiple benchmarks and a range of tasks, including UDG, Few-shot UDA, and unsupervised generalization across multi-domain datasets.
- Score: 78.855606355957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to generalize learned representations across significantly
different visual domains, such as between real photos, clipart, paintings, and
sketches, is a fundamental capacity of the human visual system. In this paper,
different from most cross-domain works that utilize some (or full) source
domain supervision, we approach a relatively new and very practical
Unsupervised Domain Generalization (UDG) setup with no training
supervision in either the source or target domains. Our approach is based on
self-supervised learning of a Bridge Across Domains (BrAD) - an auxiliary
bridge domain accompanied by a set of semantics-preserving visual
(image-to-image) mappings to BrAD from each of the training domains. The BrAD
and mappings to it are learned jointly (end-to-end) with a contrastive
self-supervised representation model that semantically aligns each of the
domains to its BrAD-projection, and hence implicitly drives all the domains
(seen or unseen) to semantically align to each other. In this work, we show how,
using an edge-regularized BrAD, our approach achieves significant gains across
multiple benchmarks and a range of tasks, including UDG, Few-shot UDA, and
unsupervised generalization across multi-domain datasets (including
generalization to unseen domains and classes).
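The core idea of the abstract - semantically aligning each domain to its projection in an edge-like bridge domain via a contrastive objective - can be illustrated with a minimal toy sketch. Everything below is an assumption for illustration only: a finite-difference gradient magnitude stands in for the learned image-to-BrAD mapping, a random linear projection stands in for the learned encoder, and a standard InfoNCE loss stands in for the paper's contrastive objective.

```python
import numpy as np

def edge_map(img):
    """Toy stand-in for the learned image-to-BrAD mapping:
    finite-difference gradient magnitudes as a crude edge map."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return np.sqrt(gx**2 + gy**2)

def embed(x, W):
    """Toy encoder: flatten, project with shared weights, L2-normalize."""
    v = W @ x.ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def info_nce(anchors, positives, temperature=0.1):
    """Contrastive (InfoNCE) loss: each image embedding should be
    most similar to its own bridge-domain projection, and dissimilar
    to the projections of the other images in the batch."""
    logits = anchors @ positives.T / temperature    # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))              # pull matched pairs together

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32 * 32))   # shared toy encoder weights
images = rng.random((4, 32, 32))         # a small batch from one domain
z_img = np.stack([embed(x, W) for x in images])
z_brad = np.stack([embed(edge_map(x), W) for x in images])
loss = info_nce(z_img, z_brad)
print(f"contrastive alignment loss: {loss:.4f}")
```

In the paper, minimizing such a loss jointly over all training domains is what implicitly drives the domains to align with one another, since each is pulled toward the same shared bridge domain.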
Related papers
- Domain Generalization for Domain-Linked Classes [8.738092015092207]
In the real world, classes may often be domain-linked, i.e., expressed only in a specific domain.
We propose a Fair and cONtrastive feature-space regularization algorithm for Domain-linked DG, FOND.
arXiv Detail & Related papers (2023-06-01T16:39:50Z)
- INDIGO: Intrinsic Multimodality for Domain Generalization [26.344372409315177]
We study how multimodal information can be leveraged in an "intrinsic" way to make systems generalize under unseen domains.
We propose IntriNsic multimodality for DomaIn GeneralizatiOn (INDIGO).
arXiv Detail & Related papers (2022-06-13T05:41:09Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Structured Latent Embeddings for Recognizing Unseen Classes in Unseen Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z)
- Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study a novel and practical problem of Open Domain Generalization (OpenDG).
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z)
- Unsupervised Wasserstein Distance Guided Domain Adaptation for 3D Multi-Domain Liver Segmentation [14.639633860575621]
Unsupervised domain adaptation aims to improve network performance when applying robust models trained on medical images from source domains to a new target domain.
We present an approach based on the Wasserstein distance guided disentangled representation to achieve 3D multi-domain liver segmentation.
arXiv Detail & Related papers (2020-09-06T23:48:27Z)
- Latent Normalizing Flows for Many-to-Many Cross-Domain Mappings [76.85673049332428]
Learned joint representations of images and text form the backbone of several important cross-domain tasks such as image captioning.
We propose a novel semi-supervised framework, which models shared information between domains and domain-specific information separately.
We demonstrate the effectiveness of our model on diverse tasks, including image captioning and text-to-image synthesis.
arXiv Detail & Related papers (2020-02-16T19:49:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.