Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2007.09257v1
- Date: Fri, 17 Jul 2020 22:05:09 GMT
- Title: Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation
- Authors: Xingchao Peng, Yichen Li, Kate Saenko
- Abstract summary: Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains.
- Score: 56.94873619509414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional unsupervised domain adaptation (UDA) studies the knowledge
transfer between a limited number of domains. This neglects the more practical
scenario where data are distributed in numerous different domains in the real
world. The domain similarity between those domains is critical for domain
adaptation performance. To describe and learn relations between different
domains, we propose a novel Domain2Vec model to provide vectorial
representations of visual domains based on joint learning of feature
disentanglement and Gram matrix. To evaluate the effectiveness of our
Domain2Vec model, we create two large-scale cross-domain benchmarks. The first
one is TinyDA, which contains 54 domains and about one million MNIST-style
images. The second benchmark is DomainBank, which is collected from 56 existing
vision datasets. We demonstrate that our embedding is capable of predicting
domain similarities that match our intuition about visual relations between
different domains. Extensive experiments are conducted to demonstrate the power
of our new datasets in benchmarking state-of-the-art multi-source domain
adaptation methods, as well as the advantage of our proposed model.
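The abstract describes domain embeddings built from feature statistics via a Gram matrix. As an illustrative sketch only (not the paper's actual architecture), a Gram-matrix-based domain vector can be computed from per-domain features and compared with cosine similarity; the function names and feature arrays below are hypothetical, and the disentanglement step is assumed to have already produced the features.

```python
import numpy as np

def gram_embedding(features):
    """Flatten the Gram matrix of a batch of feature vectors into a
    domain embedding. `features` is an (n_samples, d) array of
    (assumed) disentangled domain-specific features."""
    f = features - features.mean(axis=0)           # center the features
    gram = f.T @ f / f.shape[0]                    # (d, d) Gram matrix
    iu = np.triu_indices(gram.shape[0])            # Gram is symmetric,
    v = gram[iu]                                   # keep upper triangle
    return v / (np.linalg.norm(v) + 1e-8)          # unit-normalize

def domain_similarity(feats_a, feats_b):
    """Cosine similarity between two domain embeddings."""
    return float(gram_embedding(feats_a) @ gram_embedding(feats_b))

# Toy usage: a slightly perturbed copy of a domain stays highly similar.
rng = np.random.default_rng(0)
feats_a = rng.normal(size=(256, 16))
feats_b = feats_a + rng.normal(scale=0.1, size=(256, 16))
sim = domain_similarity(feats_a, feats_b)  # close to 1 for similar domains
```

Embeddings like this make "domain similarity" a concrete quantity, which is what lets the benchmarks above rank visual relations between domains.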
Related papers
- DomainVerse: A Benchmark Towards Real-World Distribution Shifts For
Tuning-Free Adaptive Domain Generalization [27.099706316752254]
We establish a novel dataset, DomainVerse, for Adaptive Domain Generalization (ADG).
Benefiting from the introduced hierarchical definition of domain shifts, DomainVerse consists of about 0.5 million images from 390 fine-grained realistic domains.
We propose two methods called Domain CLIP and Domain++ CLIP for tuning-free adaptive domain generalization.
arXiv Detail & Related papers (2024-03-05T07:10:25Z)
- DAOT: Domain-Agnostically Aligned Optimal Transport for Domain-Adaptive Crowd Counting [35.83485358725357]
Domain adaptation is commonly employed in crowd counting to bridge the domain gaps between different datasets.
Existing domain adaptation methods tend to focus on inter-dataset differences while overlooking the intra-differences within the same dataset.
We propose a Domain-agnostically Aligned Optimal Transport (DAOT) strategy that aligns domain-agnostic factors between domains.
arXiv Detail & Related papers (2023-08-10T02:59:40Z)
- M2D2: A Massively Multi-domain Language Modeling Dataset [76.13062203588089]
We present M2D2, a fine-grained, massively multi-domain corpus for studying domain adaptation of language models (LMs).
Using categories derived from Wikipedia and ArXiv, we organize the domains in each data source into 22 groups.
We show the benefits of adapting the LM along a domain hierarchy; adapting to smaller amounts of fine-grained domain-specific data can lead to larger in-domain performance gains.
arXiv Detail & Related papers (2022-10-13T21:34:52Z)
- Efficient Hierarchical Domain Adaptation for Pretrained Language Models [77.02962815423658]
Generative language models are trained on diverse, general domain corpora.
We introduce a method to scale domain adaptation to many diverse domains using a computationally efficient adapter approach.
arXiv Detail & Related papers (2021-12-16T11:09:29Z)
- Provable Adaptation across Multiway Domains via Representation Learning [41.40595345884889]
This paper studies zero-shot domain adaptation where each domain is indexed on a multi-dimensional array.
We propose a model which consists of a domain-invariant latent representation layer and a domain-specific linear prediction layer with a low-rank tensor structure.
arXiv Detail & Related papers (2021-06-12T01:15:23Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have far less annotated data in the target domain than in the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Adapting Object Detectors with Conditional Domain Normalization [38.13526570506076]
Conditional Domain Normalization (CDN) is designed to encode different domain inputs into a shared latent space.
We incorporate CDN into various convolution stages of an object detector to adaptively address domain shifts in representations at different levels.
Tests show that CDN outperforms existing methods remarkably on both real-to-real and synthetic-to-real adaptation benchmarks.
arXiv Detail & Related papers (2020-03-16T08:27:29Z)
- Multi-Source Domain Adaptation for Text Classification via DistanceNet-Bandits [101.68525259222164]
We present a study of various distance-based measures in the context of NLP tasks that characterize the dissimilarity between domains based on sample estimates.
We develop a DistanceNet model which uses these distance measures as an additional loss function to be minimized jointly with the task's loss function.
We extend this model to a novel DistanceNet-Bandit model, which employs a multi-armed bandit controller to dynamically switch between multiple source domains.
arXiv Detail & Related papers (2020-01-13T15:53:41Z)
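The DistanceNet-Bandit entry above uses a multi-armed bandit controller to switch between source domains during training. As a minimal sketch of that idea (using the standard UCB1 algorithm; the paper's actual controller and reward may differ), each arm stands for a candidate source domain and the reward is a hypothetical validation score after training on it:

```python
import math
import random

class UCB1:
    """Minimal UCB1 bandit: each arm is a candidate source domain."""
    def __init__(self, n_arms):
        self.counts = [0] * n_arms     # times each arm was played
        self.values = [0.0] * n_arms   # running mean reward per arm

    def select(self):
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm             # play every arm once first
        total = sum(self.counts)
        # Mean reward plus an exploration bonus that shrinks with plays.
        ucb = [v + math.sqrt(2 * math.log(total) / c)
               for v, c in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n

# Toy simulation: domain 2 is the most useful source domain,
# so the controller should spend most of its budget there.
random.seed(0)
true_reward = [0.3, 0.5, 0.8]          # synthetic per-domain rewards
bandit = UCB1(3)
for _ in range(500):
    arm = bandit.select()
    bandit.update(arm, true_reward[arm] + random.gauss(0, 0.1))
```

In the paper's setting the reward would come from held-out task performance rather than a fixed synthetic value; the controller logic is otherwise the same shape.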
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.