DomainVerse: A Benchmark Towards Real-World Distribution Shifts For
Tuning-Free Adaptive Domain Generalization
- URL: http://arxiv.org/abs/2403.02714v1
- Date: Tue, 5 Mar 2024 07:10:25 GMT
- Title: DomainVerse: A Benchmark Towards Real-World Distribution Shifts For
Tuning-Free Adaptive Domain Generalization
- Authors: Feng Hou, Jin Yuan, Ying Yang, Yang Liu, Yang Zhang, Cheng Zhong,
Zhongchao Shi, Jianping Fan, Yong Rui and Zhiqiang He
- Abstract summary: We establish a novel dataset, DomainVerse, for Adaptive Domain Generalization (ADG).
Benefiting from the introduced hierarchical definition of domain shifts, DomainVerse consists of about 0.5 million images from 390 fine-grained realistic domains.
We propose two methods called Domain CLIP and Domain++ CLIP for tuning-free adaptive domain generalization.
- Score: 27.099706316752254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional cross-domain tasks, including domain adaptation and domain
generalization, rely heavily on training models with source-domain data. With
the recent advance of vision-language models (VLMs), which can be viewed as
natural source models, the cross-domain task shifts to directly adapting the
pre-trained source model to arbitrary target domains equipped with prior domain
knowledge; we name this task Adaptive Domain Generalization (ADG). However,
current cross-domain datasets have many limitations, such as unrealistic
domains, unclear domain definitions, and a lack of fine-grained domain
decomposition, which drives us to establish DomainVerse, a novel dataset for
ADG. Benefiting from the introduced hierarchical definition of domain shifts,
DomainVerse consists of about 0.5 million images from 390 fine-grained
realistic domains. With the help of the constructed DomainVerse and VLMs, we
propose two methods called Domain CLIP and Domain++ CLIP for tuning-free
adaptive domain generalization. Extensive and comprehensive experiments
demonstrate the significance of the dataset and the effectiveness of the
proposed methods.
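Since the central claim is that adaptation becomes tuning-free once a VLM such as CLIP is used as the source model, a minimal sketch may help make this concrete: it builds zero-shot classifiers from class prompts augmented with a textual domain description and classifies target images without any gradient updates. The prompt template, class list, and `domain_description` string are illustrative assumptions, not the authors' Domain CLIP or Domain++ CLIP implementation.

```python
# A minimal sketch of tuning-free, domain-aware zero-shot classification with
# CLIP. This is NOT the authors' Domain CLIP / Domain++ CLIP code; the prompt
# template, class list, and domain description below are illustrative
# assumptions only.
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["car", "bicycle", "pedestrian"]        # hypothetical classes
domain_description = "at night, in heavy rain"        # prior domain knowledge

# One text classifier per class, conditioned on the target-domain description.
prompts = [f"a photo of a {c}, {domain_description}" for c in class_names]
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize(prompts).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

def classify(image_pil):
    """Zero-shot prediction for one PIL image; no parameters are updated."""
    image = preprocess(image_pil).unsqueeze(0).to(device)
    with torch.no_grad():
        img = model.encode_image(image)
        img = img / img.norm(dim=-1, keepdim=True)
        logits = 100.0 * img @ text_features.T        # scaled cosine similarity
    return class_names[logits.argmax(dim=-1).item()]
```

Under this reading, "adaptation" reduces to editing the text prompts with prior domain knowledge, which is why no target-domain fine-tuning is required.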
Related papers
- Domain Generalization via Selective Consistency Regularization for Time
Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains (see the sketch after this entry).
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
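The summary above does not describe the paper's selection rule, so the following is only a loose illustration of prediction-consistency regularization across source domains: it penalizes the KL divergence between the per-class mean predicted distributions of two source domains.

```python
# Loose, assumed illustration of prediction-consistency regularization across
# two source domains: penalize the KL divergence between the per-class mean
# predicted distributions of the two domains. The paper's actual selection
# criterion is not described in the summary above.
import torch
import torch.nn.functional as F

def consistency_loss(logits_a, logits_b, labels_a, labels_b, num_classes):
    """Average KL divergence between per-class mean predictions of domains A and B."""
    p_a = F.softmax(logits_a, dim=-1)
    p_b = F.softmax(logits_b, dim=-1)
    loss, pairs = logits_a.new_zeros(()), 0
    for c in range(num_classes):
        mask_a, mask_b = labels_a == c, labels_b == c
        if mask_a.any() and mask_b.any():
            mean_a = p_a[mask_a].mean(dim=0)
            mean_b = p_b[mask_b].mean(dim=0)
            # KL(mean_a || mean_b); clamp to avoid log(0)
            loss = loss + F.kl_div(mean_b.clamp_min(1e-8).log(), mean_a,
                                   reduction="sum")
            pairs += 1
    return loss / max(pairs, 1)
```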
- Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP).
Our research reveals that UDA benefits much from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels that generates instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance (see the sketch after this entry).
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
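A rough sketch of the instance-adaptive residual idea mentioned above: a small head predicts one depthwise kernel per instance, and the resulting residual is added back to the domain-agnostic backbone features. The layer sizes and the depthwise design are assumptions for illustration, not the published DIDA-Net architecture.

```python
# Rough sketch of an instance-adaptive residual in the spirit of DIDA-Net:
# a small head predicts one depthwise kernel per instance, and the resulting
# residual is added to the domain-agnostic backbone features. Layer sizes and
# the depthwise design are assumptions, not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceAdaptiveResidual(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.channels, self.k = channels, kernel_size
        self.kernel_gen = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels * kernel_size * kernel_size),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        n, c, h, w = feats.shape
        # One depthwise kernel per (instance, channel), predicted from the input itself.
        kernels = self.kernel_gen(feats).view(n * c, 1, self.k, self.k)
        residual = F.conv2d(
            feats.reshape(1, n * c, h, w), kernels,
            padding=self.k // 2, groups=n * c,
        ).reshape(n, c, h, w)
        return feats + residual  # adapt the shared features instance by instance
```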
- Domain-Augmented Domain Adaptation [5.292532408558036]
Unsupervised domain adaptation (UDA) enables knowledge transfer from the labelled source domain to the unlabeled target domain.
We propose the domain-augmented domain adaptation (DADA) to generate pseudo domains that have smaller discrepancies with the target domain.
We conduct extensive experiments with the state-of-the-art domain adaptation methods on four benchmark datasets.
arXiv Detail & Related papers (2022-02-21T05:42:02Z)
- Exploiting Domain-Specific Features to Enhance Domain Generalization [10.774902700296249]
Domain Generalization (DG) aims to train a model, from multiple observed source domains, in order to perform well on unseen target domains.
Prior DG approaches have focused on extracting domain-invariant information across sources to generalize on target domains.
We propose meta-Domain Specific-Domain Invariant (mDSDI), a novel, theoretically sound framework.
arXiv Detail & Related papers (2021-10-18T15:42:39Z)
- IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID [58.46907388691056]
We argue that bridging between the source and target domains can be utilized to tackle the UDA re-ID task.
We propose an Intermediate Domain Module (IDM) to generate intermediate domains' representations on-the-fly (see the sketch after this entry).
Our proposed method outperforms the state of the art by a large margin on all the common UDA re-ID tasks.
arXiv Detail & Related papers (2021-08-05T07:19:46Z)
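A bare-bones, assumed version of generating intermediate-domain representations on-the-fly is to mix source and target features with a learned per-pair weight; the weight-prediction network below is an illustrative choice, not the authors' IDM module.

```python
# Bare-bones, assumed version of generating intermediate-domain representations
# on-the-fly: mix source and target features with a learned per-pair weight.
# The weight-prediction network is an illustrative choice, not the authors' IDM.
import torch
import torch.nn as nn

class IntermediateDomainMixer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, 1), nn.Sigmoid(),
        )

    def forward(self, f_src: torch.Tensor, f_tgt: torch.Tensor) -> torch.Tensor:
        # lam in (0, 1), one mixing weight per source/target feature pair
        lam = self.weight_net(torch.cat([f_src, f_tgt], dim=-1))
        return lam * f_src + (1.0 - lam) * f_tgt  # intermediate-domain feature
```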
- Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification [57.92800886719651]
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years.
Domain shift in MUDA exists not only between the source and target domains but also among the multiple source domains.
We propose an end-to-end trainable network that exploits domain Consistency Regularization for unsupervised Multi-source domain Adaptive classification.
arXiv Detail & Related papers (2021-06-16T07:29:27Z)
- VDM-DA: Virtual Domain Modeling for Source Data-free Domain Adaptation [26.959377850768423]
Domain adaptation aims to leverage a label-rich domain (the source domain) to help model learning in a label-scarce domain (the target domain).
Access to the source domain samples may not always be feasible in real-world applications due to various constraints.
We propose a novel approach referred to as Virtual Domain Modeling (VDM-DA).
arXiv Detail & Related papers (2021-03-26T09:56:40Z)
- Multi-Source Domain Adaptation with Collaborative Learning for Semantic
Segmentation [32.95273803359897]
Multi-source unsupervised domain adaptation (MSDA) aims at adapting models trained on multiple labeled source domains to an unlabeled target domain.
We propose a novel multi-source domain adaptation framework based on collaborative learning for semantic segmentation.
arXiv Detail & Related papers (2021-03-08T12:51:42Z) - Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state of the art in various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z) - Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation [56.94873619509414]
Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains (see the sketch after this entry).
arXiv Detail & Related papers (2020-07-17T22:05:09Z)
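A rough sketch of the Gram-matrix ingredient of such a domain embedding: average the feature Gram matrix over one domain's images and compare domains by cosine similarity. The feature-disentanglement part of the actual Domain2Vec method is omitted, and the pooling and flattening choices are assumptions.

```python
# Rough sketch of the Gram-matrix ingredient of a domain embedding: average
# the feature Gram matrix over one domain's images and compare domains by
# cosine similarity. The feature-disentanglement part of Domain2Vec is
# deliberately omitted here.
import torch
import torch.nn.functional as F

def domain_embedding(features: torch.Tensor) -> torch.Tensor:
    """features: (num_images, dim) deep features extracted from one domain."""
    gram = features.T @ features / features.shape[0]   # (dim, dim)
    idx = torch.triu_indices(gram.shape[0], gram.shape[1])
    return gram[idx[0], idx[1]]                        # upper triangle as a vector

def domain_similarity(feats_a: torch.Tensor, feats_b: torch.Tensor) -> float:
    return F.cosine_similarity(
        domain_embedding(feats_a), domain_embedding(feats_b), dim=0
    ).item()
```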
This list is automatically generated from the titles and abstracts of the papers on this site.