Domain-Aware Fine-Tuning of Foundation Models
- URL: http://arxiv.org/abs/2407.03482v2
- Date: Wed, 10 Jul 2024 13:27:20 GMT
- Title: Domain-Aware Fine-Tuning of Foundation Models
- Authors: Ugur Ali Kaplan, Margret Keuper, Anna Khoreva, Dan Zhang, Yumeng Li
- Abstract summary: Foundation models (FMs) have revolutionized computer vision, enabling effective learning across different domains.
This paper investigates the zero-shot domain adaptation potential of FMs by comparing different backbone architectures.
We introduce novel domain-aware components that leverage domain-related textual embeddings.
- Score: 18.336887359257087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Foundation models (FMs) have revolutionized computer vision, enabling effective learning across different domains. However, their performance under domain shift remains underexplored. This paper investigates the zero-shot domain adaptation potential of FMs by comparing different backbone architectures and introducing novel domain-aware components that leverage domain-related textual embeddings. We propose domain adaptive normalization, termed Domino, which explicitly leverages domain embeddings during fine-tuning, thus making the model domain aware. Ultimately, Domino enables more robust computer vision models that adapt effectively to various unseen domains.
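The abstract does not spell out Domino's exact formulation, but a domain-conditioned normalization layer of the kind it describes could look like the sketch below, where a textual domain embedding (e.g. from a CLIP text encoder) predicts per-channel scale and shift. All class names and dimensions here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a domain-conditioned normalization layer (illustrative;
# not the paper's exact Domino formulation). A text embedding of the domain
# modulates per-channel scale and shift after normalization.
import torch
import torch.nn as nn

class DomainAwareNorm(nn.Module):
    def __init__(self, num_channels: int, domain_embed_dim: int):
        super().__init__()
        # Parameter-free normalization; the affine part comes from the domain.
        self.norm = nn.GroupNorm(num_groups=1, num_channels=num_channels, affine=False)
        # Predict per-channel (gamma, beta) from the domain embedding.
        self.to_affine = nn.Linear(domain_embed_dim, 2 * num_channels)

    def forward(self, x: torch.Tensor, domain_embed: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W), domain_embed: (B, D)
        gamma, beta = self.to_affine(domain_embed).chunk(2, dim=1)
        x = self.norm(x)
        return x * (1 + gamma[:, :, None, None]) + beta[:, :, None, None]

# Usage: condition features on a textual description of the domain.
feats = torch.randn(2, 64, 32, 32)
domain_embed = torch.randn(2, 512)  # e.g. CLIP text embedding of "a foggy street scene"
layer = DomainAwareNorm(64, 512)
print(layer(feats, domain_embed).shape)  # torch.Size([2, 64, 32, 32])
```

Conditioning the affine parameters on the domain embedding is what would make the model "domain aware" during fine-tuning while leaving the backbone itself unchanged.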
Related papers
- Retrievable Domain-Sensitive Feature Memory for Multi-Domain Recommendation [29.044218200986695]
This paper focuses on features with significant differences across domains, in both their distributions and their effects on model predictions.
We propose a domain-sensitive feature attribution method to identify features that best reflect domain distinctions from the feature set.
We design a memory architecture that extracts domain-specific information from domain-sensitive features for the model to retrieve and integrate.
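As a hedged illustration of the retrieval idea described above (the paper's actual memory architecture is more elaborate, and every name below is made up), a minimal domain-keyed feature memory could be:

```python
# Hypothetical sketch: one learned memory slot per domain holds
# domain-specific information, retrieved and fused with the shared feature.
import torch
import torch.nn as nn

class DomainFeatureMemory(nn.Module):
    def __init__(self, num_domains: int, feat_dim: int):
        super().__init__()
        self.memory = nn.Embedding(num_domains, feat_dim)  # one slot per domain
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, shared_feat: torch.Tensor, domain_id: torch.Tensor) -> torch.Tensor:
        retrieved = self.memory(domain_id)                 # (B, feat_dim)
        return self.fuse(torch.cat([shared_feat, retrieved], dim=-1))

mem = DomainFeatureMemory(num_domains=3, feat_dim=128)
x = torch.randn(4, 128)
ids = torch.tensor([0, 1, 2, 0])
print(mem(x, ids).shape)  # torch.Size([4, 128])
```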
arXiv Detail & Related papers (2024-05-21T16:02:06Z)
- DomainVerse: A Benchmark Towards Real-World Distribution Shifts For Tuning-Free Adaptive Domain Generalization [27.099706316752254]
We establish DomainVerse, a novel dataset for Adaptive Domain Generalization (ADG).
Benefiting from the introduced hierarchical definition of domain shifts, DomainVerse consists of about 0.5 million images from 390 fine-grained realistic domains.
We propose two methods called Domain CLIP and Domain++ CLIP for tuning-free adaptive domain generalization.
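A minimal sketch of the tuning-free idea, assuming a CLIP backbone and an invented prompt template (not necessarily the Domain CLIP recipe): folding a domain description into the text prompt adapts the zero-shot classifier without any fine-tuning.

```python
# Illustrative tuning-free, domain-aware zero-shot classification with CLIP.
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cpu"  # CPU keeps fp32 weights; on CUDA, CLIP loads fp16 and inputs need casting
model, preprocess = clip.load("ViT-B/32", device=device)

classes = ["dog", "cat", "horse"]
domain = "an infrared photo"  # hypothetical hierarchical domain description
prompts = clip.tokenize([f"{domain} of a {c}" for c in classes]).to(device)

with torch.no_grad():
    text_feats = model.encode_text(prompts)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
    image = torch.randn(1, 3, 224, 224)  # placeholder; use `preprocess` on a real PIL image
    img_feats = model.encode_image(image.to(device))
    img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_feats @ text_feats.T).softmax(dim=-1)

print(probs)  # per-class probabilities under the domain-aware prompts
```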
arXiv Detail & Related papers (2024-03-05T07:10:25Z)
- Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
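The toy below illustrates the simulate-analyze-reduce loop under invented stand-ins; the augmentation and the alignment loss are placeholders, not the paper's meta-causal machinery.

```python
# Sketch: simulate a shift by augmenting the source batch into an auxiliary
# "target", analyze the shift as a feature discrepancy, then reduce it.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))

def simulate(x: torch.Tensor) -> torch.Tensor:
    # Stand-in augmentation for a domain shift (e.g. strong color jitter).
    return x + 0.3 * torch.randn_like(x)

x_src = torch.randn(8, 3, 32, 32)
x_aux = simulate(x_src)                    # simulate: build an auxiliary domain
f_src, f_aux = encoder(x_src), encoder(x_aux)
shift = f_src.mean(0) - f_aux.mean(0)      # analyze: measure the feature shift
reduce_loss = shift.pow(2).sum()           # reduce: penalize the discrepancy
reduce_loss.backward()
print(float(reduce_loss))
```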
arXiv Detail & Related papers (2023-04-07T15:46:38Z)
- Aggregation of Disentanglement: Reconsidering Domain Variations in Domain Generalization [9.577254317971933]
We argue that domain variations also contain useful information, i.e., classification-aware information, for downstream tasks.
We propose a novel paradigm called Domain Disentanglement Network (DDN) to disentangle the domain expert features from the source domain images.
We also propound a new contrastive learning method to guide the domain expert features to form a more balanced and separable feature space.
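For intuition, here is a hedged toy of the disentanglement-plus-contrastive recipe; the heads, dimensions, and the naive contrastive term are all illustrative, not DDN's actual losses.

```python
# Two heads split a backbone feature into an invariant part and a
# domain-expert part; a contrastive term pulls together expert features
# from the same domain and pushes apart those from different domains.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Linear(256, 256)
invariant_head = nn.Linear(256, 128)   # domain-invariant content
expert_head = nn.Linear(256, 128)      # domain-specific, classification-aware part

x = torch.randn(8, 256)
h = backbone(x)
z_inv, z_exp = invariant_head(h), expert_head(h)

domains = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
z = F.normalize(z_exp, dim=-1)
sim = z @ z.T                                        # pairwise cosine similarities
same = (domains[:, None] == domains[None, :]).float()
loss = -(sim * same).sum() / same.sum() + (sim * (1 - same)).sum() / (1 - same).sum()
print(float(loss))
```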
arXiv Detail & Related papers (2023-02-05T09:48:57Z)
- M2D2: A Massively Multi-domain Language Modeling Dataset [76.13062203588089]
We present M2D2, a fine-grained, massively multi-domain corpus for studying domain adaptation in language models (LMs).
Using categories derived from Wikipedia and ArXiv, we organize the domains in each data source into 22 groups.
We show the benefits of adapting the LM along a domain hierarchy; adapting to smaller amounts of fine-grained domain-specific data can lead to larger in-domain performance gains.
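The hierarchy-aware finding lends itself to a two-stage loop. The toy below, with a stub LM and random token data standing in for M2D2's real corpora, sketches adapting first on a coarse domain and then continuing on a fine-grained one.

```python
# Toy coarse-to-fine LM adaptation along a domain hierarchy (illustrative).
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Stand-in for a real causal LM; returns the next-token loss."""
    def __init__(self, vocab: int = 100, dim: int = 32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        logits = self.head(self.emb(ids[:, :-1]))
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1))

def adapt(model, batches, lr=1e-3):
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for ids in batches:
        loss = model(ids)
        loss.backward()
        opt.step()
        opt.zero_grad()
    return model

lm = TinyLM()
coarse = [torch.randint(0, 100, (4, 16)) for _ in range(5)]  # e.g. all "Physics" text
fine = [torch.randint(0, 100, (4, 16)) for _ in range(5)]    # e.g. only "Astrophysics"
adapt(adapt(lm, coarse), fine)  # adapt coarse-to-fine along the hierarchy
```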
arXiv Detail & Related papers (2022-10-13T21:34:52Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
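A minimal sketch of that reconstruct-from-style-noise idea follows; the style perturbation, masking ratio, and one-layer encoder/decoder are placeholders, not DiMAE's architecture.

```python
# Perturb the input's style, mask it, and train the model to reconstruct the
# clean image, encouraging domain-invariant features.
import torch
import torch.nn as nn

def style_noise(x: torch.Tensor) -> torch.Tensor:
    # Toy style perturbation: shift per-channel statistics, keep content layout.
    scale = 1 + 0.2 * torch.randn(x.size(0), x.size(1), 1, 1)
    shift = 0.2 * torch.randn(x.size(0), x.size(1), 1, 1)
    return x * scale + shift

encoder = nn.Conv2d(3, 16, 3, padding=1)
decoder = nn.Conv2d(16, 3, 3, padding=1)

x = torch.randn(4, 3, 32, 32)
aug = style_noise(x)
mask = (torch.rand(4, 1, 32, 32) > 0.75).float()  # keep ~25% of pixels
recon = decoder(encoder(aug * mask))
loss = ((recon - x) ** 2).mean()                  # reconstruct the clean image
loss.backward()
print(float(loss))
```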
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study the novel and practical problem of Open Domain Generalization (OpenDG).
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z)
- Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation [56.94873619509414]
Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains.
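To make the Gram-matrix ingredient concrete, here is a hedged toy that derives a vectorial domain descriptor from activation Gram matrices and compares two domains by cosine similarity; Domain2Vec itself additionally learns disentangled features.

```python
# Gram matrices of feature activations capture style statistics; averaging
# them over a domain's images yields a simple vectorial domain descriptor.
import torch
import torch.nn.functional as F

def domain_embedding(feats: torch.Tensor) -> torch.Tensor:
    # feats: (N, C, H, W) activations from images of one domain.
    f = feats.flatten(2)                        # (N, C, H*W)
    gram = f @ f.transpose(1, 2) / f.size(-1)   # per-image Gram matrix (N, C, C)
    return gram.mean(0).flatten()               # average, flatten to a vector

photos = torch.randn(16, 64, 8, 8)
sketches = torch.randn(16, 64, 8, 8)
sim = F.cosine_similarity(domain_embedding(photos), domain_embedding(sketches), dim=0)
print(float(sim))  # higher = more similar domains
```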
arXiv Detail & Related papers (2020-07-17T22:05:09Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
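As a purely illustrative toy of a two-stage coarse-to-fine split (the paper's parser is neural; the template and slot-filler below are invented), the coarse stage predicts a program sketch and the fine stage fills in the details, letting adaptation treat the two stages differently.

```python
# Toy coarse-to-fine parse: stage one predicts a template, stage two fills slots.
def coarse_parse(utterance: str) -> str:
    # A real model would condition on the utterance; this stub returns a sketch.
    return "SELECT {col} FROM {table}"

def fine_parse(utterance: str, sketch: str) -> str:
    # A real model would ground slots in the utterance; this stub hardcodes them.
    return sketch.format(col="price", table="flights")

q = "show me flight prices"
print(fine_parse(q, coarse_parse(q)))  # SELECT price FROM flights
```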
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.