On Correlating Factors for Domain Adaptation Performance
- URL: http://arxiv.org/abs/2501.14466v1
- Date: Fri, 24 Jan 2025 12:55:42 GMT
- Title: On Correlating Factors for Domain Adaptation Performance
- Authors: Goksenin Yuksel, Jaap Kamps
- Abstract summary: We analyze the possible factors that lead to successful domain adaptation of dense retrievers. The generated query type distribution is an important factor, and generating queries that share a similar domain with the test documents improves the performance of domain adaptation methods.
- Abstract: Dense retrievers have demonstrated significant potential for neural information retrieval; however, they lack robustness to domain shifts, limiting their efficacy in zero-shot settings across diverse domains. In this paper, we set out to analyze the possible factors that lead to successful domain adaptation of dense retrievers. We include domain similarity proxies between the generated queries and both the test and source domains. Furthermore, we conduct a case study comparing two powerful domain adaptation techniques. We find that the generated query type distribution is an important factor, and that generating queries that share a similar domain with the test documents improves the performance of domain adaptation methods. This study further emphasizes the importance of domain-tailored generated queries.
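The abstract mentions domain similarity proxies between generated queries and the test/source documents, but does not spell out a formula. As a minimal illustrative sketch (not the paper's actual proxy), one common choice is Jensen-Shannon divergence between the unigram term distributions of the two text collections; all function names below are hypothetical:

```python
# Hypothetical sketch of a lexical domain similarity proxy between two text
# collections (e.g. generated queries vs. test documents). This is NOT the
# paper's exact measure; it illustrates one standard option: Jensen-Shannon
# divergence over unigram distributions (bounded in [0, 1] with base-2 logs).
import math
from collections import Counter

def term_distribution(texts):
    """Unigram probability distribution over whitespace tokens."""
    counts = Counter(tok.lower() for t in texts for tok in t.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two sparse distributions (base 2)."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}

    def kl(a, b):
        # KL(a || b); m covers the support of a, so b[w] is always > 0 here.
        return sum(pa * math.log2(pa / b[w]) for w, pa in a.items() if pa > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def domain_similarity(queries, documents):
    """Higher means more lexically similar domains; 1.0 for identical ones."""
    return 1.0 - js_divergence(term_distribution(queries),
                               term_distribution(documents))
```

Under this sketch, generated queries whose vocabulary matches the test documents score closer to 1.0, while lexically disjoint collections score 0.0.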
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis (arXiv, 2024-02-22T13:26:56Z)
  We focus on the problem of domain generalization for cross-domain sentiment analysis. We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations. A series of experiments demonstrate the strong performance and robustness of our model.
- DAOT: Domain-Agnostically Aligned Optimal Transport for Domain-Adaptive Crowd Counting (arXiv, 2023-08-10T02:59:40Z)
  Domain adaptation is commonly employed in crowd counting to bridge the domain gaps between different datasets. Existing domain adaptation methods tend to focus on inter-dataset differences while overlooking the intra-dataset differences within the same dataset. We propose a Domain-agnostically Aligned Optimal Transport (DAOT) strategy that aligns domain-agnostic factors between domains.
- Meta-causal Learning for Single Domain Generalization (arXiv, 2023-04-07T15:46:38Z)
  Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains). Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains. We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
- Domain Adaptation for Sentiment Analysis Using Increased Intraclass Separation (arXiv, 2021-07-04T11:39:12Z)
  Cross-domain sentiment analysis methods have received significant attention. We introduce a new domain adaptation method which induces large margins between different classes in an embedding space. This embedding space is trained to be domain-agnostic by matching the data distributions across the domains.
- Predicting the Success of Domain Adaptation in Text Similarity (arXiv, 2021-06-08T19:02:15Z)
  This paper models adaptation success and the selection of the most suitable source domains among several candidates in text similarity. While mostly positive, the results also point to some domains where adaptation success was difficult to predict.
- Curriculum CycleGAN for Textual Sentiment Domain Adaptation with Multiple Sources (arXiv, 2020-11-17T14:50:55Z)
  We propose a novel instance-level MDA framework, named curriculum cycle-consistent generative adversarial network (C-CycleGAN). C-CycleGAN consists of three components: (1) a pre-trained text encoder which encodes textual input from different domains into a continuous representation space, (2) an intermediate domain generator with curriculum instance-level adaptation which bridges the gap across source and target domains, and (3) a task classifier trained on the intermediate domain for final sentiment classification. We conduct extensive experiments on three benchmark datasets and achieve substantial gains over state-of-the-art DA approaches.
- Improved Multi-Source Domain Adaptation by Preservation of Factors (arXiv, 2020-10-15T14:19:57Z)
  Domain Adaptation (DA) is a highly relevant research topic when it comes to image classification with deep neural networks. In this paper, we describe, based on a theory of visual factors, how real-world scenes appear in images in general. We show that different domains can be described by a set of so-called domain factors, whose values are consistent within a domain but can change across domains.
- Domain Adaptation for Semantic Parsing (arXiv, 2020-06-23T14:47:41Z)
  We propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain than in the source domain. Our semantic parser benefits from a two-stage coarse-to-fine framework, and can thus provide different and accurate treatments for the two stages. Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
- Domain Conditioned Adaptation Network (arXiv, 2020-05-14T04:23:24Z)
  We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism. This is the first work to explore domain-wise convolutional channel activation for deep DA networks.
- Towards Fair Cross-Domain Adaptation via Generative Learning (arXiv, 2020-03-04T23:25:09Z)
  Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution. We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
This list is automatically generated from the titles and abstracts of the papers on this site.