Multiple-Source Domain Adaptation via Coordinated Domain Encoders and
Paired Classifiers
- URL: http://arxiv.org/abs/2201.11870v1
- Date: Fri, 28 Jan 2022 00:50:01 GMT
- Title: Multiple-Source Domain Adaptation via Coordinated Domain Encoders and
Paired Classifiers
- Authors: Payam Karisani
- Abstract summary: We present a novel model for text classification under domain shift.
It exploits the update rates in document representations to dynamically integrate domain encoders.
It also employs a probabilistic heuristic to infer the error rate in the target domain.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel multiple-source unsupervised model for text classification
under domain shift. Our model exploits the update rates in document
representations to dynamically integrate domain encoders. It also employs a
probabilistic heuristic to infer the error rate in the target domain in order
to pair source classifiers. Our heuristic exploits data transformation cost and
the classifier accuracy in the target feature space. We have used real world
scenarios of Domain Adaptation to evaluate the efficacy of our algorithm. We
also used pretrained multi-layer transformers as the document encoder in the
experiments to demonstrate whether the improvement achieved by domain
adaptation models can be delivered by out-of-the-box language model
pretraining. The experiments testify that our model is the top performing
approach in this setting.
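The pairing heuristic described above combines two signals: the cost of transforming data into the target feature space and the classifier's accuracy there. A minimal sketch of the general idea, assuming hypothetical `transform_cost` and `target_space_accuracy` estimates (the paper's actual probabilistic formulation is not reproduced here):

```python
def pair_score(transform_cost, target_space_accuracy):
    """Hypothetical score: lower transformation cost and higher accuracy
    in the target feature space suggest a lower target-domain error."""
    return (1.0 - target_space_accuracy) + transform_cost

def select_pair(sources):
    """Rank source classifiers by estimated target error and pair the
    two best ones. sources: list of (name, cost, accuracy) tuples."""
    ranked = sorted(sources, key=lambda s: pair_score(s[1], s[2]))
    return ranked[0][0], ranked[1][0]

# Example: three source domains with assumed cost/accuracy estimates.
sources = [("A", 0.2, 0.9), ("B", 0.5, 0.6), ("C", 0.1, 0.95)]
best, second = select_pair(sources)
```

This is only an illustration of combining the two signals into one ranking; the paper's heuristic infers the error rate probabilistically rather than through a simple additive score.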
Related papers
- Learning to Generalize Unseen Domains via Multi-Source Meta Learning for Text Classification [71.08024880298613]
We study the multi-source Domain Generalization of text classification.
We propose a framework to use multiple seen domains to train a model that can achieve high accuracy in an unseen domain.
arXiv Detail & Related papers (2024-09-20T07:46:21Z)
- Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
The Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes its decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
arXiv Detail & Related papers (2022-11-15T15:58:56Z)
- Increasing Model Generalizability for Unsupervised Domain Adaptation [12.013345715187285]
We show that increasing the interclass margins in the embedding space helps to develop an unsupervised domain adaptation (UDA) algorithm with improved performance.
We demonstrate that using our approach leads to improved model generalizability on four standard benchmark UDA image classification datasets.
arXiv Detail & Related papers (2022-09-29T09:08:04Z)
- Multi-step domain adaptation by adversarial attack to $\mathcal{H} \Delta \mathcal{H}$-divergence [73.89838982331453]
In unsupervised domain adaptation settings, we demonstrate that replacing the source domain with adversarial examples improves accuracy on the target domain.
We conducted a range of experiments and achieved improvement in accuracy on Digits and Office-Home datasets.
arXiv Detail & Related papers (2022-07-18T21:24:05Z)
- Variational Transfer Learning using Cross-Domain Latent Modulation [1.9662978733004601]
We introduce a novel cross-domain latent modulation mechanism to a variational autoencoder framework so as to achieve effective transfer learning.
Deep representations of the source and target domains are first extracted by a unified inference model and aligned by employing gradient reversal.
The learned deep representations are then cross-modulated to the latent encoding of the alternative domain, where consistency constraints are also applied.
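The gradient reversal step mentioned above acts as an identity in the forward pass while flipping the sign of gradients in the backward pass, so the feature extractor learns domain-invariant representations. A minimal NumPy sketch; `GradReverse` and its `lam` coefficient are illustrative names, not the paper's implementation:

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer: identity forward, gradients scaled
    by -lam backward (as used for adversarial domain alignment)."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # Pass features through unchanged.
        return x

    def backward(self, grad_out):
        # Reverse the gradient flowing back from the domain classifier.
        return -self.lam * grad_out
```

In a full autodiff framework this would be registered as a custom operation; the sketch only shows the forward/backward contract.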
arXiv Detail & Related papers (2022-05-31T03:47:08Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
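The multi-sample contrastive loss mentioned above pulls similar source-target pairs together and pushes dissimilar ones beyond a margin. A simplified NumPy sketch of a generic contrastive formulation; `contrastive_loss` and the margin form are illustrative, not ILA-DA's exact loss:

```python
import numpy as np

def contrastive_loss(anchor, samples, labels, margin=1.0):
    """Generic contrastive loss: attract samples labeled similar (1)
    toward the anchor, repel dissimilar ones (0) beyond the margin."""
    d = np.linalg.norm(samples - anchor, axis=1)       # Euclidean distances
    pos = labels * d ** 2                              # pull similar samples in
    neg = (1 - labels) * np.maximum(margin - d, 0) ** 2  # push dissimilar out
    return float(np.mean(pos + neg))
</n```

The loss is zero when every similar sample coincides with the anchor and every dissimilar sample lies outside the margin, which is the alignment the paper's criterion drives toward.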
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Curriculum CycleGAN for Textual Sentiment Domain Adaptation with Multiple Sources [68.31273535702256]
We propose a novel instance-level multi-source domain adaptation (MDA) framework, named curriculum cycle-consistent generative adversarial network (C-CycleGAN).
C-CycleGAN consists of three components: (1) a pre-trained text encoder that maps textual input from different domains into a continuous representation space, (2) an intermediate domain generator with curriculum instance-level adaptation that bridges the gap between the source and target domains, and (3) a task classifier trained on the intermediate domain for final sentiment classification.
We conduct extensive experiments on three benchmark datasets and achieve substantial gains over state-of-the-art DA approaches.
arXiv Detail & Related papers (2020-11-17T14:50:55Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain drawn from a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.