PERL: Pivot-based Domain Adaptation for Pre-trained Deep Contextualized
Embedding Models
- URL: http://arxiv.org/abs/2006.09075v1
- Date: Tue, 16 Jun 2020 11:14:06 GMT
- Title: PERL: Pivot-based Domain Adaptation for Pre-trained Deep Contextualized
Embedding Models
- Authors: Eyal Ben-David, Carmel Rabinovitz, Roi Reichart
- Abstract summary: PERL: A representation learning model that extends contextualized word embedding models such as BERT with pivot-based fine-tuning.
PERL outperforms strong baselines across 22 sentiment classification domain adaptation setups.
It yields effective reduced-size models and increases model stability.
- Score: 20.62501560076402
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pivot-based neural representation models have led to significant progress in
domain adaptation for NLP. However, previous works that follow this approach
utilize only labeled data from the source domain and unlabeled data from the
source and target domains, but neglect to incorporate massive unlabeled corpora
that are not necessarily drawn from these domains. To alleviate this, we
propose PERL: A representation learning model that extends contextualized word
embedding models such as BERT with pivot-based fine-tuning. PERL outperforms
strong baselines across 22 sentiment classification domain adaptation setups,
improves in-domain model performance, yields effective reduced-size models and
increases model stability.
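As a rough illustration of the first stage of the recipe the abstract describes, the sketch below selects pivot features: n-grams that are frequent in both domains and informative about the source-domain label, a standard pivot criterion. All names and thresholds are illustrative, not PERL's exact configuration.
```python
# Illustrative pivot selection (not PERL's exact recipe): pivots are n-grams
# that are frequent in both domains and informative about the source label.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def select_pivots(src_texts, src_labels, tgt_texts, num_pivots=100, min_df=10):
    vec = CountVectorizer(ngram_range=(1, 2), min_df=min_df, binary=True)
    X_src = vec.fit_transform(src_texts)
    # Re-count the same vocabulary on the target side and keep only features
    # that also clear the frequency bar there.
    X_tgt = CountVectorizer(vocabulary=vec.vocabulary_, ngram_range=(1, 2),
                            binary=True).fit_transform(tgt_texts)
    frequent_in_tgt = np.asarray(X_tgt.sum(axis=0)).ravel() >= min_df
    # Rank the shared features by mutual information with the source labels.
    mi = mutual_info_classif(X_src, src_labels, discrete_features=True)
    mi[~frequent_in_tgt] = -np.inf
    names = np.asarray(vec.get_feature_names_out())
    return names[np.argsort(mi)[::-1][:num_pivots]].tolist()
```
In PERL's second stage, omitted here, the pre-trained encoder is fine-tuned with a pivot-aware masked-LM objective before the sentiment classifier is trained on top.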
Related papers
- GenGMM: Generalized Gaussian-Mixture-based Domain Adaptation Model for Semantic Segmentation [0.9626666671366837]
We introduce the Generalized Gaussian-mixture-based (GenGMM) domain adaptation model, which harnesses the underlying data distribution in both domains.
Experiments demonstrate the effectiveness of our approach.
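The one-line summary above leaves GenGMM's construction unspecified; purely as a generic illustration of Gaussian-mixture-based adaptation, one could fit a class-conditional mixture on source features and use it to label target features:
```python
# Generic illustration only; GenGMM itself is more elaborate. Fit one Gaussian
# mixture per source class, then label each target feature by which
# class-conditional mixture scores it highest.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_pseudo_labels(src_feats, src_labels, tgt_feats, n_components=2):
    classes = np.unique(src_labels)
    gmms = {c: GaussianMixture(n_components=n_components, random_state=0)
               .fit(src_feats[src_labels == c]) for c in classes}
    # Log-likelihood of each target feature under each class's mixture.
    scores = np.stack([gmms[c].score_samples(tgt_feats) for c in classes], axis=1)
    return classes[scores.argmax(axis=1)]
```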
arXiv Detail & Related papers (2024-10-21T20:21:09Z)
- Prior-guided Source-free Domain Adaptation for Human Pose Estimation [24.50953879583841]
Domain adaptation methods for 2D human pose estimation typically require continuous access to the source data.
We present Prior-guided Self-training (POST), a pseudo-labeling approach that builds on the popular Mean Teacher framework.
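The Mean Teacher framework that POST builds on has a concrete core mechanism, sketched below in generic PyTorch: the teacher's weights track an exponential moving average (EMA) of the student's and supply the pseudo-label targets. Shapes and the decay value are illustrative.
```python
# Generic Mean Teacher update (the framework POST builds on, not the paper's
# exact recipe): the teacher tracks an exponential moving average (EMA) of
# the student's weights and produces the training targets.
import copy
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)

student = torch.nn.Linear(256, 17 * 2)  # stand-in for a 17-keypoint pose head
teacher = copy.deepcopy(student)        # starts as a copy, then tracks the EMA
ema_update(teacher, student)            # called once per training step
```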
arXiv Detail & Related papers (2023-08-26T20:30:04Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying classes that appear only in the target domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- Variational Model Perturbation for Source-Free Domain Adaptation [64.98560348412518]
We introduce perturbations into the model parameters by variational Bayesian inference in a probabilistic framework.
We demonstrate the theoretical connection to learning Bayesian neural networks, which proves the generalizability of the perturbed model to target domains.
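A minimal reading of "perturbing model parameters variationally" is a learned Gaussian perturbation around the frozen source weights, sampled with the reparameterization trick. The sketch below is that minimal reading only; a full variational treatment would also add a prior/KL term, omitted here.
```python
# Minimal, illustrative reading of the idea above (not the paper's method):
# freeze the source weights and learn only a Gaussian perturbation over them.
import torch

class PerturbedLinear(torch.nn.Module):
    def __init__(self, src_linear):
        super().__init__()
        self.register_buffer("w_src", src_linear.weight.detach().clone())
        bias = src_linear.bias.detach().clone() if src_linear.bias is not None else None
        self.register_buffer("bias", bias)
        # Learned per-weight perturbation scale, initialized small.
        self.log_sigma = torch.nn.Parameter(torch.full_like(self.w_src, -5.0))

    def forward(self, x):
        eps = torch.randn_like(self.w_src)           # fresh noise each forward
        w = self.w_src + self.log_sigma.exp() * eps  # w ~ N(w_src, sigma^2)
        return torch.nn.functional.linear(x, w, self.bias)
```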
arXiv Detail & Related papers (2022-10-19T08:41:19Z)
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to source-domain data so that its distribution information can be used for alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
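The summary does not spell out the denoising scheme; the sketch below shows the common baseline such methods refine, confidence-filtered pixel-wise pseudo-labels from the frozen source model. The threshold and ignore index are illustrative.
```python
# Baseline form of pseudo-label "denoising" (the paper's scheme is more
# refined): keep only pixels the frozen source model labels confidently.
import torch

@torch.no_grad()
def denoised_pseudo_labels(source_model, images, threshold=0.9, ignore_index=255):
    probs = torch.softmax(source_model(images), dim=1)  # (N, C, H, W) class probs
    conf, labels = probs.max(dim=1)                     # per-pixel confidence + label
    labels[conf < threshold] = ignore_index             # mask out noisy pixels
    return labels
```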
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
- Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain), which learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
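As a generic sketch of self-training through intermediate domains (AuxSelfTrain's actual auxiliary-model combination is more involved), one can grow a pseudo-labeled slice of target data step by step. `fit` and `predict` are caller-supplied training and inference routines.
```python
# Generic gradual self-training between domains (illustrative only).
import numpy as np

def gradual_self_train(fit, predict, X_src, y_src, X_tgt, steps=5):
    model = fit(X_src, y_src)                 # start from the source model
    for k in range(1, steps + 1):
        take = int(len(X_tgt) * k / steps)    # intermediate "domain": a
        X_mid = X_tgt[:take]                  # growing slice of target data
        y_mid = predict(model, X_mid)         # pseudo-labeled by current model
        model = fit(np.concatenate([X_src, X_mid]),
                    np.concatenate([y_src, y_mid]))
    return model
```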
arXiv Detail & Related papers (2021-06-18T03:15:25Z)
- Source-Free Domain Adaptation for Semantic Segmentation [11.722728148523366]
Unsupervised domain adaptation (UDA) can address the heavy reliance of convolutional neural network-based semantic segmentation approaches on pixel-level annotated data.
We propose a source-free domain adaptation framework for semantic segmentation, namely SFDA, in which only a well-trained source model and an unlabeled target domain dataset are available for adaptation.
arXiv Detail & Related papers (2021-03-30T14:14:29Z)
- Unsupervised BatchNorm Adaptation (UBNA): A Domain Adaptation Method for Semantic Segmentation Without Using Source Domain Representations [35.586031601299034]
Unsupervised BatchNorm Adaptation (UBNA) adapts a given pre-trained model to an unseen target domain.
We partially adapt the normalization layer statistics to the target domain using an exponentially decaying momentum factor.
Compared to standard UDA approaches, we report a trade-off between performance and the use of source-domain representations.
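The update the summary describes is concrete enough to sketch: run unlabeled target batches through the pre-trained model in training mode so that BatchNorm's running statistics drift toward the target domain, with the momentum decayed exponentially so the adaptation stays partial. Hyperparameter values below are illustrative, not the paper's.
```python
# Sketch of the update described above: re-estimate BatchNorm running
# statistics on unlabeled target batches with an exponentially decaying
# momentum, leaving all learned weights untouched.
import torch

@torch.no_grad()
def ubna_adapt(model, target_loader, momentum0=0.1, decay=0.95, num_batches=50):
    bn_types = (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d, torch.nn.BatchNorm3d)
    bn_layers = [m for m in model.modules() if isinstance(m, bn_types)]
    model.train()                                 # train mode: BN updates running stats
    for k, images in enumerate(target_loader):
        if k >= num_batches:
            break
        for bn in bn_layers:
            bn.momentum = momentum0 * decay ** k  # decaying momentum factor
        model(images)                             # forward pass only; no backprop
    model.eval()
    return model
```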
arXiv Detail & Related papers (2020-11-17T08:37:40Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and studies how to effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
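Since this paper introduces source hypothesis transfer, a brief sketch of its central move may help: freeze the source classifier head (the "hypothesis") and adapt only the feature encoder on unlabeled target data with an information-maximization loss. The paper additionally uses a clustering-based pseudo-labeling step, omitted here; optimizer and rates below are illustrative.
```python
# Sketch of source hypothesis transfer: the source classifier head stays
# frozen and only the encoder is adapted on unlabeled target data.
import torch

def info_max_loss(logits, eps=1e-6):
    p = torch.softmax(logits, dim=1)
    ent = -(p * (p + eps).log()).sum(dim=1).mean()  # make each prediction confident
    p_bar = p.mean(dim=0)
    div = (p_bar * (p_bar + eps).log()).sum()       # keep predictions diverse overall
    return ent + div

def adapt(encoder, classifier, target_loader, lr=1e-3, epochs=1):
    for prm in classifier.parameters():
        prm.requires_grad_(False)                   # freeze the source hypothesis
    opt = torch.optim.SGD(encoder.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images in target_loader:
            loss = info_max_loss(classifier(encoder(images)))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder
```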
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.