Cross-Domain Sentiment Classification with In-Domain Contrastive
Learning
- URL: http://arxiv.org/abs/2012.02943v1
- Date: Sat, 5 Dec 2020 03:48:32 GMT
- Title: Cross-Domain Sentiment Classification with In-Domain Contrastive
Learning
- Authors: Tian Li and Xiang Chen and Shanghang Zhang and Zhen Dong and Kurt
Keutzer
- Abstract summary: We propose a contrastive learning framework for cross-domain sentiment classification.
We introduce in-domain contrastive learning and entropy minimization.
Our model achieves new state-of-the-art results on standard benchmarks.
- Score: 38.08616968654886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning (CL) has proven to be a powerful representation
learning method. In this paper, we propose a contrastive learning framework
for cross-domain sentiment classification. We aim to induce domain-invariant
optimal classifiers rather than perform distribution matching. To this end, we
introduce in-domain contrastive learning and entropy minimization. Through
ablation studies, we also find that these two techniques behave differently in
the case of a large label distribution shift, and conclude that the best
practice is to choose one of them adaptively according to the label
distribution shift. The new state-of-the-art results our model achieves on
standard benchmarks show the efficacy of the proposed method.
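To make the two auxiliary losses concrete, here is a minimal PyTorch sketch (not the authors' released code): a supervised contrastive loss computed within a single domain, and an entropy-minimization loss on unlabeled target predictions. The temperature, batch shapes, and the way the terms are combined are illustrative assumptions; the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

def in_domain_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over examples drawn from ONE domain only."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                        # (N, N) similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))      # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(~pos_mask, 0.0)      # keep positive pairs only
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                               # anchors with >=1 positive
    return -(log_prob.sum(dim=1)[valid] / pos_counts[valid]).mean()

def entropy_minimization_loss(target_logits):
    """Mean prediction entropy on unlabeled target-domain examples."""
    log_p = F.log_softmax(target_logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

# Illustrative combination: the contrastive term stays within each domain, and
# the abstract's adaptive rule would pick between the two terms under label shift.
src_z, src_y = torch.randn(8, 64), torch.randint(0, 2, (8,))
tgt_logits = torch.randn(8, 2)
loss = in_domain_contrastive_loss(src_z, src_y) + entropy_minimization_loss(tgt_logits)
```

The point the abstract stresses is that the contrastive term never contrasts examples across domains, which is what separates it from distribution-matching objectives.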
Related papers
- Conditional Support Alignment for Domain Adaptation with Label Shift [8.819673391477034]
Unsupervised domain adaptation (UDA) refers to a domain adaptation framework in which a learning model is trained on labeled samples from the source domain and unlabeled ones from the target domain.
We propose a novel conditional adversarial support alignment (CASA) method that minimizes the conditional symmetric support divergence between the feature representation distributions of the source and target domains.
arXiv Detail & Related papers (2023-05-29T05:20:18Z)
- Unsupervised Contrastive Domain Adaptation for Semantic Segmentation [75.37470873764855]
We introduce contrastive learning for feature alignment in cross-domain adaptation.
The proposed approach consistently outperforms state-of-the-art methods for domain adaptation.
It achieves 60.2% mIoU on the Cityscapes dataset.
arXiv Detail & Related papers (2022-04-18T16:50:46Z)
- MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
In this paper, we propose a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
arXiv Detail & Related papers (2021-07-23T06:57:08Z)
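A hedged sketch of the classifier-discrepancy idea in the MCDAL summary above, assuming the common formulation in which two auxiliary heads are trained to disagree and the disagreement then scores unlabeled samples for acquisition; the L1 discrepancy, head sizes, and acquisition budget are illustrative choices.

```python
import torch
import torch.nn.functional as F

def classifier_discrepancy(logits_a, logits_b):
    """Per-sample L1 distance between two heads' class probabilities."""
    return (F.softmax(logits_a, dim=1) - F.softmax(logits_b, dim=1)).abs().mean(dim=1)

feats = torch.randn(32, 128)                   # features from a shared backbone
head_a, head_b = torch.nn.Linear(128, 10), torch.nn.Linear(128, 10)
disc = classifier_discrepancy(head_a(feats), head_b(feats))
aux_head_loss = -disc.mean()                   # train heads to MAXIMIZE disagreement
acquire = disc.topk(k=8).indices               # label the most disputed samples
```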
- Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive learning (MPSCL) model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of the embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
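The margin-preserving prototype loss above could take many forms; the following is a minimal sketch assuming a CosFace-style margin softmax over class prototypes, which is one standard way to add a margin to a prototype-based contrastive objective (not necessarily MPSCL's exact formulation).

```python
import torch
import torch.nn.functional as F

def margin_prototype_contrastive_loss(features, prototypes, labels,
                                      margin=0.2, tau=0.1):
    """Margin softmax over class prototypes (illustrative formulation)."""
    z = F.normalize(features, dim=1)
    p = F.normalize(prototypes, dim=1)
    cos = z @ p.t()                                      # (N, C) cosine sims
    one_hot = F.one_hot(labels, num_classes=p.size(0)).float()
    cos = cos - margin * one_hot                         # margin on the true class
    return F.cross_entropy(cos / tau, labels)

# The prototypes stand in for the summary's "progressively refined semantic
# prototypes", e.g. running means of per-class embeddings (an assumption here).
protos = torch.randn(4, 64)
z, y = torch.randn(16, 64), torch.randint(0, 4, (16,))
loss = margin_prototype_contrastive_loss(z, protos, y)
```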
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with the realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A^2KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation [18.90240379173491]
Current methods for class-conditioned domain alignment aim to explicitly minimize a loss function based on pseudo-label estimations of the target domain.
We propose a method that removes the need to explicitly optimize model parameters from pseudo-labels.
We present a sampling-based implicit alignment approach, where the sample selection procedure is implicitly guided by the pseudo-labels.
arXiv Detail & Related papers (2020-06-09T00:20:21Z)
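A minimal sketch of a sampling-based reading of the implicit-alignment idea above: rather than adding a pseudo-label loss term, each target batch is built by drawing a balanced number of examples per pseudo-class, so the pseudo-labels steer alignment only through the sampler. The class count and per-class quota are illustrative assumptions.

```python
import torch

def class_balanced_indices(pseudo_labels, num_classes, per_class=4):
    """Sample an equal number of target examples from each pseudo-class."""
    batch = []
    for c in range(num_classes):
        idx = (pseudo_labels == c).nonzero(as_tuple=True)[0]
        if len(idx) == 0:
            continue                       # pseudo-class empty this round
        batch.append(idx[torch.randint(len(idx), (per_class,))])
    return torch.cat(batch)

pseudo = torch.randint(0, 5, (1000,))      # pseudo-labels over a target pool
batch_idx = class_balanced_indices(pseudo, num_classes=5)
```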
- Class Distribution Alignment for Adversarial Domain Adaptation [32.95056492475652]
Conditional ADversarial Image Translation (CADIT) is proposed to explicitly align the class distributions of samples between the two domains.
It integrates a discriminative structure-preserving loss and a joint adversarial generation loss.
Our approach achieves superior classification performance in the target domain compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-20T15:58:11Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
- A Sample Selection Approach for Universal Domain Adaptation [94.80212602202518]
We study the problem of unsupervised domain adaptation in the universal scenario.
Only some of the classes are shared between the source and target domains.
We present a scoring scheme that is effective in identifying the samples of the shared classes.
arXiv Detail & Related papers (2020-01-14T22:28:43Z)
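As a rough illustration of what such a scoring scheme can look like (this is a generic confidence-plus-entropy heuristic, not necessarily the paper's scheme): target samples that the source classifier predicts confidently and with low entropy are more likely to belong to a shared class. The threshold is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def shared_class_score(target_logits):
    """Higher score => more likely to belong to a source/target shared class."""
    p = F.softmax(target_logits, dim=1)
    confidence = p.max(dim=1).values
    entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
    entropy = entropy / torch.log(torch.tensor(float(p.size(1))))  # normalize to [0, 1]
    return confidence - entropy

logits = torch.randn(16, 10)                   # source classifier on target samples
is_shared = shared_class_score(logits) > 0.5   # illustrative threshold
```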
This list is automatically generated from the titles and abstracts of the papers on this site.