Learning to Share by Masking the Non-shared for Multi-domain Sentiment
Classification
- URL: http://arxiv.org/abs/2104.08480v1
- Date: Sat, 17 Apr 2021 08:15:29 GMT
- Title: Learning to Share by Masking the Non-shared for Multi-domain Sentiment
Classification
- Authors: Jianhua Yuan, Yanyan Zhao, Bing Qin, Ting Liu
- Abstract summary: We propose a network which explicitly masks domain-related words from texts, learns domain-invariant sentiment features from these domain-agnostic texts, and uses those masked words to form domain-aware sentence representations.
Empirical experiments on a well-adopted multiple domain sentiment classification dataset demonstrate the effectiveness of our proposed model.
- Score: 24.153584996936424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-domain sentiment classification deals with the scenario where labeled
data exists for multiple domains but is insufficient for training effective
sentiment classifiers that work across domains. Thus, fully exploiting
sentiment knowledge shared across domains is crucial for real-world
applications. While many existing works try to extract domain-invariant
features in high-dimensional space, such models fail to explicitly distinguish
between shared and private features at the text level, which limits their
interpretability. Based on the assumption that removing domain-related tokens
from texts would improve their domain-invariance, we instead first
transform the original sentences to be domain-agnostic. To this end, we propose the
BertMasker network, which explicitly masks domain-related words from texts,
learns domain-invariant sentiment features from these domain-agnostic texts,
and uses those masked words to form domain-aware sentence representations.
Empirical experiments on a well-adopted multi-domain sentiment
classification dataset demonstrate the effectiveness of our proposed model in
both multi-domain and cross-domain settings, increasing accuracy by
0.94% and 1.8% respectively. Further analysis of masking shows that removing
domain-related but sentiment-irrelevant tokens decreases texts' domain
distinction, degrading the performance of a BERT-based domain classifier by
over 12%.
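The text-level masking idea can be illustrated with a toy sketch: domain-indicative tokens are replaced by a [MASK] placeholder, yielding a domain-agnostic text for the sentiment classifier, while the masked tokens are kept aside to build a domain-aware representation. Note this is a simplified illustration under assumptions, not the paper's method: BertMasker learns which tokens to mask, whereas the word list below is a hypothetical fixed set.

```python
# Illustrative sketch of text-level domain-word masking.
# DOMAIN_WORDS is a hypothetical fixed list; the actual BertMasker
# network learns which tokens are domain-related.
DOMAIN_WORDS = {"battery", "camera", "plot", "actor", "flavor"}

def mask_domain_words(tokens):
    """Split tokens into a domain-agnostic masked sequence and the
    removed domain-related tokens."""
    masked, removed = [], []
    for tok in tokens:
        if tok.lower() in DOMAIN_WORDS:
            masked.append("[MASK]")   # feeds the domain-invariant sentiment branch
            removed.append(tok)       # feeds the domain-aware representation
        else:
            masked.append(tok)
    return masked, removed

masked, removed = mask_domain_words("the battery life is great".split())
print(" ".join(masked))  # the [MASK] life is great
print(removed)           # ['battery']
```

The masked sequence carries the sentiment ("great") without revealing the domain (electronics), which is exactly why, per the abstract, a domain classifier degrades on such texts while sentiment classification does not.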
Related papers
- ReMask: A Robust Information-Masking Approach for Domain Counterfactual
Generation [16.275230631985824]
Domain counterfactual generation aims to transform a text from the source domain to a given target domain.
We employ a three-step domain obfuscation approach involving frequency- and attention-norm-based masking to mask domain-specific cues, and unmasking to regain the domain-generic context.
Our model outperforms the state-of-the-art by achieving 1.4% average accuracy improvement in the adversarial domain adaptation setting.
arXiv Detail & Related papers (2023-05-04T14:19:02Z)
- Discovering Domain Disentanglement for Generalized Multi-source Domain Adaptation [48.02978226737235]
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains, to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence.
arXiv Detail & Related papers (2022-07-11T04:33:08Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Topic Driven Adaptive Network for Cross-Domain Sentiment Classification [6.196375060616161]
We propose a Topic Driven Adaptive Network (TDAN) for cross-domain sentiment classification.
The network consists of two sub-networks: semantics attention network and domain-specific word attention network.
Experiments validate the effectiveness of our TDAN on sentiment classification across domains.
arXiv Detail & Related papers (2021-11-28T10:17:11Z)
- Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification [57.92800886719651]
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years.
Domain shift in MUDA exists not only between the source and target domains but also among multiple source domains.
We propose an end-to-end trainable network that exploits domain Consistency Regularization for unsupervised Multi-source domain Adaptive classification.
arXiv Detail & Related papers (2021-06-16T07:29:27Z)
- Domain Agnostic Learning for Unbiased Authentication [47.85174796247398]
We propose a domain-agnostic method that eliminates domain-difference without domain labels.
Latent domains are discovered by learning the heterogeneous predictive relationships between inputs and outputs.
We extend our method to a meta-learning framework to pursue more thorough domain-difference elimination.
arXiv Detail & Related papers (2020-10-11T14:05:16Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework, and thus can provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
- Improving Domain-Adapted Sentiment Classification by Deep Adversarial Mutual Learning [51.742040588834996]
Domain-adapted sentiment classification refers to training on a labeled source domain to well infer document-level sentiment on an unlabeled target domain.
We propose a novel deep adversarial mutual learning approach involving two groups of feature extractors, domain discriminators, sentiment classifiers, and label probers.
arXiv Detail & Related papers (2020-02-01T01:22:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.