A Robust Contrastive Alignment Method For Multi-Domain Text
Classification
- URL: http://arxiv.org/abs/2204.12125v1
- Date: Tue, 26 Apr 2022 07:34:24 GMT
- Title: A Robust Contrastive Alignment Method For Multi-Domain Text
Classification
- Authors: Xuefeng Li, Hao Lei, Liwen Wang, Guanting Dong, Jinzheng Zhao, Jiachi
Liu, Weiran Xu, Chunyun Zhang
- Abstract summary: Multi-domain text classification can automatically classify texts in various scenarios.
Current advanced methods use the private-shared paradigm, capturing domain-shared features by a shared encoder, and training a private encoder for each domain to extract domain-specific features.
We propose a robust contrastive alignment method to align text classification features of various domains in the same feature space by supervised contrastive learning.
- Score: 21.35729884948437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-domain text classification can automatically classify texts in various
scenarios. Due to the diversity of human languages, texts with the same label
in different domains may differ greatly, which poses challenges for
multi-domain text classification. Current advanced methods use the
private-shared paradigm, capturing domain-shared features by a shared encoder,
and training a private encoder for each domain to extract domain-specific
features. However, in realistic scenarios, these methods suffer from
inefficiency as new domains are constantly emerging. In this paper, we propose
a robust contrastive alignment method to align text classification features of
various domains in the same feature space by supervised contrastive learning.
By this means, we only need two universal feature extractors to achieve
multi-domain text classification. Extensive experimental results show that our
method performs on par with, and sometimes better than, the state-of-the-art
method, which uses complex multi-classifiers in a private-shared framework.
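The alignment objective described above is supervised contrastive learning: examples sharing a label are pulled together in one feature space regardless of their source domain. Below is a minimal pure-Python sketch of the standard supervised contrastive loss (in the style of Khosla et al., 2020), not the paper's exact implementation; the feature vectors, temperature value, and batch construction are illustrative assumptions.

```python
import math

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over L2-normalized feature vectors.

    For each anchor, same-label samples (possibly from other domains) are
    positives; all other samples in the batch form the denominator.
    """
    def normalize(v):
        m = math.sqrt(sum(x * x for x in v))
        return [x / m for x in v]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    z = [normalize(v) for v in features]
    n = len(z)
    losses = []
    for i in range(n):
        sims = [dot(z[i], z[j]) / temperature for j in range(n)]
        # Denominator over all samples except the anchor itself,
        # computed with log-sum-exp for numerical stability.
        others = [sims[j] for j in range(n) if j != i]
        m = max(others)
        log_denom = m + math.log(sum(math.exp(s - m) for s in others))
        positives = [j for j in range(n)
                     if j != i and labels[j] == labels[i]]
        if not positives:  # skip anchors with no same-label partner
            continue
        loss_i = -sum(sims[p] - log_denom for p in positives) / len(positives)
        losses.append(loss_i)
    return sum(losses) / len(losses)
```

With this objective, a single shared encoder can serve every domain: when features with the same label are well clustered across domains, the loss is small, and it grows when same-label features are scattered.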
Related papers
- A Curriculum Learning Approach for Multi-domain Text Classification
Using Keyword weight Ranking [17.71297141482757]
We propose to use a curriculum learning strategy based on keyword weight ranking to improve the performance of multi-domain text classification models.
The experimental results on the Amazon review and FDU-MTL datasets show that our curriculum learning strategy effectively improves the performance of multi-domain text classification models.
arXiv Detail & Related papers (2022-10-27T03:15:26Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from
Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Disentangled Unsupervised Image Translation via Restricted Information
Flow [61.44666983942965]
Many state-of-the-art methods hard-code the desired shared-vs-specific split into their architecture.
We propose a new method that does not rely on inductive architectural biases.
We show that the proposed method achieves consistently high manipulation accuracy across two synthetic and one natural dataset.
arXiv Detail & Related papers (2021-11-26T00:27:54Z)
- Structured Latent Embeddings for Recognizing Unseen Classes in Unseen
Domains [108.11746235308046]
We propose a novel approach that learns domain-agnostic structured latent embeddings by projecting images from different domains.
Our experiments on the challenging DomainNet and DomainNet-LS benchmarks show the superiority of our approach over existing methods.
arXiv Detail & Related papers (2021-07-12T17:57:46Z)
- Learning to Share by Masking the Non-shared for Multi-domain Sentiment
Classification [24.153584996936424]
We propose a network which explicitly masks domain-related words from texts, learns domain-invariant sentiment features from these domain-agnostic texts, and uses those masked words to form domain-aware sentence representations.
Empirical experiments on a well-adopted multiple domain sentiment classification dataset demonstrate the effectiveness of our proposed model.
arXiv Detail & Related papers (2021-04-17T08:15:29Z)
- Text Recognition in Real Scenarios with a Few Labeled Samples [55.07859517380136]
Scene text recognition (STR) is still a hot research topic in computer vision field.
This paper proposes a few-shot adversarial sequence domain adaptation (FASDA) approach to achieve sequence-level domain adaptation.
Our approach can maximize the character-level confusion between the source domain and the target domain.
arXiv Detail & Related papers (2020-06-22T13:03:01Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised
Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method can ease the domain shift issue by minimizing the distance between the most similar stuff and instance features across the source and target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z)
- Improving Domain-Adapted Sentiment Classification by Deep Adversarial
Mutual Learning [51.742040588834996]
Domain-adapted sentiment classification refers to training on a labeled source domain to well infer document-level sentiment on an unlabeled target domain.
We propose a novel deep adversarial mutual learning approach involving two groups of feature extractors, domain discriminators, sentiment classifiers, and label probers.
arXiv Detail & Related papers (2020-02-01T01:22:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.