Mixup Regularized Adversarial Networks for Multi-Domain Text
Classification
- URL: http://arxiv.org/abs/2102.00467v1
- Date: Sun, 31 Jan 2021 15:24:05 GMT
- Title: Mixup Regularized Adversarial Networks for Multi-Domain Text
Classification
- Authors: Yuan Wu, Diana Inkpen, Ahmed El-Roby
- Abstract summary: Using the shared-private paradigm and adversarial training has significantly improved the performance of multi-domain text classification (MDTC) models.
However, existing methods suffer from two issues.
We propose a mixup regularized adversarial network (MRAN) to address these two issues.
- Score: 16.229317527580072
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using the shared-private paradigm and adversarial training has significantly
improved the performance of multi-domain text classification (MDTC) models.
However, existing methods suffer from two issues. First, instances from
the multiple domains are not sufficient for domain-invariant feature
extraction. Second, aligning on the marginal distributions may lead to fatal
mismatching. In this paper, we propose a mixup regularized adversarial network
(MRAN) to address these two issues. More specifically, the domain and category
mixup regularizations are introduced to enrich the intrinsic features in the
shared latent space and enforce consistent predictions between training
instances such that the learned features can be more domain-invariant and
discriminative. We conduct experiments on two benchmarks: The Amazon review
dataset and the FDU-MTL dataset. Our approach on these two datasets yields
average accuracies of 87.64% and 89.0%, respectively, outperforming all
relevant baselines.
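As a rough illustration (not the authors' implementation), the basic mixup operation that MRAN's domain and category regularizations build on interpolates pairs of instances and their labels with a Beta-distributed coefficient:

```python
import numpy as np

def mixup(x1, x2, y1, y2, alpha=0.2, rng=None):
    """Convex interpolation of two instances and their labels.

    lam is drawn from Beta(alpha, alpha), as in standard mixup; applying
    this across domains (domain mixup) or within a class (category mixup)
    is the regularization idea the abstract describes. Sketch only."""
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

# Toy example: mix a feature vector and one-hot label from two domains.
x_a, y_a = np.array([1.0, 0.0]), np.array([1.0, 0.0])
x_b, y_b = np.array([0.0, 1.0]), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x_a, x_b, y_a, y_b)
```

Because the interpolation is convex, the mixed label remains a valid probability vector, which is what lets the classifier be trained on consistent soft targets for the in-between instances.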
Related papers
- Regularized Conditional Alignment for Multi-Domain Text Classification [6.629561563470492]
We propose a method called Regularized Conditional Alignment (RCA) to align the joint distributions of domains and classes.
We employ entropy minimization and virtual adversarial training to constrain the uncertainty of predictions pertaining to unlabeled data.
Empirical results on two benchmark datasets demonstrate that our RCA approach outperforms state-of-the-art MDTC techniques.
arXiv Detail & Related papers (2023-12-18T05:52:05Z) - FIXED: Frustratingly Easy Domain Generalization with Mixup [53.782029033068675]
Domain generalization (DG) aims to learn a generalizable model from multiple training domains such that it can perform well on unseen target domains.
A popular strategy is to augment training data to benefit generalization through methods such as Mixup [Zhang et al., 2018].
We propose a simple yet effective enhancement for Mixup-based DG, namely domain-invariant Feature mIXup (FIX)
Our approach significantly outperforms nine state-of-the-art related methods, beating the best-performing baseline by 6.5% on average in terms of test accuracy.
arXiv Detail & Related papers (2022-11-07T09:38:34Z) - Domain Generalization via Selective Consistency Regularization for Time
Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z) - Semantic-Aware Domain Generalized Segmentation [67.49163582961877]
Deep models trained on a source domain lack generalization when evaluated on unseen target domains with different data distributions.
We propose a framework including two novel modules: Semantic-Aware Normalization (SAN) and Semantic-Aware Whitening (SAW)
Our approach shows significant improvements over existing state-of-the-art on various backbone networks.
arXiv Detail & Related papers (2022-04-02T09:09:59Z) - Aligning Domain-specific Distribution and Classifier for Cross-domain
Classification from Multiple Sources [25.204055330850164]
We propose a new framework with two alignment stages for Unsupervised Domain Adaptation.
Our method can achieve remarkable results on popular benchmark datasets for image classification.
arXiv Detail & Related papers (2022-01-04T06:35:11Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques to adapt semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within the estimated categories.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z) - Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z) - Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553]
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA)
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network.
The proposed method outperforms the comparison methods and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-03-29T04:31:43Z)
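The entropy-minimization constraint mentioned in the RCA entry above can be sketched as follows; this is an illustrative reconstruction under generic assumptions, not code from any of the papers listed:

```python
import numpy as np

def entropy_loss(logits):
    """Mean Shannon entropy of the softmax predictions.

    Minimizing this over unlabeled data pushes the classifier toward
    confident (low-entropy) predictions, which is one of the two
    constraints RCA applies alongside virtual adversarial training."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# Confident logits yield lower entropy than near-uniform logits.
confident = np.array([[10.0, 0.0], [0.0, 10.0]])
uncertain = np.array([[0.1, 0.0], [0.0, 0.1]])
low, high = entropy_loss(confident), entropy_loss(uncertain)
```

In practice this term is added to the supervised loss with a small weight, so the model stays calibrated on labeled data while becoming more decisive on unlabeled instances.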
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all summaries) and is not responsible for any consequences of its use.