Co-Regularized Adversarial Learning for Multi-Domain Text Classification
- URL: http://arxiv.org/abs/2201.12796v1
- Date: Sun, 30 Jan 2022 12:15:41 GMT
- Title: Co-Regularized Adversarial Learning for Multi-Domain Text Classification
- Authors: Yuan Wu, Diana Inkpen, Ahmed El-Roby
- Abstract summary: Multi-domain text classification (MDTC) aims to leverage all available resources from multiple domains to learn a predictive model that can generalize well on these domains.
Recently, many MDTC methods adopt adversarial learning, shared-private paradigm, and entropy minimization to yield state-of-the-art results.
These approaches face three issues: (1) Minimizing domain divergence cannot fully guarantee the success of domain alignment; (2) Aligning marginal feature distributions cannot fully guarantee the discriminability of the learned features; and (3) Standard entropy minimization may make the predictions on unlabeled data over-confident, deteriorating the discriminability of the learned features.
- Score: 19.393393465837377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-domain text classification (MDTC) aims to leverage all available
resources from multiple domains to learn a predictive model that can generalize
well on these domains. Recently, many MDTC methods adopt adversarial learning,
shared-private paradigm, and entropy minimization to yield state-of-the-art
results. However, these approaches face three issues: (1) Minimizing domain
divergence cannot fully guarantee the success of domain alignment; (2)
Aligning marginal feature distributions cannot fully guarantee the
discriminability of the learned features; and (3) Standard entropy minimization
may make the predictions on unlabeled data over-confident, deteriorating the
discriminability of the learned features. To address the above issues,
we propose a co-regularized adversarial learning (CRAL) mechanism for MDTC.
This approach constructs two diverse shared latent spaces, performs domain
alignment in each of them, and penalizes the disagreement between these two
alignments with respect to the predictions on unlabeled data. Moreover, virtual
adversarial training (VAT) with entropy minimization is incorporated to impose
consistency regularization to the CRAL method. Experiments show that our model
outperforms state-of-the-art methods on two MDTC benchmarks.
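The two regularizers the abstract describes can be sketched numerically: a disagreement penalty between the two alignments' predictions on unlabeled data, and an entropy-minimization term that sharpens those predictions. The pure-Python sketch below is our own illustration, not the authors' implementation; the function names and the choice of a symmetric KL divergence as the disagreement measure are assumptions for demonstration.

```python
import math

def softmax(logits):
    """Convert raw scores to a probability distribution over classes."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q, eps=1e-12):
    """KL divergence KL(p || q) between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def disagreement_loss(p1, p2):
    """Symmetric KL between the predictions produced from the two
    shared latent spaces; zero when the two alignments agree."""
    return 0.5 * (kl(p1, p2) + kl(p2, p1))

def entropy_loss(p, eps=1e-12):
    """Shannon entropy of a prediction; minimizing it pushes the model
    toward confident (low-entropy) predictions on unlabeled data."""
    return -sum(pi * math.log(pi + eps) for pi in p)

# Hypothetical predictions from the two heads for one unlabeled example:
p1 = softmax([2.0, 0.5, -1.0])
p2 = softmax([1.5, 0.8, -0.5])
co_reg = disagreement_loss(p1, p2)   # small when the heads agree
sharpness = entropy_loss(p1)         # small when the head is confident
```

Note that entropy minimization alone can make predictions over-confident (issue (3) above); in the paper this is tempered by combining it with virtual adversarial training, which additionally requires gradient-based perturbations and is omitted from this sketch.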
Related papers
- Margin Discrepancy-based Adversarial Training for Multi-Domain Text
Classification [6.629561563470492]
Multi-domain text classification (MDTC) endeavors to harness available resources from correlated domains to enhance the classification accuracy of the target domain.
Most MDTC approaches that embrace adversarial training and the shared-private paradigm exhibit cutting-edge performance.
We propose a margin discrepancy-based adversarial training (MDAT) approach for MDTC, in accordance with our theoretical analysis.
arXiv Detail & Related papers (2024-03-01T11:54:14Z) - Regularized Conditional Alignment for Multi-Domain Text Classification [6.629561563470492]
We propose a method called Regularized Conditional Alignment (RCA) to align the joint distributions of domains and classes.
We employ entropy minimization and virtual adversarial training to constrain the uncertainty of predictions pertaining to unlabeled data.
Empirical results on two benchmark datasets demonstrate that our RCA approach outperforms state-of-the-art MDTC techniques.
arXiv Detail & Related papers (2023-12-18T05:52:05Z) - Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
One of these methods minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Generalizable Representation Learning for Mixture Domain Face
Anti-Spoofing [53.82826073959756]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness for unseen scenarios.
To overcome this limitation, we propose domain dynamic adjustment meta-learning (D2AM) without using domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z) - Margin Preserving Self-paced Contrastive Learning Towards Domain
Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z) - Mixup Regularized Adversarial Networks for Multi-Domain Text
Classification [16.229317527580072]
Using the shared-private paradigm and adversarial training has significantly improved the performances of multi-domain text classification (MDTC) models.
However, there are two issues for the existing methods.
We propose a mixup regularized adversarial network (MRAN) to address these two issues.
arXiv Detail & Related papers (2021-01-31T15:24:05Z) - Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem where the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A²KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z) - Dual Mixup Regularized Learning for Adversarial Domain Adaptation [19.393393465837377]
We propose a dual mixup regularized learning (DMRL) method for unsupervised domain adaptation.
DMRL guides the classifier in enhancing consistent predictions in-between samples, and enriches the intrinsic structures of the latent space.
A series of empirical studies on four domain adaptation benchmarks demonstrate that our approach can achieve the state-of-the-art.
arXiv Detail & Related papers (2020-07-07T00:24:14Z) - Learning transferable and discriminative features for unsupervised
domain adaptation [6.37626180021317]
Unsupervised domain adaptation overcomes the scarcity of target-domain labels by transferring knowledge from a labeled source domain to an unlabeled target domain.
In this paper, a novel method called Learning Transferable and Discriminative Features for unsupervised domain adaptation (TLearning) is proposed to optimize these two objectives simultaneously.
Comprehensive experiments are conducted on five real-world datasets and the results verify the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-03-26T03:15:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.