Pseudo Labeling and Negative Feedback Learning for Large-scale
Multi-label Domain Classification
- URL: http://arxiv.org/abs/2003.03728v1
- Date: Sun, 8 Mar 2020 06:00:15 GMT
- Authors: Joo-Kyung Kim and Young-Bum Kim
- Abstract summary: In large-scale domain classification, an utterance can be handled by multiple domains with overlapping capabilities.
In this paper, given one ground-truth domain for each training utterance, we regard domains consistently predicted with the highest confidences as additional pseudo labels for the training.
In order to reduce prediction errors due to incorrect pseudo labels, we leverage utterances with negative system responses to decrease the confidences of the incorrectly predicted domains.
- Score: 18.18754040189615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In large-scale domain classification, an utterance can be handled by multiple
domains with overlapping capabilities. However, in practice only a limited
number of ground-truth domains are provided for each training utterance, while
knowing as many correct target labels as possible helps improve model
performance. In this paper, given one ground-truth domain for each training
utterance, we regard domains consistently predicted with the highest
utterance, we regard domains consistently predicted with the highest
confidences as additional pseudo labels for the training. In order to reduce
prediction errors due to incorrect pseudo labels, we leverage utterances with
negative system responses to decrease the confidences of the incorrectly
predicted domains. Evaluating on user utterances from an intelligent
conversational system, we show that the proposed approach significantly
improves the performance of domain classification with hypothesis reranking.
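The two mechanisms in the abstract (consistent high-confidence predictions becoming extra pseudo labels, and negative system responses suppressing wrongly predicted domains) can be sketched minimally as follows. All function names, the confidence threshold, and the data layout are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def select_pseudo_labels(conf_history, ground_truth, threshold=0.9):
    """Hypothetical sketch: a domain becomes an additional pseudo label if it
    is predicted above `threshold` in every recorded epoch and is not already
    the known ground-truth domain.

    conf_history: array of shape (num_epochs, num_domains) holding the
    model's confidence for one utterance at each epoch.
    """
    consistent = (conf_history >= threshold).all(axis=0)
    pseudo = set(np.flatnonzero(consistent)) - {ground_truth}
    return sorted(pseudo)

def apply_negative_feedback(targets, negative_domains):
    """Force domains that received negative system responses back to 0 in the
    multi-label target vector, reducing the impact of incorrect pseudo labels."""
    targets = targets.copy()
    targets[list(negative_domains)] = 0.0
    return targets
```

In this toy setup, a domain that the user implicitly rejected (e.g. via a barge-in or a rephrase) would be passed to `apply_negative_feedback`, so the training loss pushes its confidence down even if it had been selected as a pseudo label earlier.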
Related papers
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level
Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label
Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner.
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Forget Less, Count Better: A Domain-Incremental Self-Distillation
Learning Benchmark for Lifelong Crowd Counting [51.44987756859706]
Off-the-shelf methods have drawbacks when handling multiple domains.
Lifelong Crowd Counting aims to alleviate catastrophic forgetting and improve generalization ability.
arXiv Detail & Related papers (2022-05-06T15:37:56Z)
- Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
Unsupervised domain adaptation (UDA) attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
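A memory-efficient temporal ensemble of the kind described can be sketched roughly as follows: instead of storing predictions from every past epoch, keep one exponential moving average of class probabilities per target sample. The class name, momentum value, and threshold are illustrative assumptions, not the paper's code:

```python
import numpy as np

class TemporalEnsemble:
    """Sketch of a memory-efficient temporal ensemble for pseudo-labeling:
    one EMA of class probabilities per target sample, updated each epoch."""

    def __init__(self, num_samples, num_classes, momentum=0.5):
        self.momentum = momentum
        self.ema = np.zeros((num_samples, num_classes))

    def update(self, idx, probs):
        """Blend the new prediction for sample `idx` into its running average."""
        self.ema[idx] = self.momentum * self.ema[idx] + (1 - self.momentum) * probs
        return self.ema[idx]

    def pseudo_labels(self, threshold=0.8):
        """Only stable, confident averaged predictions become pseudo labels;
        samples below the threshold are marked -1 (ignored in training)."""
        conf = self.ema.max(axis=1)
        labels = self.ema.argmax(axis=1)
        labels[conf < threshold] = -1
        return labels
```

Because the EMA smooths over epochs, a prediction that flips between classes never accumulates high averaged confidence, so unstable samples are filtered out of the pseudo-label set.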
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Selective Pseudo-Labeling with Reinforcement Learning for
Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
- A Label Proportions Estimation Technique for Adversarial Domain
Adaptation in Text Classification [31.788796579355274]
We introduce a domain adversarial network with label proportions estimation (DAN-LPE) framework.
DAN-LPE simultaneously trains a domain adversarial net and estimates label proportions from the confusion of the source domain and the predictions on the target domain.
Experiments show that DAN-LPE achieves a good estimate of the target label distributions and reduces the label shift, improving classification performance.
arXiv Detail & Related papers (2020-03-16T21:16:00Z)
- Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain
Adaptive Semantic Segmentation [49.295165476818866]
This paper focuses on the unsupervised domain adaptation of transferring the knowledge from the source domain to the target domain in the context of semantic segmentation.
Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data.
This paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning.
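One common way to realize this idea is to down-weight the pseudo-label loss wherever the prediction is uncertain. The sketch below uses disagreement between two classifiers (KL divergence) as the uncertainty signal; this is an illustrative variant under assumed names, not the paper's exact formulation:

```python
import numpy as np

def rectified_pseudo_loss(probs_main, probs_aux, eps=1e-8):
    """Illustrative uncertainty-rectified pseudo-label loss.

    probs_main, probs_aux: (num_samples, num_classes) class probabilities
    from two classifier heads. Samples where the heads disagree are treated
    as uncertain: their cross-entropy term is down-weighted, while a
    regularization term discourages the disagreement itself.
    """
    pseudo = probs_main.argmax(axis=1)
    # Per-sample uncertainty: KL divergence between the two predictions.
    kl = np.sum(probs_main * np.log((probs_main + eps) / (probs_aux + eps)), axis=1)
    weight = np.exp(-kl)  # agreeing (confident) samples keep weight ~1
    ce = -np.log(probs_main[np.arange(len(pseudo)), pseudo] + eps)
    return float(np.mean(weight * ce + kl))
```

The effect is that noisy pseudo labels, which typically coincide with high disagreement, contribute less gradient than the pseudo labels the model is certain about.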
arXiv Detail & Related papers (2020-03-08T12:37:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.