Label-guided Learning for Text Classification
- URL: http://arxiv.org/abs/2002.10772v1
- Date: Tue, 25 Feb 2020 10:05:56 GMT
- Title: Label-guided Learning for Text Classification
- Authors: Xien Liu, Song Wang, Xiao Zhang, Xinxin You, Ji Wu and Dejing Dou
- Abstract summary: We propose a label-guided learning framework LguidedLearn for text representation and classification.
Our method is novel yet simple: we only insert a label-guided encoding layer into commonly used text representation learning schemas.
In our proposed framework, the label-guided layer can be easily and directly combined with a contextual encoding method to perform joint learning.
- Score: 47.15144153830845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text classification is one of the most important and fundamental tasks in
natural language processing. The performance of this task depends mainly on text
representation learning. Currently, most existing learning frameworks mainly
focus on encoding local contextual information between words. These methods
always neglect to exploit global clues, such as label information, for encoding
text information. In this study, we propose a label-guided learning framework
LguidedLearn, for text representation and classification. Our method is novel
yet simple: we only insert a label-guided encoding layer into commonly
used text representation learning schemas. The label-guided layer performs
label-based attentive encoding to map the universal text embedding (encoded by
a contextual information learner) into different label spaces, resulting in
label-wise embeddings. In our proposed framework, the label-guided layer can be
easily and directly combined with a contextual encoding method to perform
joint learning. Text information is encoded based on both local
contextual information and global label clues. Therefore, the obtained text
embeddings are more robust and discriminative for text classification.
Extensive experiments are conducted on benchmark datasets to illustrate the
effectiveness of our proposed method.
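The label-guided attentive encoding described above can be sketched as follows. This is a minimal numpy illustration of the general idea (label embeddings attending over contextual token embeddings to produce label-wise text embeddings); the exact formulation and parameterization in the paper may differ, and the embedding values here are random placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def label_guided_encoding(token_emb, label_emb):
    # scores[l, t]: attention weight of label l over token t
    scores = softmax(label_emb @ token_emb.T, axis=-1)
    # one attention-weighted text embedding per label (label-wise embeddings)
    return scores @ token_emb  # shape: (n_labels, dim)

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))  # contextual token embeddings (e.g. from a BiLSTM/BERT encoder)
C = rng.normal(size=(3, 4))  # learnable label embeddings, one per class
E = label_guided_encoding(H, C)
print(E.shape)  # (3, 4)
```

The resulting label-wise embeddings can then be fed to a classifier head, so each class scores the text through its own attention view rather than a single shared pooled vector.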
Related papers
- Efficiently Leveraging Linguistic Priors for Scene Text Spotting [63.22351047545888]
This paper proposes a method that leverages linguistic knowledge from a large text corpus to replace the traditional one-hot encoding used in auto-regressive scene text spotting and recognition models.
We generate text distributions that align well with scene text datasets, removing the need for in-domain fine-tuning.
Experimental results show that our method not only improves recognition accuracy but also enables more accurate localization of words.
arXiv Detail & Related papers (2024-02-27T01:57:09Z)
- Description-Enhanced Label Embedding Contrastive Learning for Text Classification [65.01077813330559]
The method introduces Self-Supervised Learning (SSL) into the model learning process and designs a novel self-supervised Relation of Relation (R2) classification task.
It builds a Relation of Relation Learning Network (R2-Net) for text classification, in which text classification and R2 classification are treated as joint optimization targets.
External knowledge from WordNet is used to obtain multi-aspect descriptions for label semantic learning.
arXiv Detail & Related papers (2023-06-15T02:19:34Z)
- Improve Text Classification Accuracy with Intent Information [0.38073142980733]
Existing methods do not consider the use of label information, which may weaken the performance of text classification systems in some token-aware scenarios.
We introduce label information as label embeddings for the task of text classification and achieve remarkable performance on benchmark datasets.
arXiv Detail & Related papers (2022-12-15T08:15:32Z)
- IDEA: Interactive DoublE Attentions from Label Embedding for Text Classification [4.342189319523322]
We propose a novel model named IDEA, built on a siamese BERT with interactive double attentions, to capture the information exchange between texts and label names.
Our proposed method significantly outperforms state-of-the-art methods that use label texts, with more stable results.
arXiv Detail & Related papers (2022-09-23T04:50:47Z)
- Open-Vocabulary Multi-Label Classification via Multi-modal Knowledge Transfer [55.885555581039895]
Multi-label zero-shot learning (ML-ZSL) focuses on transferring knowledge via a pre-trained textual label embedding.
We propose a novel open-vocabulary framework, named multimodal knowledge transfer (MKT) for multi-label classification.
arXiv Detail & Related papers (2022-07-05T08:32:18Z)
- Label Semantic Aware Pre-training for Few-shot Text Classification [53.80908620663974]
We propose Label Semantic Aware Pre-training (LSAP) to improve the generalization and data efficiency of text classification systems.
LSAP incorporates label semantics into pre-trained generative models (T5 in our case) by performing secondary pre-training on labeled sentences from a variety of domains.
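The LSAP recipe above, which is to expose a generative model to label words during a secondary pre-training stage, can be sketched as a simple data-formatting step. The template and example below are hypothetical illustrations; the exact input/target format used by LSAP may differ.

```python
def to_pretraining_pair(sentence, label_name):
    """Turn a labeled sentence into a text-to-text pair so a generative
    model (T5-style) learns to generate the label's words.
    Hypothetical template; LSAP's actual format may differ."""
    return {"input": f"classify: {sentence}", "target": label_name}

pair = to_pretraining_pair("play the new album by Adele", "play music")
print(pair["input"])   # classify: play the new album by Adele
print(pair["target"])  # play music
```

Because the target is the label's natural-language name rather than an index, the model can exploit label semantics and generalize to unseen label sets in few-shot settings.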
arXiv Detail & Related papers (2022-04-14T17:33:34Z)
- Prompt-Learning for Short Text Classification [30.53216712864025]
In short texts, the extremely short length, feature sparsity, and high ambiguity pose huge challenges to classification tasks.
In this paper, we propose a simple short text classification approach that makes use of prompt-learning based on knowledgeable expansion.
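The prompt-learning idea referenced above can be sketched as a cloze template plus a verbalizer. The template and verbalizer below are hypothetical illustrations of the general technique, not the specific design used in that paper.

```python
def build_prompt(short_text, mask_token="[MASK]"):
    """Wrap a short text in a cloze-style template; a masked language
    model fills the blank, and a verbalizer maps the predicted word to
    a class label. Template here is a hypothetical example."""
    return f"{short_text} This text is about {mask_token}."

# verbalizer: predicted word at the mask -> class label (hypothetical)
verbalizer = {"sports": "Sports", "politics": "Politics"}

prompt = build_prompt("Lakers edge Celtics 108-102 in overtime")
print(prompt)  # Lakers edge Celtics 108-102 in overtime This text is about [MASK].
```

Reframing classification as mask-filling lets the model reuse its pre-trained language-modeling head, which helps when the short text alone carries too few features.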
arXiv Detail & Related papers (2022-02-23T08:07:06Z)
- Scene Text Detection with Scribble Lines [59.698806258671105]
We propose to annotate texts by scribble lines instead of polygons for text detection.
It is a general labeling method for texts of various shapes and incurs low labeling costs.
Experiments show that the proposed method bridges the performance gap between the weakly labeling method and the original polygon-based labeling methods.
arXiv Detail & Related papers (2020-12-09T13:14:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.