Description-Enhanced Label Embedding Contrastive Learning for Text
Classification
- URL: http://arxiv.org/abs/2306.08817v1
- Date: Thu, 15 Jun 2023 02:19:34 GMT
- Title: Description-Enhanced Label Embedding Contrastive Learning for Text
Classification
- Authors: Kun Zhang, Le Wu, Guangyi Lv, Enhong Chen, Shulan Ruan, Jing Liu,
Zhiqiang Zhang, Jun Zhou, Meng Wang
- Abstract summary: The paper employs Self-Supervised Learning (SSL) in the model learning process and designs a novel self-supervised Relation of Relation (R2) classification task.
It proposes a Relation of Relation Learning Network (R2-Net) for text classification, in which text classification and R2 classification are treated as optimization targets.
External knowledge from WordNet is incorporated to obtain multi-aspect descriptions for label semantic learning.
- Score: 65.01077813330559
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text Classification is one of the fundamental tasks in natural language
processing, which requires an agent to determine the most appropriate category
for input sentences. Recently, deep neural networks have achieved impressive
performance in this area, especially Pre-trained Language Models (PLMs).
Usually, these methods concentrate on input sentences and corresponding
semantic embedding generation. However, for another essential component:
labels, most existing works either treat them as meaningless one-hot vectors or
use vanilla embedding methods to learn label representations along with model
training, underestimating the semantic information and guidance that these
labels reveal. To alleviate this problem and better exploit label information,
in this paper, we employ Self-Supervised Learning (SSL) in model learning
process and design a novel self-supervised Relation of Relation (R2)
classification task for label utilization from a one-hot manner perspective.
Then, we propose a novel Relation of Relation Learning Network (R2-Net) for
text classification, in which text classification and R2 classification are
treated as optimization targets. Meanwhile, triplet loss is employed to enhance
the analysis of differences and connections among labels. Moreover, considering
that one-hot usage is still short of exploiting label information, we
incorporate external knowledge from WordNet to obtain multi-aspect descriptions
for label semantic learning and extend R2-Net to a novel Description-Enhanced
Label Embedding network (DELE) from a label embedding perspective. ...
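The abstract names three optimization targets: standard text classification, the self-supervised Relation of Relation (R2) task over sentence pairs, and a triplet loss over label relations. The snippet below is a minimal sketch of how such a joint objective could be wired together in PyTorch; the encoder, the pair-construction rule (an R2 pair is "positive" when two sentences share a class), and the loss weights are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class R2NetSketch(nn.Module):
    """Minimal sketch: text classification + R2 pair classification + triplet loss.

    `encoder` is assumed to map token ids to a fixed-size sentence embedding
    (e.g. a BERT [CLS] vector); this is illustrative, not the paper's exact model.
    """

    def __init__(self, encoder: nn.Module, hidden_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder
        self.cls_head = nn.Linear(hidden_dim, num_classes)   # ordinary text classification
        self.r2_head = nn.Linear(2 * hidden_dim, 2)          # same-class vs. different-class pair
        self.triplet = nn.TripletMarginLoss(margin=1.0)
        self.ce = nn.CrossEntropyLoss()

    def forward(self, anchor, positive, negative, labels):
        # Encode a triplet: `positive` shares the anchor's class, `negative` does not.
        h_a, h_p, h_n = self.encoder(anchor), self.encoder(positive), self.encoder(negative)

        # 1) Classification loss on the anchor sentence.
        loss_cls = self.ce(self.cls_head(h_a), labels)

        # 2) Self-supervised R2 task: predict whether a sentence pair shares a class.
        pair_feats = torch.cat([torch.cat([h_a, h_p], dim=-1),
                                torch.cat([h_a, h_n], dim=-1)], dim=0)
        r2_logits = self.r2_head(pair_feats)
        r2_targets = torch.cat([torch.ones(h_a.size(0), dtype=torch.long),
                                torch.zeros(h_a.size(0), dtype=torch.long)]).to(r2_logits.device)
        loss_r2 = self.ce(r2_logits, r2_targets)

        # 3) Triplet loss pulls same-class sentences together and pushes different-class ones apart.
        loss_tri = self.triplet(h_a, h_p, h_n)

        # The weighting below is a placeholder; the paper's weighting may differ.
        return loss_cls + loss_r2 + 0.5 * loss_tri
```

In practice the anchor/positive/negative batches would be sampled so that positives share the anchor's class; how the three losses are weighted against each other is a tunable design choice.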
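For the DELE extension, the abstract states that multi-aspect label descriptions are drawn from WordNet and used for label semantic learning. The sketch below illustrates that idea under stated assumptions: glosses, usage examples, and hypernym glosses are collected via NLTK's WordNet interface, and description-enhanced label embeddings are pulled toward matching sentence embeddings with an InfoNCE-style contrastive loss. The choice of aspects, the encoder, and the temperature are hypothetical, not the paper's exact design.

```python
import torch
import torch.nn.functional as F
from nltk.corpus import wordnet as wn   # requires a prior nltk.download('wordnet')


def multi_aspect_descriptions(label_name: str) -> list:
    """Collect several textual 'aspects' of a label from WordNet.

    The aspects used here (gloss, usage examples, hypernym glosses) are an
    assumption for illustration; the paper may use a different selection.
    """
    aspects = []
    for syn in wn.synsets(label_name)[:3]:          # a few senses per label
        aspects.append(syn.definition())            # dictionary-style gloss
        aspects.extend(syn.examples())              # usage examples, if any
        for hyper in syn.hypernyms():               # broader concepts
            aspects.append(hyper.definition())
    return aspects or [label_name]                  # fall back to the raw label name


def label_contrastive_loss(sent_emb: torch.Tensor,
                           label_emb: torch.Tensor,
                           labels: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: each sentence should be closest to its own label embedding.

    sent_emb: (batch, dim) sentence embeddings; label_emb: (num_classes, dim)
    embeddings of description-enhanced labels; labels: (batch,) gold class ids.
    """
    sent_emb = F.normalize(sent_emb, dim=-1)
    label_emb = F.normalize(label_emb, dim=-1)
    logits = sent_emb @ label_emb.t() / temperature  # (batch, num_classes) similarity matrix
    return F.cross_entropy(logits, labels)
```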
Related papers
- Leveraging Label Semantics and Meta-Label Refinement for Multi-Label Question Classification [11.19022605804112]
This paper introduces RR2QC, a novel Retrieval Reranking method for multi-label Question Classification.
It uses label semantics and meta-label refinement to enhance personalized learning and resource recommendation.
Experimental results demonstrate that RR2QC outperforms existing classification methods in Precision@k and F1 scores.
arXiv Detail & Related papers (2024-11-04T06:27:14Z) - Exploring Structured Semantic Prior for Multi Label Recognition with
Incomplete Labels [60.675714333081466]
Multi-label recognition (MLR) with incomplete labels is very challenging.
Recent works strive to explore the image-to-label correspondence in the vision-language model, i.e., CLIP, to compensate for insufficient annotations.
We advocate remedying the deficiency of label supervision for the MLR with incomplete labels by deriving a structured semantic prior.
arXiv Detail & Related papers (2023-03-23T12:39:20Z) - IDEA: Interactive DoublE Attentions from Label Embedding for Text
Classification [4.342189319523322]
We propose a novel model structure via siamese BERT and interactive double attentions named IDEA to capture the information exchange of text and label names.
Our proposed method significantly outperforms state-of-the-art methods that use label texts and yields more stable results.
arXiv Detail & Related papers (2022-09-23T04:50:47Z) - Label Semantic Aware Pre-training for Few-shot Text Classification [53.80908620663974]
We propose Label Semantic Aware Pre-training (LSAP) to improve the generalization and data efficiency of text classification systems.
LSAP incorporates label semantics into pre-trained generative models (T5 in our case) by performing secondary pre-training on labeled sentences from a variety of domains.
arXiv Detail & Related papers (2022-04-14T17:33:34Z) - Prompt-Learning for Short Text Classification [30.53216712864025]
In short text, the extremely short length, feature sparsity, and high ambiguity pose huge challenges to classification tasks.
In this paper, we propose a simple short text classification approach that makes use of prompt-learning based on knowledgeable expansion.
arXiv Detail & Related papers (2022-02-23T08:07:06Z) - R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic
Matching [58.72111690643359]
We propose a Relation of Relation Learning Network (R2-Net) for sentence semantic matching.
We first employ BERT to encode the input sentences from a global perspective.
Then a CNN-based encoder is designed to capture keywords and phrase information from a local perspective.
To fully leverage labels for better relation information extraction, we introduce a self-supervised relation of relation classification task.
arXiv Detail & Related papers (2020-12-16T13:11:30Z) - ALICE: Active Learning with Contrastive Natural Language Explanations [69.03658685761538]
We propose Active Learning with Contrastive Explanations (ALICE) to improve data efficiency in learning.
ALICE learns to first use active learning to select the most informative pairs of label classes to elicit contrastive natural language explanations.
It then extracts knowledge from these explanations and incorporates it into the learning process.
arXiv Detail & Related papers (2020-09-22T01:02:07Z) - Knowledge-Guided Multi-Label Few-Shot Learning for General Image
Recognition [75.44233392355711]
The KGGR framework exploits prior knowledge of statistical label correlations with deep neural networks.
It first builds a structured knowledge graph that correlates different labels based on statistical label co-occurrence (a minimal sketch of such a co-occurrence graph appears after this list).
Then, it introduces label semantics to guide the learning of semantic-specific features.
Finally, it exploits a graph propagation network to explore graph node interactions.
arXiv Detail & Related papers (2020-09-20T15:05:29Z) - Description Based Text Classification with Reinforcement Learning [34.18824470728299]
We propose a new framework for text classification, in which each category label is associated with a category description.
We observe significant performance boosts over strong baselines on a wide range of text classification tasks.
arXiv Detail & Related papers (2020-02-08T02:14:28Z)
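As noted in the KGGR entry above, a statistical label co-occurrence graph can serve as a structured prior for multi-label recognition. The sketch below shows one minimal way to estimate such a graph from binary multi-label annotations and to propagate label features over it; the thresholding, normalization, and mixing coefficient are illustrative assumptions rather than KGGR's actual recipe.

```python
import torch


def cooccurrence_adjacency(label_matrix: torch.Tensor, threshold: float = 0.1) -> torch.Tensor:
    """Sketch of a statistical label-correlation graph built from multi-label annotations.

    label_matrix: (num_samples, num_classes) binary annotations.
    Returns a row-normalized adjacency where A[i, j] estimates P(label j | label i).
    The threshold and normalization are illustrative assumptions.
    """
    counts = label_matrix.t().float() @ label_matrix.float()     # co-occurrence counts (C, C)
    per_label = counts.diagonal().clamp(min=1.0)                 # occurrences of each label
    cond_prob = counts / per_label.unsqueeze(1)                  # rows give P(j | i)
    adj = (cond_prob >= threshold).float() * cond_prob           # prune weak edges
    adj.fill_diagonal_(0.0)                                      # drop self-loops
    return adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-6)    # row-normalize for propagation


def propagate(features: torch.Tensor, adj: torch.Tensor, steps: int = 2) -> torch.Tensor:
    """Simple graph propagation: mix each label's feature with its neighbours' features."""
    for _ in range(steps):
        features = 0.5 * features + 0.5 * adj @ features
    return features
```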