IDEA: Interactive DoublE Attentions from Label Embedding for Text Classification
- URL: http://arxiv.org/abs/2209.11407v1
- Date: Fri, 23 Sep 2022 04:50:47 GMT
- Title: IDEA: Interactive DoublE Attentions from Label Embedding for Text Classification
- Authors: Ziyuan Wang, Hailiang Huang, Songqiao Han
- Abstract summary: We propose IDEA, a novel model structure based on siamese BERT and interactive double attentions, to capture the information exchange between text and label names.
Our proposed method significantly outperforms state-of-the-art methods that use label texts, and yields more stable results.
- Score: 4.342189319523322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current text classification methods typically encode the text merely
into embeddings that are fed to a simple or complex classifier, ignoring the
suggestive information contained in the label text. In fact, humans classify
documents primarily based on the semantic meaning of the subcategories. We
propose IDEA (Interactive DoublE Attentions), a novel model structure built on
siamese BERT and interactive double attentions, to capture the information
exchange between text and label names. The interactive double attentions enable
the model to exploit inter-class and intra-class information from coarse to
fine, which involves distinguishing among all labels and matching the semantic
subclasses of the ground truth labels. Our proposed method significantly
outperforms state-of-the-art methods that use label texts, and yields more
stable results.
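The core idea of the abstract — encoding label names alongside the text and letting the two attend to each other in both directions — can be sketched in plain numpy. This is a minimal illustration of dot-product cross-attention between token and label-name embeddings, not the authors' exact formulation; the function name and the logit computation are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interactive_double_attention(text_emb, label_emb):
    """text_emb: (T, d) token embeddings from a text encoder.
    label_emb: (C, d) label-name embeddings from a weight-sharing
    (siamese) copy of the same encoder.
    Returns label-aware token representations and per-class logits."""
    scores = text_emb @ label_emb.T                      # (T, C) token-label affinities
    # text -> label attention: each token aggregates the labels it attends to
    label_aware = softmax(scores, axis=1) @ label_emb    # (T, d)
    # label -> text attention: each label aggregates the tokens it attends to
    text_aware = softmax(scores.T, axis=1) @ text_emb    # (C, d)
    # hypothetical scoring: match each label embedding against its
    # text-aware representation to produce one logit per class
    logits = (text_aware * label_emb).sum(axis=1)        # (C,)
    return label_aware, logits
```

In the paper the two encoders are siamese BERT instances; here both inputs are just pre-computed embedding matrices so the attention mechanics stand alone.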
Related papers
- Unlocking the Multi-modal Potential of CLIP for Generalized Category Discovery [50.564146730579424]
We propose a Text Embedding Synthesizer (TES) to generate pseudo text embeddings for unlabelled samples.
Our method unlocks the multi-modal potentials of CLIP and outperforms the baseline methods by a large margin on all GCD benchmarks.
arXiv Detail & Related papers (2024-03-15T02:40:13Z)
- Description-Enhanced Label Embedding Contrastive Learning for Text Classification [65.01077813330559]
The paper introduces Self-Supervised Learning (SSL) into the model learning process and designs a novel self-supervised Relation of Relation (R2) classification task.
It proposes a Relation of Relation Learning Network (R2-Net) for text classification, in which text classification and R2 classification are treated as joint optimization targets.
It exploits external knowledge from WordNet to obtain multi-aspect descriptions for label semantic learning.
arXiv Detail & Related papers (2023-06-15T02:19:34Z)
- Label Semantic Aware Pre-training for Few-shot Text Classification [53.80908620663974]
We propose Label Semantic Aware Pre-training (LSAP) to improve the generalization and data efficiency of text classification systems.
LSAP incorporates label semantics into pre-trained generative models (T5 in our case) by performing secondary pre-training on labeled sentences from a variety of domains.
arXiv Detail & Related papers (2022-04-14T17:33:34Z)
- Prompt-Learning for Short Text Classification [30.53216712864025]
In short texts, the extremely short length, feature sparsity, and high ambiguity pose huge challenges to classification tasks.
In this paper, we propose a simple short text classification approach that makes use of prompt-learning based on knowledgeable expansion.
arXiv Detail & Related papers (2022-02-23T08:07:06Z)
- MATCH: Metadata-Aware Text Classification in A Large Hierarchy [60.59183151617578]
MATCH is an end-to-end framework that leverages both metadata and hierarchy information.
We propose different ways to regularize the parameters and output probability of each child label by its parents.
Experiments on two massive text datasets with large-scale label hierarchies demonstrate the effectiveness of MATCH.
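The MATCH summary mentions regularizing each child label's parameters and output probability by its parents. One plausible form such a hierarchy regularizer could take — an assumption for illustration, not MATCH's actual loss — pulls each child's weight vector toward its parent's and penalizes a child probability exceeding its parent's:

```python
import numpy as np

def hierarchy_regularizer(child_w, parent_w, child_p, parent_p, lam=0.1):
    """Hypothetical parent-child regularizer for hierarchical labels.
    child_w, parent_w: weight vectors of a child label and its parent.
    child_p, parent_p: predicted probabilities for the same document."""
    # parameter term: keep the child classifier close to its parent's
    param_term = np.sum((child_w - parent_w) ** 2)
    # output term: a child label should not be more probable than its parent
    prob_term = np.sum(np.maximum(child_p - parent_p, 0.0))
    return lam * (param_term + prob_term)
```

Summed over all parent-child edges in the label hierarchy, a term like this would be added to the classification loss.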
arXiv Detail & Related papers (2021-02-15T05:23:08Z)
- Hierarchical Image Classification using Entailment Cone Embeddings [68.82490011036263]
We first inject label-hierarchy knowledge into an arbitrary CNN-based classifier.
We empirically show that availability of such external semantic information in conjunction with the visual semantics from images boosts overall performance.
arXiv Detail & Related papers (2020-04-02T10:22:02Z)
- Label-guided Learning for Text Classification [47.15144153830845]
We propose a label-guided learning framework LguidedLearn for text representation and classification.
Our method is novel yet simple: we only insert a label-guided encoding layer into commonly used text representation learning schemas.
In our proposed framework, the label-guided layer can be easily and directly combined with a contextual encoding method to perform joint learning.
arXiv Detail & Related papers (2020-02-25T10:05:56Z)
- Description Based Text Classification with Reinforcement Learning [34.18824470728299]
We propose a new framework for text classification, in which each category label is associated with a category description.
We observe significant performance boosts over strong baselines on a wide range of text classification tasks.
arXiv Detail & Related papers (2020-02-08T02:14:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.