Label Agnostic Pre-training for Zero-shot Text Classification
- URL: http://arxiv.org/abs/2305.16521v1
- Date: Thu, 25 May 2023 22:55:32 GMT
- Title: Label Agnostic Pre-training for Zero-shot Text Classification
- Authors: Christopher Clarke, Yuzhao Heng, Yiping Kang, Krisztian Flautner,
Lingjia Tang and Jason Mars
- Abstract summary: In real-world applications, there exists an infinite label space for describing a given text.
We introduce two new simple yet effective pre-training strategies, Implicit and Explicit pre-training.
These methods inject aspect-level understanding into the model at train time with the goal of conditioning the model to build task-level understanding.
- Score: 4.9081735096855565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional approaches to text classification typically assume the existence
of a fixed set of predefined labels to which a given text can be classified.
However, in real-world applications, there exists an infinite label space for
describing a given text. In addition, depending on the aspect (sentiment,
topic, etc.) and domain of the text (finance, legal, etc.), the interpretation
of the label can vary greatly. This makes the task of text classification,
particularly in the zero-shot scenario, extremely challenging. In this paper,
we investigate the task of zero-shot text classification with the aim of
improving the ability of pre-trained language models (PLMs) to generalize to
both seen and unseen data across varying aspects and domains. To solve this, we
introduce two new simple yet effective pre-training strategies, Implicit and
Explicit pre-training. These methods inject aspect-level understanding into the
model at train time with the goal of conditioning the model to build task-level
understanding. To evaluate this, we construct and release UTCD, a new benchmark
dataset for evaluating text classification in zero-shot settings. Experimental
results on UTCD show that our approach achieves improved zero-shot
generalization on a suite of challenging datasets across an array of zero-shot
formalizations.
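As a concrete reference point for the zero-shot setup described above, here is a minimal sketch of NLI-based zero-shot classification with an aspect-aware hypothesis template, using the Hugging Face transformers pipeline. The model choice, aspect prefix, and template wording are illustrative assumptions; this is not the authors' Implicit/Explicit pre-training recipe.

```python
# Minimal sketch: aspect-conditioned zero-shot classification via an NLI model.
# Assumes the Hugging Face `transformers` zero-shot pipeline; the hypothesis
# template that injects the aspect is illustrative, not the paper's exact format.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

text = "The quarterly earnings beat analyst expectations."
aspect = "topic"                                  # e.g. sentiment, topic, intent
labels = ["finance", "sports", "politics"]        # open label set, chosen at inference time

result = classifier(
    text,
    candidate_labels=labels,
    hypothesis_template=f"The {aspect} of this text is {{}}.",
)
print(result["labels"][0], result["scores"][0])   # top label and its score
```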
Related papers
- A Fixed-Point Approach to Unified Prompt-Based Counting [51.20608895374113]
This paper aims to establish a comprehensive prompt-based counting framework capable of generating density maps for objects indicated by various prompt types, such as box, point, and text.
Our model excels in prominent class-agnostic datasets and exhibits superior performance in cross-dataset adaptation tasks.
arXiv Detail & Related papers (2024-03-15T12:05:44Z)
- Gen-Z: Generative Zero-Shot Text Classification with Contextualized Label Descriptions [50.92702206798324]
We propose a generative prompting framework for zero-shot text classification.
GEN-Z measures the LM likelihood of input text conditioned on natural language descriptions of labels.
We show that zero-shot classification with simple contextualization of the data source consistently outperforms both zero-shot and few-shot baselines.
arXiv Detail & Related papers (2023-11-13T07:12:57Z)
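For the GEN-Z formulation above, a minimal sketch of likelihood-based label scoring: each label is scored by the LM log-likelihood of the input text conditioned on a natural-language label description. The model (gpt2) and the description wording are assumptions, not the paper's exact setup.

```python
# Sketch: score each label by the LM likelihood of the input text conditioned
# on a natural-language label description (GEN-Z-style). Assumes a clean token
# boundary between the prompt and the text, which holds for this newline-ended prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def label_score(text: str, description: str) -> float:
    prompt_len = tok(description + "\n", return_tensors="pt").input_ids.shape[1]
    full_ids = tok(description + "\n" + text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(full_ids).logits
    # log-likelihood of the text tokens only, given the label description
    logprobs = torch.log_softmax(logits[0, prompt_len - 1:-1], dim=-1)
    text_ids = full_ids[0, prompt_len:]
    return logprobs.gather(1, text_ids[:, None]).sum().item()

descriptions = {"positive": "The following is a positive movie review.",
                "negative": "The following is a negative movie review."}
text = "A moving, beautifully shot film."
print(max(descriptions, key=lambda y: label_score(text, descriptions[y])))
```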
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with a Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z)
- PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification [32.02762416063338]
PESCO is a contrastive learning framework that substantially improves the performance of zero-shot text classification.
PESCO achieves state-of-the-art performance on four benchmark text classification datasets.
arXiv Detail & Related papers (2023-05-24T09:57:06Z)
- Zero-Shot Text Classification via Self-Supervised Tuning [46.9902502503747]
We propose a new paradigm for zero-shot text classification based on self-supervised learning: tuning language models with unlabeled data, which we call self-supervised tuning.
Our model outperforms the state-of-the-art baselines on 7 out of 10 tasks.
arXiv Detail & Related papers (2023-05-19T05:47:33Z)
- Like a Good Nearest Neighbor: Practical Content Moderation and Text Classification [66.02091763340094]
Like a Good Nearest Neighbor (LaGoNN) is a modification to SetFit that introduces no learnable parameters but alters input text with information from its nearest neighbor.
LaGoNN is effective both at flagging undesirable content and at general text classification, and it improves the performance of SetFit.
arXiv Detail & Related papers (2023-02-17T15:43:29Z)
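A rough sketch of the LaGoNN idea above: before classification, each input is augmented with text and label information from its nearest labeled neighbor, adding no learnable parameters. The embedding model and the concatenation format are assumptions.

```python
# Sketch: augment an input with its nearest labeled neighbor's text and label
# before handing it to a downstream classifier (LaGoNN-style, no new parameters).
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

train_texts = ["win a free prize now", "meeting moved to 3pm"]
train_labels = ["spam", "not spam"]
train_emb = encoder.encode(train_texts, convert_to_tensor=True)

def augment(text: str) -> str:
    query_emb = encoder.encode(text, convert_to_tensor=True)
    idx = int(util.cos_sim(query_emb, train_emb).argmax())
    # append the neighbor's text and label; the format is illustrative
    return f"{text} [SEP] {train_texts[idx]} ({train_labels[idx]})"

print(augment("claim your free reward today"))
```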
- Task-Specific Embeddings for Ante-Hoc Explainable Text Classification [6.671252951387647]
We propose an alternative training objective in which we learn task-specific embeddings of text.
Our proposed objective learns embeddings such that all texts that share the same target class label should be close together.
We present extensive experiments which show that the benefits of ante-hoc explainability and incremental learning come at no cost in overall classification accuracy.
arXiv Detail & Related papers (2022-11-30T19:56:25Z)
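One plausible reading of the objective above in code: a triplet-style loss that pulls texts with the same class label together and pushes other classes apart. The encoder and the specific loss are assumptions, not necessarily the paper's exact objective.

```python
# Sketch: embeddings where same-class texts are close and other classes are far,
# illustrated with a triplet margin loss (one possible instantiation).
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

anchor   = encoder.encode("great movie, loved it", convert_to_tensor=True)
positive = encoder.encode("really enjoyable film", convert_to_tensor=True)      # same class
negative = encoder.encode("terrible, a waste of time", convert_to_tensor=True)  # different class

loss = F.triplet_margin_loss(anchor.unsqueeze(0),
                             positive.unsqueeze(0),
                             negative.unsqueeze(0), margin=1.0)
print(float(loss))
```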
- Zero-Shot Text Classification with Self-Training [8.68603153534916]
We show that fine-tuning the zero-shot classifier on its most confident predictions leads to significant performance gains across a wide range of text classification tasks.
Self-training adapts the zero-shot model to the task at hand.
arXiv Detail & Related papers (2022-10-31T17:55:00Z)
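A minimal sketch of the self-training recipe above: pseudo-label unlabeled texts with a zero-shot classifier and keep only its most confident predictions as fine-tuning data. The threshold and model choice are illustrative.

```python
# Sketch: select high-confidence zero-shot predictions as pseudo-labels for
# fine-tuning (the fine-tuning step itself is omitted here).
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
labels = ["sports", "business", "technology"]
unlabeled = ["The striker scored twice in the final.",
             "Shares fell sharply after the earnings call."]

pseudo_labeled = []
for text in unlabeled:
    out = classifier(text, candidate_labels=labels)
    if out["scores"][0] >= 0.9:                    # confidence filter (assumed threshold)
        pseudo_labeled.append((text, out["labels"][0]))

print(pseudo_labeled)  # these pairs would be used to fine-tune the classifier
```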
- Beyond prompting: Making Pre-trained Language Models Better Zero-shot Learners by Clustering Representations [24.3378487252621]
We show that zero-shot text classification can be improved simply by clustering texts in the embedding spaces of pre-trained language models.
Our approach achieves an average of 20% absolute improvement over prompt-based zero-shot learning.
arXiv Detail & Related papers (2022-10-29T16:01:51Z)
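A rough sketch of clustering-based zero-shot classification as summarized above: texts are embedded and clustered, and each cluster is mapped to a label. The rule mapping clusters to labels (nearest label-name embedding to the centroid) is an assumption, not necessarily the paper's procedure.

```python
# Sketch: cluster text embeddings, then assign each cluster the label whose
# name embedding is closest to the cluster centroid.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

encoder = SentenceTransformer("all-MiniLM-L6-v2")
texts = ["The team won the championship.",
         "Stocks rallied on strong earnings.",
         "The coach praised the defense."]
labels = ["sports", "business"]

text_emb = encoder.encode(texts)
label_emb = encoder.encode(labels)

kmeans = KMeans(n_clusters=len(labels), n_init=10).fit(text_emb)
cluster_to_label = cosine_similarity(kmeans.cluster_centers_, label_emb).argmax(axis=1)
print([(t, labels[cluster_to_label[c]]) for t, c in zip(texts, kmeans.labels_)])
```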
- Label Semantic Aware Pre-training for Few-shot Text Classification [53.80908620663974]
We propose Label Semantic Aware Pre-training (LSAP) to improve the generalization and data efficiency of text classification systems.
LSAP incorporates label semantics into pre-trained generative models (T5 in our case) by performing secondary pre-training on labeled sentences from a variety of domains.
arXiv Detail & Related papers (2022-04-14T17:33:34Z)
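A toy sketch of label-semantic-aware secondary pre-training in the spirit of LSAP: labeled sentences are cast as text-to-text pairs so a generative model (T5 here) learns to produce natural-language label names. The prompt wording and the single-example training loop are illustrative.

```python
# Sketch: secondary pre-training of T5 on (utterance -> label name) pairs so the
# model learns label semantics as natural language (toy loop, no batching/padding).
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

examples = [("book a flight to Boston", "flight booking"),
            ("what's the weather tomorrow", "weather query")]

for text, label in examples:
    inputs = tok(f"classify: {text}", return_tensors="pt")
    targets = tok(label, return_tensors="pt").input_ids
    loss = model(**inputs, labels=targets).loss   # standard seq2seq LM loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```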