Cluster & Tune: Boost Cold Start Performance in Text Classification
- URL: http://arxiv.org/abs/2203.10581v1
- Date: Sun, 20 Mar 2022 15:29:34 GMT
- Title: Cluster & Tune: Boost Cold Start Performance in Text Classification
- Authors: Eyal Shnarch, Ariel Gera, Alon Halfon, Lena Dankin, Leshem Choshen,
Ranit Aharonov, Noam Slonim
- Abstract summary: In real-world scenarios, a text classification task often begins with a cold start, when labeled data is scarce.
We suggest a method to boost the performance of such models by adding an intermediate unsupervised classification task.
- Score: 21.957605438780224
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In real-world scenarios, a text classification task often begins with a cold
start, when labeled data is scarce. In such cases, the common practice of
fine-tuning pre-trained models, such as BERT, for a target classification task,
is prone to produce poor performance. We suggest a method to boost the
performance of such models by adding an intermediate unsupervised
classification task, between the pre-training and fine-tuning phases. As such
an intermediate task, we perform clustering and train the pre-trained model on
predicting the cluster labels. We test this hypothesis on various data sets,
and show that this additional classification phase can significantly improve
performance, mainly for topical classification tasks, when the number of
labeled instances available for fine-tuning is only a couple of dozen to a few
hundred.
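The recipe in the abstract lends itself to a short sketch: cluster the unlabeled training texts, fine-tune the pre-trained model on predicting the cluster ids as an intermediate task, then fine-tune again on the scarce real labels. The code below is a minimal illustration under assumed choices (TF-IDF features, KMeans, bert-base-uncased, the Hugging Face Trainer, toy hyperparameters); it is not the paper's exact configuration.

# Minimal cluster-and-tune sketch: (1) cluster unlabeled texts, (2) train the
# pre-trained model to predict cluster ids, (3) fine-tune on the few real labels.
# All modeling choices and hyperparameters here are illustrative assumptions.
import torch
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

class TextDataset(torch.utils.data.Dataset):
    """Wraps tokenized texts and integer labels for the Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(list(texts), truncation=True,
                             padding="max_length", max_length=128)
        self.labels = [int(l) for l in labels]
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

def train(model_path, texts, labels, num_labels, out_dir, epochs=1):
    """Fine-tune a sequence classifier on (texts, labels) and save it."""
    model = AutoModelForSequenceClassification.from_pretrained(
        model_path, num_labels=num_labels, ignore_mismatched_sizes=True)
    args = TrainingArguments(output_dir=out_dir, num_train_epochs=epochs,
                             per_device_train_batch_size=16, logging_steps=50)
    Trainer(model=model, args=args,
            train_dataset=TextDataset(texts, labels)).train()
    model.save_pretrained(out_dir)
    tokenizer.save_pretrained(out_dir)
    return out_dir

def cluster_and_tune(unlabeled_texts, few_texts, few_labels, n_clusters=50):
    # Step 1: unsupervised clustering of the unlabeled corpus (TF-IDF + KMeans).
    vecs = TfidfVectorizer(max_features=20000,
                           stop_words="english").fit_transform(unlabeled_texts)
    cluster_ids = KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=0).fit_predict(vecs)
    # Step 2: intermediate task -- predict cluster ids with the pre-trained model.
    inter = train(MODEL_NAME, unlabeled_texts, cluster_ids, n_clusters, "inter-model")
    # Step 3: fine-tune the intermediate model on the scarce real labels (ints 0..K-1).
    return train(inter, few_texts, few_labels, len(set(few_labels)), "final-model")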
Related papers
- Bridging the Gap: Learning Pace Synchronization for Open-World Semi-Supervised Learning [44.91863420044712]
In open-world semi-supervised learning, a machine learning model is tasked with uncovering novel categories from unlabeled data.
We introduce 1) the adaptive synchronizing marginal loss which imposes class-specific negative margins to alleviate the model bias towards seen classes, and 2) the pseudo-label contrastive clustering which exploits pseudo-labels predicted by the model to group unlabeled data from the same category together.
Our method balances the learning pace between seen and novel classes, achieving a remarkable 3% average accuracy increase on the ImageNet dataset.
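As a rough illustration of the first ingredient, a class-specific negative margin can be subtracted from the logits of the seen classes before the cross-entropy, discouraging the bias toward them. The sketch below is a generic version with an assumed fixed margin and seen-class mask; it is not the paper's adaptive synchronizing marginal loss.

# Generic class-specific negative-margin cross-entropy (illustrative only).
import torch
import torch.nn.functional as F

def margin_adjusted_ce(logits, targets, seen_mask, margin=0.5):
    """seen_mask: bool tensor of shape (num_classes,), True for seen classes."""
    adjusted = logits - margin * seen_mask.float()  # penalize seen-class logits
    return F.cross_entropy(adjusted, targets)

logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
seen = torch.tensor([True] * 6 + [False] * 4)  # assume the first 6 classes are seen
margin_adjusted_ce(logits, targets, seen).backward()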
arXiv Detail & Related papers (2023-09-21T09:44:39Z)
- ProTeCt: Prompt Tuning for Taxonomic Open Set Classification [59.59442518849203]
Few-shot adaptation methods do not fare well in the taxonomic open set (TOS) setting.
We propose a prompt tuning technique that calibrates the hierarchical consistency of model predictions.
A new Prompt Tuning for Hierarchical Consistency (ProTeCt) technique is then proposed to calibrate classification across label set granularities.
arXiv Detail & Related papers (2023-06-04T02:55:25Z)
- Self-supervised Pretraining with Classification Labels for Temporal Activity Detection [54.366236719520565]
Temporal Activity Detection aims to predict activity classes per frame.
Due to the expensive frame-level annotations required for detection, the scale of detection datasets is limited.
This work proposes a novel self-supervised pretraining method for detection leveraging classification labels.
arXiv Detail & Related papers (2021-11-26T18:59:28Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
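As a generic illustration of the prototype idea (not necessarily this paper's exact construction), class prototypes can be taken as the mean embedding of each class's training examples, and queries assigned to the nearest prototype, so no classifier parameters are fit on top of the embedding network. The embeddings and distance below are placeholder assumptions.

# Nearest-class-mean prototype classifier (illustrative sketch).
import numpy as np

def build_prototypes(embeddings, labels):
    """One prototype per class: the mean embedding of that class's examples.
    Each class contributes exactly one prototype regardless of its size."""
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(prototypes, query):
    classes = list(prototypes)
    protos = np.stack([prototypes[c] for c in classes])                 # (C, D)
    dists = np.linalg.norm(query[:, None, :] - protos[None], axis=-1)   # (N, C)
    return np.array(classes)[dists.argmin(axis=1)]

# Toy usage with random "embeddings" standing in for the network outputs.
rng = np.random.default_rng(0)
emb, lab = rng.normal(size=(100, 16)), rng.integers(0, 3, size=100)
print(predict(build_prototypes(emb, lab), rng.normal(size=(5, 16))))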
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Coarse2Fine: Fine-grained Text Classification on Coarsely-grained Annotated Data [22.81068960545234]
We introduce a new problem called coarse-to-fine grained classification, which aims to perform fine-grained classification on coarsely annotated data.
Instead of asking for new fine-grained human annotations, we opt to leverage label surface names as the only human guidance.
Our framework uses the fine-tuned generative models to sample pseudo-training data for training the classifier, and bootstraps on real unlabeled data for model refinement.
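One way to picture the pseudo-data step: prompt a generative language model with a fine-grained label surface name and treat its samples as pseudo-training examples for that label. The sketch below uses an off-the-shelf GPT-2 and an invented prompt template purely for illustration; the paper's pipeline fine-tunes the generator and also bootstraps on real unlabeled data, which is omitted here.

# Label-conditioned pseudo-training-data sampling (illustrative sketch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
gen = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sample_pseudo_examples(label_name, n=5, max_new_tokens=40):
    """Sample n short texts conditioned on a fine-grained label surface name."""
    prompt = f"A news headline about {label_name}:"   # hypothetical template
    ids = tok(prompt, return_tensors="pt").input_ids
    out = gen.generate(ids, do_sample=True, top_p=0.95,
                       max_new_tokens=max_new_tokens, num_return_sequences=n,
                       pad_token_id=tok.eos_token_id)
    # Strip the prompt tokens and pair each sample with its label.
    return [(tok.decode(o[ids.shape[1]:], skip_special_tokens=True).strip(), label_name)
            for o in out]

pseudo_train = sample_pseudo_examples("college basketball") + \
               sample_pseudo_examples("tennis")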
arXiv Detail & Related papers (2021-09-22T17:29:01Z)
- Uncertainty-aware Self-training for Text Classification with Few Labels [54.13279574908808]
We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck.
We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network.
We show that our methods, leveraging only 20-30 labeled samples per class per task for training and validation, can perform within 3% of fully supervised pre-trained language models.
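A common way to realize such uncertainty estimates is Monte Carlo dropout: run several stochastic forward passes and keep only pseudo-labeled examples that are both confident and low-variance. The sketch below is a generic version with assumed thresholds and a toy classifier; the paper's exact acquisition and re-weighting scheme may differ.

# MC-dropout-based pseudo-label selection (illustrative sketch).
import torch

@torch.no_grad()
def select_pseudo_labels(model, inputs, passes=10, max_std=0.05, min_conf=0.9):
    model.train()  # keep dropout active for stochastic forward passes
    probs = torch.stack([torch.softmax(model(inputs), dim=-1) for _ in range(passes)])
    mean, std = probs.mean(0), probs.std(0)      # both (N, C)
    conf, pred = mean.max(dim=-1)
    keep = (conf >= min_conf) & (std.gather(1, pred[:, None]).squeeze(1) <= max_std)
    return pred[keep], keep

# Toy usage: a small dropout classifier standing in for a fine-tuned LM head.
net = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                          torch.nn.Dropout(0.3), torch.nn.Linear(64, 4))
labels, mask = select_pseudo_labels(net, torch.randn(100, 32))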
arXiv Detail & Related papers (2020-06-27T08:13:58Z)
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
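The general flavor of full-text scoring can be illustrated by writing each candidate out as complete text together with its context and scoring it with a language model, then ranking candidates by that score. The causal-LM perplexity score and toy example below are assumptions for illustration only, not the paper's scorer or training objective.

# Full-text plausibility ranking with a language-model score (illustrative sketch).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def plausibility(full_text):
    """Higher is more plausible: negative mean token cross-entropy."""
    ids = tok(full_text, return_tensors="pt").input_ids
    return -lm(ids, labels=ids).loss.item()

context = "The man poured water on the campfire."
candidates = ["So the fire went out.", "So the fire grew larger."]
best = max(candidates, key=lambda c: plausibility(f"{context} {c}"))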
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
- Task-Adaptive Clustering for Semi-Supervised Few-Shot Classification [23.913195015484696]
Few-shot learning aims to handle previously unseen tasks using only a small amount of new training data.
In preparing (or meta-training) a few-shot learner, however, massive labeled data are necessary.
In this work, we propose a few-shot learner that can work well under the semi-supervised setting where a large portion of training data is unlabeled.
arXiv Detail & Related papers (2020-03-18T13:50:19Z)
- Document Ranking with a Pretrained Sequence-to-Sequence Model [56.44269917346376]
We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words".
Our approach significantly outperforms an encoder-only model in a data-poor regime.
arXiv Detail & Related papers (2020-03-14T22:29:50Z)
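A sketch of how such target-word relevance scoring can work at inference time: prompt a sequence-to-sequence model with the query and document and read off the probability it assigns to a "relevant" target word, then rank documents by that probability. The prompt template and target words below are assumptions for illustration and may not match the paper's exact setup.

# Seq-to-seq reranking via target-word probability (illustrative sketch).
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()

TRUE_ID = tok.encode("true", add_special_tokens=False)[0]
FALSE_ID = tok.encode("false", add_special_tokens=False)[0]

@torch.no_grad()
def relevance_score(query, doc):
    """Probability mass on the 'true' target word at the first decoding step."""
    prompt = f"Query: {query} Document: {doc} Relevant:"   # assumed template
    inputs = tok(prompt, return_tensors="pt", truncation=True, max_length=512)
    decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    logits = model(**inputs, decoder_input_ids=decoder_input_ids).logits[0, -1]
    probs = torch.softmax(logits[[TRUE_ID, FALSE_ID]], dim=-1)
    return probs[0].item()

docs = ["BERT is a pre-trained transformer encoder.",
        "The recipe calls for two cups of flour."]
ranked = sorted(docs, key=lambda d: relevance_score("what is BERT?", d), reverse=True)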