Rethinking Semi-supervised Learning with Language Models
- URL: http://arxiv.org/abs/2305.13002v1
- Date: Mon, 22 May 2023 13:07:35 GMT
- Title: Rethinking Semi-supervised Learning with Language Models
- Authors: Zhengxiang Shi, Francesco Tonolini, Nikolaos Aletras, Emine Yilmaz,
Gabriella Kazai, Yunlong Jiao
- Abstract summary: Semi-supervised learning (SSL) is a popular setting aiming to effectively utilize unlabelled data to improve model performance.
There are two popular approaches to make use of unlabelled data: Self-training (ST) and Task-adaptive pre-training (TAPT).
- Score: 33.70349754359132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised learning (SSL) is a popular setting aiming to effectively
utilize unlabelled data to improve model performance in downstream natural
language processing (NLP) tasks. Currently, there are two popular approaches to
make use of unlabelled data: Self-training (ST) and Task-adaptive pre-training
(TAPT). ST uses a teacher model to assign pseudo-labels to the unlabelled data,
while TAPT continues pre-training on the unlabelled data before fine-tuning. To
the best of our knowledge, the effectiveness of TAPT in SSL tasks has not been
systematically studied, and no previous work has directly compared TAPT and ST
in terms of their ability to utilize the pool of unlabelled data. In this
paper, we provide an extensive empirical study comparing five state-of-the-art
ST approaches and TAPT across various NLP tasks and data sizes, including in-
and out-of-domain settings. Surprisingly, we find that TAPT is a stronger and
more robust SSL learner than more sophisticated ST approaches, even when using
just a few hundred unlabelled samples or in the presence of domain shifts, and
it tends to bring greater improvements in SSL than in fully-supervised
settings. Our further analysis demonstrates the risks of using
ST approaches when the size of labelled or unlabelled data is small or when
domain shifts exist. We offer a fresh perspective for future SSL research,
suggesting the use of unsupervised pre-training objectives over reliance on
pseudo-labels.
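To make the contrast concrete, here is a minimal sketch of the two SSL routes the abstract describes, written against a Hugging Face transformers-style workflow. The backbone name, hyper-parameters, confidence threshold, and dataset handling are illustrative assumptions, not the paper's exact implementation.

    # Hedged sketch: Task-adaptive pre-training (TAPT) vs. Self-training (ST).
    # Assumes labelled_ds is a list of tokenized examples that each carry a
    # "labels" field; unlabelled_texts is a list of raw strings.
    import torch
    from transformers import (
        AutoTokenizer,
        AutoModelForMaskedLM,
        AutoModelForSequenceClassification,
        DataCollatorForLanguageModeling,
        DataCollatorWithPadding,
        Trainer,
        TrainingArguments,
    )

    MODEL = "roberta-base"  # assumption: any masked-LM backbone works here
    tok = AutoTokenizer.from_pretrained(MODEL)

    def tapt_then_finetune(unlabelled_texts, labelled_ds, num_labels):
        """TAPT: continue masked-LM pre-training on the task's unlabelled
        text, then fine-tune the adapted checkpoint on the labelled data."""
        mlm = AutoModelForMaskedLM.from_pretrained(MODEL)
        mlm_data = [tok(t, truncation=True) for t in unlabelled_texts]
        Trainer(
            model=mlm,
            args=TrainingArguments("tapt", num_train_epochs=3),
            train_dataset=mlm_data,
            data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
        ).train()
        mlm.save_pretrained("tapt-checkpoint")

        clf = AutoModelForSequenceClassification.from_pretrained(
            "tapt-checkpoint", num_labels=num_labels
        )
        Trainer(
            model=clf,
            args=TrainingArguments("tapt-finetune", num_train_epochs=5),
            train_dataset=labelled_ds,
            data_collator=DataCollatorWithPadding(tok),
        ).train()
        return clf

    def self_training(unlabelled_texts, labelled_ds, num_labels, threshold=0.9):
        """ST: a teacher fine-tuned on the labelled data assigns pseudo-labels
        to unlabelled texts; a student is trained on labelled + pseudo-labelled."""
        teacher = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=num_labels)
        Trainer(
            model=teacher,
            args=TrainingArguments("teacher", num_train_epochs=5),
            train_dataset=labelled_ds,
            data_collator=DataCollatorWithPadding(tok),
        ).train()

        teacher.eval()
        pseudo = []
        with torch.no_grad():
            for text in unlabelled_texts:
                probs = teacher(**tok(text, return_tensors="pt", truncation=True)).logits.softmax(-1)[0]
                conf, label = probs.max(dim=0)
                if conf.item() >= threshold:  # keep only confident pseudo-labels
                    pseudo.append({**tok(text, truncation=True), "labels": int(label)})

        student = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=num_labels)
        Trainer(
            model=student,
            args=TrainingArguments("student", num_train_epochs=5),
            train_dataset=list(labelled_ds) + pseudo,
            data_collator=DataCollatorWithPadding(tok),
        ).train()
        return student

The confidence threshold inside self_training is the kind of pseudo-label heuristic the abstract flags as risky when labelled or unlabelled data is scarce or a domain shift exists; tapt_then_finetune sidesteps it by relying only on the unsupervised masked-LM objective.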
Related papers
- A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z)
- Co-training for Low Resource Scientific Natural Language Inference [65.37685198688538]
We propose a novel co-training method that assigns importance weights to the distantly supervised labels based on the training dynamics of the classifiers.
By assigning importance weights instead of filtering out examples based on an arbitrary threshold on the predicted confidence, we maximize the usage of automatically labeled data (see the generic sketch of confidence weighting after this list).
The proposed method obtains an improvement of 1.5% in Macro F1 over the distant supervision baseline, and substantial improvements over several other strong SSL baselines.
arXiv Detail & Related papers (2024-06-20T18:35:47Z)
- Learning with Partial Labels from Semi-supervised Perspective [28.735185883881172]
Partial Label (PL) learning refers to the task of learning from partially labeled data.
We propose a novel PL learning method, namely Partial Label learning with Semi-Supervised Perspective (PLSP).
PLSP significantly outperforms existing PL baseline methods, especially at high ambiguity levels.
arXiv Detail & Related papers (2022-11-24T15:12:16Z)
- Learning to Infer from Unlabeled Data: A Semi-supervised Learning Approach for Robust Natural Language Inference [47.293189105900524]
Natural Language Inference (NLI) aims at predicting the relation between a pair of sentences (premise and hypothesis) as entailment, contradiction or semantic independence.
Deep learning models have shown promising performance for NLI in recent years, but they rely on large-scale, expensive human-annotated datasets.
Semi-supervised learning (SSL) is a popular technique for reducing the reliance on human annotation by leveraging unlabeled data for training.
arXiv Detail & Related papers (2022-11-05T20:34:08Z)
- Pseudo-Labeled Auto-Curriculum Learning for Semi-Supervised Keypoint Localization [88.74813798138466]
Localizing keypoints of an object is a basic visual problem.
Supervised learning of a keypoint localization network often requires a large amount of data.
We propose to automatically select reliable pseudo-labeled samples with a series of dynamic thresholds.
arXiv Detail & Related papers (2022-01-21T09:51:58Z)
- Self-supervised Learning is More Robust to Dataset Imbalance [65.84339596595383]
We investigate self-supervised learning under dataset imbalance.
Off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations.
We devise a re-weighted regularization technique that consistently improves the SSL representation quality on imbalanced datasets.
arXiv Detail & Related papers (2021-10-11T06:29:56Z)
- Self-supervised Regularization for Text Classification [14.824073299035675]
In many real-world problems, the number of texts for training classification models is limited, which renders these models prone to overfitting.
We propose SSL-Reg, a data-dependent regularization approach based on self-supervised learning (SSL).
SSL is an unsupervised learning approach which defines auxiliary tasks on input data without using any human-provided labels.
arXiv Detail & Related papers (2021-03-09T05:35:52Z)
- Self-Tuning for Data-Efficient Deep Learning [75.34320911480008]
Self-Tuning is a novel approach to enable data-efficient deep learning.
It unifies the exploration of labeled and unlabeled data and the transfer of a pre-trained model.
It outperforms its SSL and TL counterparts on five tasks by substantial margins.
arXiv Detail & Related papers (2021-02-25T14:56:19Z)
- Matching Distributions via Optimal Transport for Semi-Supervised Learning [31.533832244923843]
Semi-Supervised Learning (SSL) approaches have become an influential framework for making use of unlabeled data.
We propose a new approach that adopts an Optimal Transport (OT) technique as a metric of similarity between discrete empirical probability measures.
We have evaluated the proposed method against state-of-the-art SSL algorithms on standard datasets to demonstrate its effectiveness.
arXiv Detail & Related papers (2020-12-04T11:15:14Z)
- Transfer Learning or Self-supervised Learning? A Tale of Two Pretraining Paradigms [36.04356511882304]
Self-supervised learning (SSL) has demonstrated promising results on a wide range of applications.
There has not been a clear understanding of what properties of data and tasks make one approach outperform the other.
arXiv Detail & Related papers (2020-06-19T05:21:00Z)
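As a generic aside to the co-training and pseudo-labeled auto-curriculum entries above, the toy snippet below contrasts hard confidence thresholding with soft importance weighting of teacher predictions; the class count, threshold, and weighting scheme are illustrative assumptions rather than any listed paper's method.

    # Toy illustration: two generic ways to consume teacher confidences over
    # unlabelled examples when building a pseudo-labelled training set.
    import numpy as np

    rng = np.random.default_rng(0)
    teacher_probs = rng.dirichlet(np.ones(3), size=8)   # fake teacher outputs, 3 classes
    confidence = teacher_probs.max(axis=1)
    pseudo_labels = teacher_probs.argmax(axis=1)

    # (a) Hard selection: keep only examples above a fixed confidence threshold,
    # discarding the rest of the automatically labelled data.
    keep = confidence >= 0.9

    # (b) Soft weighting: keep every example but scale its loss contribution by a
    # confidence-derived weight, so no automatically labelled data is thrown away.
    spread = confidence.max() - confidence.min()
    weights = (confidence - confidence.min()) / (spread + 1e-8)

    print("kept by threshold:", int(keep.sum()), "of", len(keep))
    print("per-example weights:", np.round(weights, 2))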
This list is automatically generated from the titles and abstracts of the papers on this site.