DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision
- URL: http://arxiv.org/abs/2205.05575v1
- Date: Wed, 11 May 2022 15:43:48 GMT
- Title: DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision
- Authors: Erik Wallin, Lennart Svensson, Fredrik Kahl, Lars Hammarstrand
- Abstract summary: Semi-supervised learning (SSL) is becoming increasingly popular.
We propose a new SSL algorithm, DoubleMatch, which combines the pseudo-labeling technique with a self-supervised loss.
We show that this method achieves state-of-the-art accuracies on multiple benchmark datasets while also reducing training times compared to existing SSL methods.
- Score: 16.757456364034798
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Following the success of supervised learning, semi-supervised learning (SSL)
is now becoming increasingly popular. SSL is a family of methods that, in
addition to a labeled training set, also use a sizable collection of unlabeled
data for fitting a model. Most of the recent successful SSL methods are based
on pseudo-labeling approaches: letting confident model predictions act as
training labels. While these methods have shown impressive results on many
benchmark datasets, a drawback of this approach is that not all unlabeled data
are used during training. We propose a new SSL algorithm, DoubleMatch, which
combines the pseudo-labeling technique with a self-supervised loss, enabling
the model to utilize all unlabeled data in the training process. We show that
this method achieves state-of-the-art accuracies on multiple benchmark datasets
while also reducing training times compared to existing SSL methods. Code is
available at https://github.com/walline/doublematch.
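As a rough illustration of the combination the abstract describes, the sketch below pairs a FixMatch-style confidence-gated pseudo-label loss with a self-supervised similarity loss that touches every unlabeled example. The function name, projection head, threshold, and loss weights are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def doublematch_unlabeled_losses(logits_w, logits_s, feats_w, feats_s,
                                 proj_head, tau=0.95, w_p=1.0, w_f=1.0):
    """Combine a confidence-gated pseudo-label loss with a self-supervised
    feature-similarity loss so that all unlabeled data contribute a gradient.

    logits_w, logits_s: classifier outputs for weakly/strongly augmented views.
    feats_w, feats_s: backbone features for the same two views.
    proj_head: a small head mapping strong-view features onto weak-view
        features (an illustrative assumption, not the paper's exact design).
    """
    # Pseudo-label loss: confident weak-view predictions act as training
    # labels for the strong view; only examples above the threshold count.
    probs_w = torch.softmax(logits_w.detach(), dim=-1)
    conf, pseudo = probs_w.max(dim=-1)
    mask = (conf >= tau).float()
    loss_p = (F.cross_entropy(logits_s, pseudo, reduction="none") * mask).mean()

    # Self-supervised loss: pull projected strong-view features toward the
    # (stop-gradient) weak-view features; this term uses *every* example.
    loss_f = -F.cosine_similarity(proj_head(feats_s), feats_w.detach(),
                                  dim=-1).mean()

    return w_p * loss_p + w_f * loss_f
```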
Related papers
- FlatMatch: Bridging Labeled Data and Unlabeled Data with Cross-Sharpness for Semi-Supervised Learning [73.13448439554497]
Semi-Supervised Learning (SSL) has been an effective way to leverage abundant unlabeled data with extremely scarce labeled data.
Most SSL methods are commonly based on instance-wise consistency between different data transformations.
We propose FlatMatch, which minimizes a cross-sharpness measure to ensure consistent learning performance between the labeled and unlabeled data.
arXiv Detail & Related papers (2023-10-25T06:57:59Z)
- Boosting Semi-Supervised Learning by bridging high and low-confidence predictions [4.18804572788063]
Pseudo-labeling is a crucial technique in semi-supervised learning (SSL).
We propose a new method called ReFixMatch, which aims to utilize all of the unlabeled data during training.
arXiv Detail & Related papers (2023-08-15T00:27:18Z)
- ProtoCon: Pseudo-label Refinement via Online Clustering and Prototypical Consistency for Efficient Semi-supervised Learning [60.57998388590556]
ProtoCon is a novel method for confidence-based pseudo-labeling.
The online nature of ProtoCon allows it to utilize the label history of the entire dataset in one training cycle.
It delivers significant gains and faster convergence over state-of-the-art methods.
arXiv Detail & Related papers (2023-03-22T23:51:54Z)
- Improving Open-Set Semi-Supervised Learning with Self-Supervision [13.944469874692459]
Open-set semi-supervised learning (OSSL) embodies a practical scenario within semi-supervised learning.
We propose an OSSL framework that facilitates learning from all unlabeled data through self-supervision.
Our method yields state-of-the-art results on many of the evaluated benchmark problems.
arXiv Detail & Related papers (2023-01-24T16:46:37Z)
- Pseudo-Labeling Based Practical Semi-Supervised Meta-Training for Few-Shot Learning [93.63638405586354]
We propose a simple and effective meta-training framework, called pseudo-labeling based meta-learning (PLML).
First, we train a classifier via common semi-supervised learning (SSL) and use it to obtain pseudo-labels for the unlabeled data.
We build few-shot tasks from labeled and pseudo-labeled data and design a novel finetuning method with feature smoothing and noise suppression.
arXiv Detail & Related papers (2022-07-14T10:53:53Z)
- OpenLDN: Learning to Discover Novel Classes for Open-World Semi-Supervised Learning [110.40285771431687]
Semi-supervised learning (SSL) is one of the dominant approaches to address the annotation bottleneck of supervised learning.
Recent SSL methods can effectively leverage a large repository of unlabeled data to improve performance while relying on a small set of labeled data.
This work introduces OpenLDN, which utilizes a pairwise similarity loss to discover novel classes.
arXiv Detail & Related papers (2022-07-05T18:51:05Z)
- Dash: Semi-Supervised Learning with Dynamic Thresholding [72.74339790209531]
We propose Dash, a semi-supervised learning (SSL) approach that uses unlabeled examples to train models.
Dash adaptively selects which unlabeled examples to train on via a threshold that changes over the course of training (see the sketch after this list).
arXiv Detail & Related papers (2021-09-01T23:52:29Z)
- Rethinking Re-Sampling in Imbalanced Semi-Supervised Learning [26.069534478556527]
Semi-Supervised Learning (SSL) has shown its strong ability in utilizing unlabeled data when labeled data is scarce.
Most SSL algorithms work under the assumption that the class distributions are balanced in both training and test sets.
In this work, we consider the problem of SSL on class-imbalanced data, which better reflects real-world situations.
arXiv Detail & Related papers (2021-06-01T03:58:18Z)
- FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence [93.91751021370638]
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance.
In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling.
Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images.
arXiv Detail & Related papers (2020-01-21T18:32:27Z)
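Since several entries above build on FixMatch's recipe, a minimal sketch of its unlabeled loss may help: pseudo-labels come from weakly-augmented views, and only predictions above a confidence threshold (0.95 in the paper) are kept. The helper name and batch handling are illustrative.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, tau=0.95):
    """Pseudo-label from the weak view, train on the strong view, and
    keep only examples whose weak-view confidence clears the threshold."""
    with torch.no_grad():
        probs = torch.softmax(model(x_weak), dim=-1)  # weak-view predictions
        conf, pseudo = probs.max(dim=-1)              # confidence + hard label
        mask = (conf >= tau).float()                  # drop low-confidence examples
    per_example = F.cross_entropy(model(x_strong), pseudo, reduction="none")
    return (per_example * mask).mean()
```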
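Dash (above) replaces a fixed confidence criterion with a dynamic one for selecting unlabeled examples. The sketch below captures one plausible reading, a loss threshold that decays over training; the decay schedule and constants are illustrative assumptions, not the paper's derived schedule.

```python
import torch
import torch.nn.functional as F

def dash_style_unlabeled_loss(logits_s, pseudo, rho_t):
    """Train only on unlabeled examples whose pseudo-label loss falls
    below the current threshold rho_t."""
    per_example = F.cross_entropy(logits_s, pseudo, reduction="none")
    mask = (per_example <= rho_t).float()
    return (per_example * mask).mean()

def rho_schedule(step, rho_0=2.0, gamma=1.1):
    """Illustrative decreasing threshold: starts permissive, then tightens
    as the model improves (gamma > 1 gives geometric decay)."""
    return rho_0 * gamma ** (-step)
```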