Better Self-training for Image Classification through Self-supervision
- URL: http://arxiv.org/abs/2109.00778v1
- Date: Thu, 2 Sep 2021 08:24:41 GMT
- Title: Better Self-training for Image Classification through Self-supervision
- Authors: Attaullah Sahito, Eibe Frank, and Bernhard Pfahringer
- Abstract summary: Self-supervision is learning without manual supervision by solving an automatically-generated pretext task.
This paper investigates three ways of incorporating self-supervision into self-training to improve accuracy in image classification.
- Score: 3.492636597449942
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-training is a simple semi-supervised learning approach: Unlabelled
examples that attract high-confidence predictions are labelled with their
predictions and added to the training set, with this process being repeated
multiple times. Recently, self-supervision -- learning without manual
supervision by solving an automatically-generated pretext task -- has gained
prominence in deep learning. This paper investigates three different ways of
incorporating self-supervision into self-training to improve accuracy in image
classification: self-supervision as pretraining only, self-supervision
performed exclusively in the first iteration of self-training, and
self-supervision added to every iteration of self-training. Empirical results
on the SVHN, CIFAR-10, and PlantVillage datasets, using both training from
scratch and ImageNet-pretrained weights, show that applying self-supervision
only in the first iteration of self-training can greatly improve accuracy, for
a modest increase in computation time.
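Below is a minimal PyTorch sketch of the variant the abstract highlights: self-supervision applied only in the first self-training iteration. The rotation-prediction pretext task, the 0.9 confidence threshold, the toy CNN, and the random placeholder data are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: self-training with a self-supervised pretext task in iteration 0 only.
# All hyperparameters, the backbone, and the rotation pretext are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy backbone with two heads: a 4-way rotation (pretext) head and a
    num_classes-way classification head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rotation_head = nn.Linear(32, 4)
        self.class_head = nn.Linear(32, num_classes)

    def forward(self, x, pretext: bool = False):
        h = self.features(x)
        return self.rotation_head(h) if pretext else self.class_head(h)

def rotation_batch(x):
    """Automatically generated pretext task: rotate each image by
    0/90/180/270 degrees and label it with the rotation index."""
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return torch.cat(rotations), labels

def self_train(model, labelled, unlabelled, iterations=3, threshold=0.9):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_lab, y_lab = labelled
    for it in range(iterations):
        # Self-supervision only in the first self-training iteration.
        if it == 0:
            xr, yr = rotation_batch(torch.cat([x_lab, unlabelled]))
            for _ in range(5):
                opt.zero_grad()
                F.cross_entropy(model(xr, pretext=True), yr).backward()
                opt.step()
        # Supervised training on the current labelled set.
        for _ in range(5):
            opt.zero_grad()
            F.cross_entropy(model(x_lab), y_lab).backward()
            opt.step()
        # Pseudo-label high-confidence unlabelled examples and move them
        # into the labelled set for the next iteration.
        with torch.no_grad():
            probs = F.softmax(model(unlabelled), dim=1)
            conf, pseudo = probs.max(dim=1)
            keep = conf >= threshold
        x_lab = torch.cat([x_lab, unlabelled[keep]])
        y_lab = torch.cat([y_lab, pseudo[keep]])
        unlabelled = unlabelled[~keep]
    return model

# Random tensors standing in for e.g. CIFAR-10-sized images.
model = self_train(SmallCNN(),
                   labelled=(torch.randn(64, 3, 32, 32),
                             torch.randint(0, 10, (64,))),
                   unlabelled=torch.randn(256, 3, 32, 32))
```

The other two variants studied in the paper would correspond to running the pretext step before any self-training (pretraining only) or inside every iteration (drop the `if it == 0` guard).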
Related papers
- A Comparative Study of Pre-training and Self-training [0.40964539027092917]
We propose an ensemble method to empirically study all feasible training paradigms combining pre-training, self-training, and fine-tuning.
We conduct experiments on six datasets, four data augmentation methods, and imbalanced data for sentiment analysis and natural language inference tasks.
Our findings confirm that the pre-training and fine-tuning paradigm yields the best overall performance.
arXiv Detail & Related papers (2024-09-04T14:30:13Z) - Analyzing the Sample Complexity of Self-Supervised Image Reconstruction Methods [24.840134419242414]
Supervised training of deep neural networks on pairs of clean images and noisy measurements achieves state-of-the-art performance for many image reconstruction tasks.
Self-supervised methods enable training based on noisy measurements only, without clean images.
We analytically show that a model trained with such self-supervised training is as good as the same model trained in a supervised fashion.
arXiv Detail & Related papers (2023-05-30T14:42:04Z) - Self-Supervised Multi-Object Tracking For Autonomous Driving From Consistency Across Timescales [53.55369862746357]
Self-supervised multi-object trackers have tremendous potential as they enable learning from raw domain-specific data.
However, their re-identification accuracy still falls short compared to their supervised counterparts.
We propose a training objective that enables self-supervised learning of re-identification features from multiple sequential frames.
arXiv Detail & Related papers (2023-04-25T20:47:29Z) - Self-Distillation for Further Pre-training of Transformers [83.84227016847096]
We propose self-distillation as a regularization for a further pre-training stage.
We empirically validate the efficacy of self-distillation on a variety of benchmark datasets for image and text classification tasks.
arXiv Detail & Related papers (2022-09-30T02:25:12Z) - Improving In-Context Few-Shot Learning via Self-Supervised Training [48.801037246764935]
We propose to use self-supervision in an intermediate training stage between pretraining and downstream few-shot usage.
We find that the intermediate self-supervision stage produces models that outperform strong baselines.
arXiv Detail & Related papers (2022-05-03T18:01:07Z) - SLIP: Self-supervision meets Language-Image Pre-training [79.53764315471543]
We study whether self-supervised learning can aid in the use of language supervision for visual representation learning.
We introduce SLIP, a multi-task learning framework for combining self-supervised learning and CLIP pre-training.
We find that SLIP enjoys the best of both worlds: better performance than either self-supervision or language supervision alone.
arXiv Detail & Related papers (2021-12-23T18:07:13Z) - Self-Supervised Pretraining Improves Self-Supervised Pretraining [83.1423204498361]
Self-supervised pretraining requires expensive and lengthy computation and large amounts of data, and is sensitive to data augmentation.
This paper explores Hierarchical PreTraining (HPT), which decreases convergence time and improves accuracy by initializing the pretraining process with an existing pretrained model.
We show HPT converges up to 80x faster, improves accuracy across tasks, and improves the robustness of the self-supervised pretraining process to changes in the image augmentation policy or amount of pretraining data.
arXiv Detail & Related papers (2021-03-23T17:37:51Z) - Bootstrapped Self-Supervised Training with Monocular Video for Semantic Segmentation and Depth Estimation [11.468537169201083]
We formalize a bootstrapped self-supervised learning problem where a system is initially bootstrapped with supervised training on a labeled dataset.
In this work, we leverage temporal consistency between frames in monocular video to perform this bootstrapped self-supervised training.
In addition, we show that the bootstrapped self-supervised training framework can help a network learn depth estimation better than pure supervised training or self-supervised training.
arXiv Detail & Related papers (2021-03-19T21:28:58Z) - Self-supervised self-supervision by combining deep learning and probabilistic logic [10.515109852315168]
We propose Self-Supervised Self-Supervision (S4) to learn new self-supervision automatically.
S4 is able to automatically propose accurate self-supervision and can often nearly match the accuracy of supervised methods with a tiny fraction of the human effort.
arXiv Detail & Related papers (2020-12-23T04:06:41Z) - How Well Do Self-Supervised Models Transfer? [92.16372657233394]
We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks.
We find ImageNet Top-1 accuracy to be highly correlated with transfer to many-shot recognition.
No single self-supervised method dominates overall, suggesting that universal pre-training is still unsolved.
arXiv Detail & Related papers (2020-11-26T16:38:39Z)