STAR: Noisy Semi-Supervised Transfer Learning for Visual Classification
- URL: http://arxiv.org/abs/2108.08362v1
- Date: Wed, 18 Aug 2021 19:35:05 GMT
- Title: STAR: Noisy Semi-Supervised Transfer Learning for Visual Classification
- Authors: Hasib Zunair, Yan Gobeil, Samuel Mercier, A. Ben Hamza
- Abstract summary: Semi-supervised learning (SSL) has proven to be effective at leveraging large-scale unlabeled data.
Recent SSL methods rely on unlabeled image data at a scale of billions to work well.
We propose noisy semi-supervised transfer learning, which integrates transfer learning and self-training with noisy student.
- Score: 0.8662293148437356
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Semi-supervised learning (SSL) has proven to be effective at leveraging
large-scale unlabeled data to mitigate the dependency on labeled data in order
to learn better models for visual recognition and classification tasks.
However, recent SSL methods rely on unlabeled image data at a scale of billions
to work well. This becomes infeasible, in terms of runtime, memory, and data
acquisition, for tasks with relatively little unlabeled data. To address this issue,
we propose noisy semi-supervised transfer learning, an efficient SSL approach
that integrates transfer learning and self-training with noisy student into a
single framework, which is tailored for tasks that can leverage unlabeled image
data on a scale of thousands. We evaluate our method on both binary and
multi-class classification tasks, where the objective is to identify whether an
image displays people practicing sports or the type of sport, as well as to
identify the pose from a pool of popular yoga poses. Extensive experiments and
ablation studies demonstrate that by leveraging unlabeled data, our proposed
framework significantly improves visual classification, especially in
multi-class classification settings compared to state-of-the-art methods.
Moreover, incorporating transfer learning not only improves classification
performance, but also requires 6x less compute time and 5x less memory. We also
show that our method boosts robustness of visual classification models, even
without specifically optimizing for adversarial robustness.
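The self-training loop the abstract describes (teacher on labeled data, pseudo-labels for unlabeled data, noised student on the combined set) can be sketched in miniature. This is an illustrative stand-in only: the nearest-centroid classifier, the toy data, and the uniform input noise below are assumptions for demonstration, not the paper's pretrained CNN backbones or its exact noise model.

```python
import random

def train_centroid(points, labels):
    """Fit a nearest-centroid classifier: one mean vector per class."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = [s + v for s, v in zip(sums.get(y, [0.0] * len(x)), x)]
        counts[y] = counts.get(y, 0) + 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(model, x):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(model[c], x))
    return min(model, key=dist)

def add_noise(x, scale=0.1, rng=random):
    """Input noise injected when training the student (cf. noisy student)."""
    return [v + rng.uniform(-scale, scale) for v in x]

def noisy_student_round(labeled, unlabeled, rng):
    xs, ys = zip(*labeled)
    teacher = train_centroid(xs, ys)                        # 1. teacher on labeled data
    pseudo = [(x, predict(teacher, x)) for x in unlabeled]  # 2. pseudo-label unlabeled data
    combined = list(labeled) + pseudo
    noisy = [(add_noise(x, rng=rng), y) for x, y in combined]  # 3. noise the inputs
    xs2, ys2 = zip(*noisy)
    return train_centroid(xs2, ys2)                         # 4. student on combined set

rng = random.Random(0)
labeled = [([0.0, 0.0], "a"), ([1.0, 1.0], "b")]
unlabeled = [[0.1, -0.1], [0.9, 1.1]]
student = noisy_student_round(labeled, unlabeled, rng)
print(predict(student, [0.05, 0.0]))  # → "a"
```

In the paper's framework, step 4 would additionally initialize the student from transfer-learned weights, which is where the reported compute and memory savings come from.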
Related papers
- Two-Step Active Learning for Instance Segmentation with Uncertainty and
Diversity Sampling [20.982992381790034]
We propose a post-hoc active learning algorithm that integrates uncertainty-based sampling with diversity-based sampling.
Our proposed algorithm is not only simple and easy to implement, but it also delivers superior performance on various datasets.
arXiv Detail & Related papers (2023-09-28T03:40:30Z)
- CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts the model performance with 10-34% relative improvement with various labeled training data sampling ratios.
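The dual-encoder contrastive objective summarized above can be sketched as a batch-wise InfoNCE-style loss, where each image's own geo-location embedding is the positive and the other locations in the batch are negatives. The `info_nce` function and the 0.1 temperature below are illustrative assumptions, not CSP's exact formulation.

```python
import math

def info_nce(img_emb, loc_emb, temperature=0.1):
    """InfoNCE-style contrastive loss over paired embedding batches.

    img_emb[i] and loc_emb[i] are the matching (positive) pair; all other
    loc_emb entries act as in-batch negatives for image i.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    loss = 0.0
    for i, v in enumerate(img_emb):
        logits = [dot(v, u) / temperature for u in loc_emb]
        m = max(logits)  # subtract max for numerical stability
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_denom - logits[i]  # -log softmax of the positive pair
    return loss / len(img_emb)

# Perfectly aligned pairs give a near-zero loss; mismatched pairs do not.
aligned = info_nce([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
swapped = info_nce([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]])
```

Minimizing such a loss pulls each image embedding toward its own location embedding and away from the others, which is how the location representations are learned from images.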
arXiv Detail & Related papers (2023-05-01T23:11:18Z)
- Domain Adaptive Multiple Instance Learning for Instance-level Prediction of Pathological Images [45.132775668689604]
We propose a new task setting to improve the classification performance of the target dataset without increasing annotation costs.
In order to combine the supervisory information of both methods effectively, we propose a method to create pseudo-labels with high confidence.
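Creating "pseudo-labels with high confidence" is commonly done by keeping only predictions whose top class probability clears a threshold. The sketch below shows that generic idea; the function name and the 0.9 threshold are assumptions, not the paper's exact procedure.

```python
def confident_pseudo_labels(probs, threshold=0.9):
    """Keep only samples whose max class probability clears the threshold.

    probs: list of per-sample class-probability lists.
    Returns (sample_index, argmax_label) pairs for the confident samples.
    """
    kept = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            kept.append((i, p.index(conf)))
    return kept

probs = [[0.95, 0.05], [0.55, 0.45], [0.10, 0.90]]
print(confident_pseudo_labels(probs))  # → [(0, 0), (2, 1)]
```

The discarded low-confidence samples are simply left unlabeled, trading coverage for pseudo-label accuracy.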
arXiv Detail & Related papers (2023-04-07T08:31:06Z)
- Masked Unsupervised Self-training for Zero-shot Image Classification [98.23094305347709]
Masked Unsupervised Self-Training (MUST) is a new approach which leverages two different and complementary sources of supervision: pseudo-labels and raw images.
MUST improves upon CLIP by a large margin and narrows the performance gap between unsupervised and supervised classification.
arXiv Detail & Related papers (2022-06-07T02:03:06Z)
- UniVIP: A Unified Framework for Self-Supervised Visual Pre-training [50.87603616476038]
We propose a novel self-supervised framework to learn versatile visual representations on either single-centric-object or non-iconic dataset.
Massive experiments show that UniVIP pre-trained on non-iconic COCO achieves state-of-the-art transfer performance.
Our method can also exploit single-centric-object datasets such as ImageNet, and outperforms BYOL by 2.5% with the same pre-training epochs in linear probing.
arXiv Detail & Related papers (2022-03-14T10:04:04Z)
- Semi-weakly Supervised Contrastive Representation Learning for Retinal Fundus Images [0.2538209532048867]
We propose a semi-weakly supervised contrastive learning framework for representation learning using semi-weakly annotated images.
We empirically validate the transfer learning performance of SWCL on seven public retinal fundus datasets.
arXiv Detail & Related papers (2021-08-04T15:50:09Z)
- Few-Shot Learning for Image Classification of Common Flora [0.0]
We showcase our results from testing various state-of-the-art transfer learning weights and architectures against comparable state-of-the-art meta-learning approaches to image classification using Model-Agnostic Meta-Learning (MAML).
Our results show that both practices provide adequate performance when the dataset is sufficiently large, but that both struggle to maintain sufficient performance when data sparsity is introduced.
arXiv Detail & Related papers (2021-05-07T03:54:51Z)
- Unsupervised Noisy Tracklet Person Re-identification [100.85530419892333]
We present a novel selective tracklet learning (STL) approach that can train discriminative person re-id models from unlabelled tracklet data.
This avoids the tedious and costly process of exhaustively labelling person image/tracklet true matching pairs across camera views.
Our method is particularly robust against arbitrarily noisy data in raw tracklets, and is therefore scalable to learning discriminative models from unconstrained tracking data.
arXiv Detail & Related papers (2021-01-16T07:31:00Z)
- Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering [10.033658645311188]
We show that ignoring labels altogether for whole epochs intermittently during training can significantly improve performance in the small sample regime.
We demonstrate our method's efficacy in boosting several state-of-the-art SSL algorithms.
arXiv Detail & Related papers (2020-12-01T14:19:14Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
- Automatically Discovering and Learning New Visual Categories with Ranking Statistics [145.89790963544314]
We tackle the problem of discovering novel classes in an image collection given labelled examples of other classes.
We learn a general-purpose clustering model and use the latter to identify the new classes in the unlabelled data.
We evaluate our approach on standard classification benchmarks and outperform current methods for novel category discovery by a significant margin.
arXiv Detail & Related papers (2020-02-13T18:53:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.