A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained
Classification
- URL: http://arxiv.org/abs/2104.00679v1
- Date: Thu, 1 Apr 2021 17:59:41 GMT
- Title: A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained
Classification
- Authors: Jong-Chyi Su and Zezhou Cheng and Subhransu Maji
- Abstract summary: Our benchmark consists of two fine-grained classification datasets obtained by sampling classes from the Aves and Fungi taxonomy.
We find that recently proposed SSL methods provide significant benefits, and can effectively use out-of-class data to improve performance when deep networks are trained from scratch.
Our work suggests that semi-supervised learning with experts on realistic datasets may require different strategies than those currently prevalent in the literature.
- Score: 38.68079253627819
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We evaluate the effectiveness of semi-supervised learning (SSL) on a
realistic benchmark where data exhibits considerable class imbalance and
contains images from novel classes. Our benchmark consists of two fine-grained
classification datasets obtained by sampling classes from the Aves and Fungi
taxonomy. We find that recently proposed SSL methods provide significant
benefits, and can effectively use out-of-class data to improve performance when
deep networks are trained from scratch. Yet their performance pales in
comparison to a transfer learning baseline, an alternative approach for
learning from a few examples. Furthermore, in the transfer setting, while
existing SSL methods provide improvements, the presence of out-of-class data is
often detrimental. In this setting, standard fine-tuning followed by
distillation-based self-training is the most robust. Our work suggests that
semi-supervised learning with experts on realistic datasets may require
different strategies than those currently prevalent in the literature.
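To make the abstract's most robust recipe concrete, the sketch below shows standard fine-tuning followed by distillation-based self-training in PyTorch. It is a minimal illustration under assumed names (teacher, student, the two data loaders) and illustrative hyperparameters, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def distill_self_train(teacher, student, labeled_loader, unlabeled_loader,
                       optimizer, temperature=2.0, alpha=0.5, epochs=10):
    """Distillation-based self-training: a fine-tuned teacher produces soft
    targets on unlabeled images, and a student is trained on the labeled data
    plus those soft targets. All hyperparameters here are illustrative."""
    teacher.eval()
    student.train()
    for _ in range(epochs):
        for (x_l, y_l), x_u in zip(labeled_loader, unlabeled_loader):
            with torch.no_grad():
                soft_targets = F.softmax(teacher(x_u) / temperature, dim=1)
            # Supervised loss on the labeled batch.
            sup_loss = F.cross_entropy(student(x_l), y_l)
            # Distillation loss: match the teacher's softened predictions.
            distill_loss = F.kl_div(
                F.log_softmax(student(x_u) / temperature, dim=1),
                soft_targets, reduction="batchmean") * temperature ** 2
            loss = sup_loss + alpha * distill_loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```

Here the teacher stands for the model obtained by standard fine-tuning on the labeled data; the student distills its soft predictions on the unlabeled pool while still fitting the labeled examples.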
Related papers
- A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate with one another and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z) - Co-training for Low Resource Scientific Natural Language Inference [65.37685198688538]
We propose a novel co-training method that assigns weights to the distantly supervised labels based on the training dynamics of the classifiers.
By assigning importance weights instead of filtering out examples based on an arbitrary threshold on the predicted confidence, we maximize the usage of automatically labeled data.
The proposed method obtains an improvement of 1.5% in Macro F1 over the distant supervision baseline, and substantial improvements over several other strong SSL baselines.
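As a rough illustration of weighting rather than thresholding automatically labeled examples, the sketch below scales each example's loss by a per-example weight. The weight used here is simply the model's confidence in the pseudo-label, a stand-in for the paper's training-dynamics-based weights; model, x, and pseudo_labels are assumed inputs.

```python
import torch
import torch.nn.functional as F

def weighted_pseudo_label_loss(model, x, pseudo_labels):
    """Loss over automatically labeled examples, scaled by importance weights
    instead of filtered by a hard confidence threshold. The weight used here
    (the model's own confidence in the pseudo-label) is only an illustrative
    stand-in for weights derived from the classifiers' training dynamics."""
    logits = model(x)
    with torch.no_grad():
        probs = F.softmax(logits, dim=1)
        # Confidence assigned to each pseudo-label acts as its importance weight.
        weights = probs.gather(1, pseudo_labels.unsqueeze(1)).squeeze(1)
    per_example = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (weights * per_example).mean()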
arXiv Detail & Related papers (2024-06-20T18:35:47Z) - Using Self-supervised Learning Can Improve Model Fairness [10.028637666224093]
Self-supervised learning (SSL) has become the de facto training paradigm of large models.
This study explores the impact of pre-training and fine-tuning strategies on fairness.
We introduce a fairness assessment framework for SSL, comprising five stages: defining dataset requirements, pre-training, fine-tuning with gradual unfreezing, assessing representation similarity conditioned on demographics, and establishing domain-specific evaluation processes.
arXiv Detail & Related papers (2024-06-04T14:38:30Z) - Reinforcement Learning-Guided Semi-Supervised Learning [20.599506122857328]
We propose a novel Reinforcement Learning Guided SSL method, RLGSSL, that formulates SSL as a one-armed bandit problem.
RLGSSL incorporates a carefully designed reward function that balances the use of labeled and unlabeled data to enhance generalization performance.
We demonstrate the effectiveness of RLGSSL through extensive experiments on several benchmark datasets and show that our approach achieves consistently superior performance compared to state-of-the-art SSL methods.
arXiv Detail & Related papers (2024-05-02T21:52:24Z) - Self-supervised visual learning in the low-data regime: a comparative evaluation [40.27083924454058]
Self-Supervised Learning (SSL) is a robust training methodology for contemporary Deep Neural Networks (DNNs).
This work introduces a taxonomy of modern visual SSL methods, accompanied by detailed explanations and insights regarding the main categories of approaches.
For domain-specific downstream tasks, in-domain low-data SSL pretraining outperforms the common approach of large-scale pretraining.
arXiv Detail & Related papers (2024-04-26T07:23:14Z) - Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud
Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning from the classifier in an alternating optimization manner to shift the biased decision boundary effectively.
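A generic sketch of what alternating (decoupled) optimization of features and classifier can look like; the freezing schedule and loss below are assumptions for illustration, not the paper's point-cloud segmentation procedure.

```python
import torch.nn.functional as F

def decoupled_step(backbone, classifier, x, y, opt_feat, opt_cls):
    """One illustrative round of decoupled optimization: update the feature
    backbone with the classifier frozen, then update the classifier with the
    backbone frozen. A generic sketch, not the paper's exact procedure."""
    # Step 1: update features while the classifier is held fixed.
    for p in classifier.parameters():
        p.requires_grad_(False)
    loss = F.cross_entropy(classifier(backbone(x)), y)
    opt_feat.zero_grad()
    loss.backward()
    opt_feat.step()
    for p in classifier.parameters():
        p.requires_grad_(True)

    # Step 2: update the classifier while the backbone is held fixed.
    for p in backbone.parameters():
        p.requires_grad_(False)
    loss = F.cross_entropy(classifier(backbone(x)), y)
    opt_cls.zero_grad()
    loss.backward()
    opt_cls.step()
    for p in backbone.parameters():
        p.requires_grad_(True)
```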
arXiv Detail & Related papers (2024-01-13T04:16:40Z) - Pruning the Unlabeled Data to Improve Semi-Supervised Learning [17.62242617965356]
We present PruneSSL, a technique for selectively removing examples from the original unlabeled dataset to enhance its separability.
Although PruneSSL reduces the quantity of available training data for the learner, it significantly improves the performance of various competitive SSL algorithms.
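To illustrate the idea of pruning the unlabeled pool, the sketch below scores each unlabeled example by the model's prediction margin and keeps only the most confidently separated fraction. The margin criterion and keep_fraction are assumptions; PruneSSL's actual separability criterion differs.

```python
import torch
import torch.nn.functional as F

def prune_unlabeled(model, unlabeled_x, keep_fraction=0.8):
    """Keep only the unlabeled examples the model separates most confidently,
    scored by the top-1 minus top-2 probability margin. This is an illustrative
    stand-in for PruneSSL's separability-based selection."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_x), dim=1)
        top2 = probs.topk(2, dim=1).values
        margin = top2[:, 0] - top2[:, 1]
    k = int(keep_fraction * len(unlabeled_x))
    keep_idx = margin.argsort(descending=True)[:k]
    return unlabeled_x[keep_idx]
```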
arXiv Detail & Related papers (2023-08-27T09:45:41Z) - Improving Open-Set Semi-Supervised Learning with Self-Supervision [13.944469874692459]
Open-set semi-supervised learning (OSSL) is a practical semi-supervised setting in which the unlabeled data may contain classes absent from the labeled set.
We propose an OSSL framework that facilitates learning from all unlabeled data through self-supervision.
Our method yields state-of-the-art results on many of the evaluated benchmark problems.
arXiv Detail & Related papers (2023-01-24T16:46:37Z) - Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for
Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z) - On Data-Augmentation and Consistency-Based Semi-Supervised Learning [77.57285768500225]
Recently proposed consistency-based Semi-Supervised Learning (SSL) methods have advanced the state of the art in several SSL tasks.
Despite these advances, the understanding of these methods is still relatively limited.
arXiv Detail & Related papers (2021-01-18T10:12:31Z)