Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector
- URL: http://arxiv.org/abs/2310.15764v1
- Date: Tue, 24 Oct 2023 12:11:19 GMT
- Title: Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector
- Authors: Khanh-Binh Nguyen
- Abstract summary: We propose a simple method named Ensemble Projectors Aided for Semi-supervised Learning (EPASS).
Unlike standard methods, EPASS stores the ensemble embeddings from multiple projectors in memory banks.
EPASS improves generalization, strengthens feature representation, and boosts performance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent studies on semi-supervised learning (SSL) have achieved great success.
Despite their promising performance, current state-of-the-art methods tend
toward increasingly complex designs at the cost of introducing more network
components and additional training procedures. In this paper, we propose a
simple method named Ensemble Projectors Aided for Semi-supervised Learning
(EPASS), which focuses mainly on improving the learned embeddings to boost the
performance of the existing contrastive joint-training semi-supervised learning
frameworks. Unlike standard methods, where the learned embeddings from one
projector are stored in memory banks to be used with contrastive learning,
EPASS stores the ensemble embeddings from multiple projectors in memory banks.
As a result, EPASS improves generalization, strengthens feature representation,
and boosts performance. For instance, EPASS improves strong baselines for
semi-supervised learning by 39.47%/31.39%/24.70% top-1 error rate, while
using only 100k/1%/10% of labeled data for SimMatch, and achieves
40.24%/32.64%/25.90% top-1 error rate for CoMatch on the ImageNet dataset.
These improvements are consistent across methods, network architectures, and
datasets, proving the general effectiveness of the proposed method. Code is
available at https://github.com/beandkay/EPASS.
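The mechanism EPASS describes is compact enough to sketch. The following is a minimal PyTorch sketch of the ensemble-projector idea, not the authors' implementation: the projector count, layer widths, and the averaging-plus-renormalization rule are illustrative assumptions (the exact EPASS configuration is in the repository linked above).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EnsembleProjector(nn.Module):
        """Several projection heads over one backbone feature; their
        L2-normalized outputs are averaged into a single ensemble
        embedding that would be pushed into the memory bank."""
        def __init__(self, in_dim=2048, hid_dim=2048, out_dim=128, num_projectors=3):
            super().__init__()
            # num_projectors is an assumed hyperparameter, not the paper's value
            self.projectors = nn.ModuleList([
                nn.Sequential(nn.Linear(in_dim, hid_dim),
                              nn.ReLU(inplace=True),
                              nn.Linear(hid_dim, out_dim))
                for _ in range(num_projectors)
            ])

        def forward(self, feats):
            embs = [F.normalize(p(feats), dim=1) for p in self.projectors]
            # Ensemble by averaging, then re-normalize onto the unit sphere
            return F.normalize(torch.stack(embs, dim=0).mean(dim=0), dim=1)

    # Usage: enqueue ensemble embeddings instead of single-projector ones.
    backbone_feats = torch.randn(8, 2048)             # e.g. pooled ResNet-50 features
    projector = EnsembleProjector()
    memory_bank = projector(backbone_feats).detach()  # stored for contrastive pairs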
Related papers
- Weighted Ensemble Self-Supervised Learning [67.24482854208783]
Ensembling has proven to be a powerful technique for boosting model performance.
We develop a framework that permits data-dependent weighted cross-entropy losses.
Our method outperforms the baselines on multiple evaluation metrics on ImageNet-1K.
arXiv Detail & Related papers (2022-11-18T02:00:17Z)
- Class-Aware Contrastive Semi-Supervised Learning [51.205844705156046]
We propose a general method named Class-aware Contrastive Semi-Supervised Learning (CCSSL) to improve pseudo-label quality and enhance the model's robustness in the real-world setting.
Our proposed CCSSL has significant performance improvements over the state-of-the-art SSL methods on the standard datasets CIFAR100 and STL10.
arXiv Detail & Related papers (2022-03-04T12:18:23Z)
- Contextualized Spatio-Temporal Contrastive Learning with Self-Supervision [106.77639982059014]
We present the ConST-CL framework to effectively learn spatio-temporally fine-grained representations.
We first design a region-based self-supervised task which requires the model to learn to transform instance representations from one view to another guided by context features.
We then introduce a simple design that effectively reconciles the simultaneous learning of both holistic and local representations.
arXiv Detail & Related papers (2021-12-09T19:13:41Z)
- MIO: Mutual Information Optimization using Self-Supervised Binary Contrastive Learning [19.5917119072985]
We model contrastive learning as a binary classification problem that predicts whether a pair is positive or not.
The proposed method outperforms state-of-the-art algorithms on benchmark datasets such as STL-10, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2021-11-24T17:51:29Z)
- To be Critical: Self-Calibrated Weakly Supervised Learning for Salient Object Detection [95.21700830273221]
Weakly-supervised salient object detection (WSOD) aims to develop saliency models using image-level annotations.
We propose a self-calibrated training strategy by explicitly establishing a mutual calibration loop between pseudo labels and network predictions.
We show that even a much smaller dataset with well-matched annotations can help models achieve better performance and generalizability.
arXiv Detail & Related papers (2021-09-04T02:45:22Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Building One-Shot Semi-supervised (BOSS) Learning up to Fully Supervised Performance [0.0]
We show the potential for building one-shot semi-supervised (BOSS) learning on CIFAR-10 and SVHN.
Our method combines class prototype refining, class balancing, and self-training.
Rigorous empirical evaluations provide evidence that labeling large datasets is not necessary for training deep neural networks.
arXiv Detail & Related papers (2020-06-16T17:56:00Z)
- Generalized Reinforcement Meta Learning for Few-Shot Optimization [3.7675996866306845]
We present a generic and flexible Reinforcement Learning (RL) based meta-learning framework for the problem of few-shot learning.
Our framework could be easily extended to do network architecture search.
arXiv Detail & Related papers (2020-05-04T03:21:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.