Semi-supervised Image Classification with Grad-CAM Consistency
- URL: http://arxiv.org/abs/2108.13673v1
- Date: Tue, 31 Aug 2021 08:26:35 GMT
- Title: Semi-supervised Image Classification with Grad-CAM Consistency
- Authors: Juyong Lee, Seunghyuk Cho
- Abstract summary: We present a variant of consistency training with a Grad-CAM consistency loss.
Our method improved the baseline ResNet model by at most 1.44% and by 0.31 $\pm$ 0.59 %p accuracy on average.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consistency training, which exploits both supervised and unsupervised
learning with different augmentations on images, is an effective method of
utilizing unlabeled data in a semi-supervised learning (SSL) manner. Here, we
present another version of the method with a Grad-CAM consistency loss, so it can
be utilized to train models with better generalization and adjustability. We
show that our method improved the baseline ResNet model by at most 1.44% and by
0.31 $\pm$ 0.59 %p accuracy on average on the CIFAR-10 dataset. We conducted an
ablation study comparing against using only pseudo-labels for consistency
training. We also argue that our method can adapt to different environments
when targeted at different units in the model. The code is available at:
https://github.com/gimme1dollar/gradcam-consistency-semi-sup.
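The core idea of the abstract can be sketched as follows: penalize disagreement between the augmented Grad-CAM map of the original image and the Grad-CAM map of the augmented image. This is a toy illustration in plain Python, not the authors' implementation; `hflip` stands in for the augmentation, and the small 2x2 maps are hand-written stand-ins for real Grad-CAM outputs of a ResNet.

```python
# Toy sketch of a Grad-CAM consistency loss (illustrative only; the paper
# computes real Grad-CAM maps from a ResNet, not these stand-in arrays).

def hflip(m):
    """Horizontal flip of a 2D attention map (the augmentation T)."""
    return [row[::-1] for row in m]

def consistency_loss(cam_orig, cam_aug, augment):
    """MSE between the augmented Grad-CAM of the original image and the
    Grad-CAM of the augmented image: L = mean (T(CAM(x)) - CAM(T(x)))^2."""
    t_cam = augment(cam_orig)
    n = sum(len(row) for row in t_cam)
    return sum((a - b) ** 2
               for row_t, row_a in zip(t_cam, cam_aug)
               for a, b in zip(row_t, row_a)) / n

# If the model's attention is equivariant to the augmentation, the loss is 0.
cam_x = [[0.1, 0.9], [0.2, 0.8]]       # Grad-CAM of the original image
cam_tx_good = hflip(cam_x)             # perfectly consistent attention
cam_tx_bad = [[0.5, 0.5], [0.5, 0.5]]  # inconsistent (diffuse) attention

print(consistency_loss(cam_x, cam_tx_good, hflip))  # prints 0.0
print(consistency_loss(cam_x, cam_tx_bad, hflip))
```

Minimizing this term alongside the usual supervised loss encourages the model to attend to the same image regions regardless of the augmentation, which is the generalization effect the abstract claims.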
Related papers
- Uncertainty-aware Sampling for Long-tailed Semi-supervised Learning [89.98353600316285]
We introduce uncertainty into the modeling process for pseudo-label sampling, taking into account that the model performance on the tailed classes varies over different training stages.
This approach allows the model to perceive the uncertainty of pseudo-labels at different training stages, thereby adaptively adjusting the selection thresholds for different classes.
Compared to baselines such as FixMatch, UDTS achieves an accuracy increase of at least approximately 5.26%, 1.75%, 9.96%, and 1.28% on the natural scene image datasets.
arXiv Detail & Related papers (2024-01-09T08:59:39Z)
- Exploring the Limits of Deep Image Clustering using Pretrained Models [1.1060425537315088]
We present a methodology that learns to classify images without labels by leveraging pretrained feature extractors.
We propose a novel objective that learns associations between image features by introducing a variant of pointwise mutual information together with instance weighting.
arXiv Detail & Related papers (2023-03-31T08:56:29Z)
- Mix-Teaching: A Simple, Unified and Effective Semi-Supervised Learning Framework for Monocular 3D Object Detection [22.074959519526605]
Mix-Teaching is an effective semi-supervised learning framework that employs both labeled and unlabeled images during the training stage.
Mix-Teaching consistently improves MonoFlex and GUPNet by significant margins under various labeling ratios on KITTI dataset.
arXiv Detail & Related papers (2022-07-10T12:07:25Z)
- Jigsaw Clustering for Unsupervised Visual Representation Learning [68.09280490213399]
We propose a new jigsaw clustering pretext task in this paper.
Our method makes use of information from both intra- and inter-images.
It is even comparable to the contrastive learning methods when only half of training batches are used.
arXiv Detail & Related papers (2021-04-01T08:09:26Z)
- Joint Noise-Tolerant Learning and Meta Camera Shift Adaptation for Unsupervised Person Re-Identification [60.36551512902312]
Unsupervised person re-identification (re-ID) aims to learn discriminative models with unlabeled data.
One popular method is to obtain pseudo-labels by clustering and use them to optimize the model.
In this paper, we propose a unified framework to solve both problems: noisy pseudo-labels and camera shift.
arXiv Detail & Related papers (2021-03-08T09:13:06Z)
- Adaptive Consistency Regularization for Semi-Supervised Transfer Learning [31.66745229673066]
We consider semi-supervised learning and transfer learning jointly, leading to a more practical and competitive paradigm.
To better exploit the value of both pre-trained weights and unlabeled target examples, we introduce adaptive consistency regularization.
Our proposed adaptive consistency regularization outperforms state-of-the-art semi-supervised learning techniques such as Pseudo Label, Mean Teacher, and MixMatch.
arXiv Detail & Related papers (2021-03-03T05:46:39Z)
- CoMatch: Semi-supervised Learning with Contrastive Graph Regularization [86.84486065798735]
CoMatch is a new semi-supervised learning method that unifies dominant approaches.
It achieves state-of-the-art performance on multiple datasets.
arXiv Detail & Related papers (2020-11-23T02:54:57Z)
- Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning [108.999497144296]
Recently advanced unsupervised learning approaches use a Siamese-like framework to compare two "views" from the same image for learning representations.
This work aims to bring the concept of distance in label space into unsupervised learning and make the model aware of the soft degree of similarity between positive or negative pairs.
Despite its conceptual simplicity, we show empirically that with the proposed solution, Unsupervised image mixtures (Un-Mix), we can learn subtler, more robust and generalized representations from the transformed input and corresponding new label space.
arXiv Detail & Related papers (2020-03-11T17:59:04Z)
- FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence [93.91751021370638]
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance.
In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling.
Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images.
arXiv Detail & Related papers (2020-01-21T18:32:27Z)
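The FixMatch recipe summarized in the last entry (generate a pseudo-label from the weakly-augmented view, then train the strongly-augmented view against it only when confidence clears a threshold) can be sketched as follows. This is a toy illustration, not the reference implementation: the threshold value `TAU`, the hand-written probability lists, and the helper names are invented for the example.

```python
# Toy sketch of FixMatch-style confidence-thresholded pseudo-labeling
# (illustrative only; real FixMatch works on model logits for the weakly
# and strongly augmented views of each unlabeled image).
import math

TAU = 0.95  # confidence threshold (a common FixMatch setting)

def pseudo_label(weak_probs):
    """Keep a pseudo-label only if the max class probability reaches TAU."""
    conf = max(weak_probs)
    return weak_probs.index(conf) if conf >= TAU else None

def unlabeled_loss(weak_probs, strong_probs):
    """Cross-entropy of the strong-view prediction against the weak-view
    pseudo-label; the example is masked out (loss 0) if the label is rejected."""
    label = pseudo_label(weak_probs)
    if label is None:
        return 0.0
    return -math.log(strong_probs[label])

# Confident weak prediction -> the pseudo-label is used for training.
print(unlabeled_loss([0.97, 0.02, 0.01], [0.90, 0.05, 0.05]))
# Unconfident weak prediction -> the example contributes no loss.
print(unlabeled_loss([0.50, 0.30, 0.20], [0.90, 0.05, 0.05]))  # prints 0.0
```

The confidence mask is what distinguishes this from plain pseudo-labeling, and it is also the mechanism that UDTS (the first related paper above) extends with per-class adaptive thresholds.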
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.