Adaptive Consistency Regularization for Semi-Supervised Transfer
Learning
- URL: http://arxiv.org/abs/2103.02193v1
- Date: Wed, 3 Mar 2021 05:46:39 GMT
- Title: Adaptive Consistency Regularization for Semi-Supervised Transfer
Learning
- Authors: Abulikemu Abuduweili, Xingjian Li, Humphrey Shi, Cheng-Zhong Xu,
Dejing Dou
- Abstract summary: We consider semi-supervised learning and transfer learning jointly, leading to a more practical and competitive paradigm.
To better exploit the value of both pre-trained weights and unlabeled target examples, we introduce adaptive consistency regularization.
Our proposed adaptive consistency regularization outperforms state-of-the-art semi-supervised learning techniques such as Pseudo Label, Mean Teacher, and MixMatch.
- Score: 31.66745229673066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While recent studies on semi-supervised learning have shown remarkable
progress in leveraging both labeled and unlabeled data, most of them presume a
basic setting in which the model is randomly initialized. In this work, we consider
semi-supervised learning and transfer learning jointly, leading to a more
practical and competitive paradigm that can utilize both powerful pre-trained
models from the source domain and labeled/unlabeled data in the target
domain. To better exploit the value of both pre-trained weights and unlabeled
target examples, we introduce adaptive consistency regularization that consists
of two complementary components: Adaptive Knowledge Consistency (AKC) on the
examples between the source and target model, and Adaptive Representation
Consistency (ARC) on the target model between labeled and unlabeled examples.
Examples involved in the consistency regularization are adaptively selected
according to their potential contributions to the target task. We conduct
extensive experiments on several popular benchmarks including CUB-200-2011, MIT
Indoor-67, and MURA, by fine-tuning the ImageNet pre-trained ResNet-50 model.
Results show that our proposed adaptive consistency regularization outperforms
state-of-the-art semi-supervised learning techniques such as Pseudo Label, Mean
Teacher, and MixMatch. Moreover, our algorithm is orthogonal to existing
methods and thus able to gain additional improvements on top of MixMatch and
FixMatch. Our code is available at
https://github.com/SHI-Labs/Semi-Supervised-Transfer-Learning.
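The two regularizers described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy rendering, not the authors' implementation (see their repository for that): here AKC is approximated as a KL divergence between source- and target-model predictions, restricted to examples where the source model is confident, and ARC as a distance between mean representations of labeled and unlabeled target batches. The confidence-threshold selection rule, the function names, and the mean-feature distance are all illustrative stand-ins for the paper's adaptive selection criteria.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over logits."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def akc_loss(src_logits, tgt_logits, conf_threshold=0.5):
    """Adaptive Knowledge Consistency (toy sketch): KL(source || target)
    averaged over examples the source model is confident about.
    The threshold-based selection is a hypothetical simplification."""
    p_src = softmax(src_logits)
    p_tgt = softmax(tgt_logits)
    mask = p_src.max(axis=1) >= conf_threshold  # adaptive example selection
    if not mask.any():
        return 0.0
    kl = (p_src * (np.log(p_src + 1e-12) - np.log(p_tgt + 1e-12))).sum(axis=1)
    return float(kl[mask].mean())

def arc_loss(labeled_feats, unlabeled_feats):
    """Adaptive Representation Consistency (toy sketch): squared distance
    between mean target-model representations of labeled and unlabeled
    batches, encouraging the two distributions to align."""
    mu_l = labeled_feats.mean(axis=0)
    mu_u = unlabeled_feats.mean(axis=0)
    return float(((mu_l - mu_u) ** 2).sum())
```

In training, these terms would be added to the supervised loss with their own weights; matching predictions (or representations) drive both losses toward zero.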
Related papers
- AdaSemiCD: An Adaptive Semi-Supervised Change Detection Method Based on Pseudo-Label Evaluation [0.0]
We present an adaptive dynamic semi-supervised learning method, AdaSemiCD, to improve the use of pseudo-labels and optimize the training process.
Experimental results from LEVIR-CD, WHU-CD, and CDD datasets validate the efficacy and universality of our proposed adaptive training framework.
arXiv Detail & Related papers (2024-11-12T12:35:34Z)
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z)
- Adaptive Weighted Co-Learning for Cross-Domain Few-Shot Learning [23.615250207134004]
Cross-domain few-shot learning (CDFSL) induces a very challenging adaptation problem.
We propose a simple Adaptive Weighted Co-Learning (AWCoL) method to address the CDFSL challenge.
Comprehensive experiments are conducted on multiple benchmark datasets and the empirical results demonstrate that the proposed method produces state-of-the-art CDFSL performance.
arXiv Detail & Related papers (2023-12-06T22:09:52Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Universal Semi-supervised Model Adaptation via Collaborative Consistency Training [92.52892510093037]
We introduce a realistic and challenging domain adaptation problem called Universal Semi-supervised Model Adaptation (USMA).
We propose a collaborative consistency training framework that regularizes the prediction consistency between two models.
Experimental results demonstrate the effectiveness of our method on several benchmark datasets.
arXiv Detail & Related papers (2023-07-07T08:19:40Z)
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- CLIPood: Generalizing CLIP to Out-of-Distributions [73.86353105017076]
Contrastive language-image pre-training (CLIP) models have shown impressive zero-shot ability, but the further adaptation of CLIP on downstream tasks undesirably degrades OOD performances.
We propose CLIPood, a fine-tuning method that can adapt CLIP models to OOD situations where both domain shifts and open classes may occur on unseen test data.
Experiments on diverse datasets with different OOD scenarios show that CLIPood consistently outperforms existing generalization techniques.
arXiv Detail & Related papers (2023-02-02T04:27:54Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)