An Iterative Co-Training Transductive Framework for Zero Shot Learning
- URL: http://arxiv.org/abs/2203.16041v1
- Date: Wed, 30 Mar 2022 04:08:44 GMT
- Title: An Iterative Co-Training Transductive Framework for Zero Shot Learning
- Authors: Bo Liu, Lihua Hu, Qiulei Dong, and Zhanyi Hu
- Abstract summary: We introduce an iterative co-training framework which contains two different base ZSL models and an exchanging module.
At each iteration, the two different ZSL models are co-trained to separately predict pseudo labels for the unseen-class samples.
Our framework can gradually boost the ZSL performance by fully exploiting the potential complementarity of the two models' classification capabilities.
- Score: 24.401200814880124
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the zero-shot learning (ZSL) community, it is generally recognized that
transductive learning performs better than inductive learning, as the unseen-class
samples are also used in its training stage. How to generate pseudo labels for
unseen-class samples and how to use such usually noisy pseudo labels are two
critical issues in transductive learning. In this work, we introduce an
iterative co-training framework which contains two different base ZSL models
and an exchanging module. At each iteration, the two different ZSL models are
co-trained to separately predict pseudo labels for the unseen-class samples,
and the exchanging module exchanges the predicted pseudo labels, then the
exchanged pseudo-labeled samples are added into the training sets for the next
iteration. In this way, our framework can gradually boost the ZSL performance by
fully exploiting the potential complementarity of the two models'
classification capabilities. In addition, our co-training framework is also
applied to the generalized ZSL (GZSL), in which a semantic-guided OOD detector
is proposed to pick out the most likely unseen-class samples before class-level
classification to alleviate the bias problem in GZSL. Extensive experiments on
three benchmarks show that our proposed methods could significantly outperform
about 31 state-of-the-art methods.
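To make the pipeline concrete, here is a minimal sketch of the iterative co-training loop described above, assuming two base ZSL models that expose scikit-learn-style `fit`/`predict_proba` interfaces; the confidence-selection rule and all helper names are illustrative, not the authors' code:

```python
import numpy as np

def select_confident(probs, frac):
    """Indices of samples whose max class probability is in the top `frac`."""
    conf = probs.max(axis=1)
    k = max(1, int(len(conf) * frac))
    return np.argsort(conf)[-k:]

def co_train(model_a, model_b, seen_X, seen_y, unseen_X, n_iters=5, frac=0.1):
    """Iterative co-training with pseudo-label exchange (sketch)."""
    d = seen_X.shape[1]
    extra_a = (np.empty((0, d)), np.empty(0, dtype=seen_y.dtype))
    extra_b = (np.empty((0, d)), np.empty(0, dtype=seen_y.dtype))
    for _ in range(n_iters):
        # Train each model on the seen data plus the pseudo-labeled
        # samples it received from the other model last iteration.
        model_a.fit(np.vstack([seen_X, extra_a[0]]),
                    np.concatenate([seen_y, extra_a[1]]))
        model_b.fit(np.vstack([seen_X, extra_b[0]]),
                    np.concatenate([seen_y, extra_b[1]]))
        # Each model separately predicts pseudo labels for unseen-class samples.
        pa = model_a.predict_proba(unseen_X)
        pb = model_b.predict_proba(unseen_X)
        ia, ib = select_confident(pa, frac), select_confident(pb, frac)
        # Exchanging module: A's confident pseudo labels feed B, and vice versa.
        extra_b = (unseen_X[ia], pa[ia].argmax(axis=1))
        extra_a = (unseen_X[ib], pb[ib].argmax(axis=1))
    return model_a, model_b
```

For the GZSL setting, the semantic-guided OOD detector can be pictured as a gate that flags likely unseen-class samples before class-level classification; a toy stand-in based on distance to seen-class semantic prototypes follows (the distance rule and threshold are assumptions, not the paper's detector):

```python
def looks_unseen(feat, seen_prototypes, threshold):
    """Flag a sample as likely unseen-class when it is far from every
    seen-class prototype (toy criterion for illustration only)."""
    return np.linalg.norm(seen_prototypes - feat, axis=1).min() > threshold
```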
Related papers
- Learning with Noisy Labels Using Collaborative Sample Selection and Contrastive Semi-Supervised Learning [76.00798972439004]
Collaborative Sample Selection (CSS) removes noisy samples from the identified clean set.
We introduce a co-training mechanism with a contrastive loss in semi-supervised learning.
arXiv Detail & Related papers (2023-10-24T05:37:20Z)
- SemiReward: A General Reward Model for Semi-supervised Learning [58.47299780978101]
Semi-supervised learning (SSL) has witnessed great progress with various improvements in the self-training framework with pseudo labeling.
The main challenge is how to distinguish high-quality pseudo labels under confirmation bias.
We propose a Semi-supervised Reward framework (SemiReward) that predicts reward scores to evaluate pseudo labels and filter out low-quality ones (a toy version of the idea is sketched below).
arXiv Detail & Related papers (2023-10-04T17:56:41Z)
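As a rough illustration of the reward idea: a small network scores each (feature, pseudo-label) pair, and only pseudo labels that clear a reward threshold are kept for self-training. The architecture and threshold below are assumptions, not SemiReward's actual design:

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a (feature, pseudo-label) pair in [0, 1] (illustrative)."""
    def __init__(self, feat_dim, n_classes, hidden=128):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, hidden)
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, feats, pseudo_labels):
        z = torch.cat([feats, self.label_emb(pseudo_labels)], dim=1)
        return self.scorer(z).squeeze(1)

def filter_by_reward(reward_model, feats, pseudo_labels, threshold=0.9):
    # Keep only pseudo labels whose predicted reward clears the threshold.
    with torch.no_grad():
        keep = reward_model(feats, pseudo_labels) >= threshold
    return feats[keep], pseudo_labels[keep]
```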
- Semi-Supervised Learning in the Few-Shot Zero-Shot Scenario [14.916971861796384]
Semi-Supervised Learning (SSL) is a framework that utilizes both labeled and unlabeled data to enhance model performance.
We propose a general approach to augment existing SSL methods, enabling them to handle situations where certain classes are missing.
Our experimental results reveal significant improvements in accuracy when compared to state-of-the-art SSL, open-set SSL, and open-world SSL methods.
arXiv Detail & Related papers (2023-08-27T14:25:07Z)
- Bi-directional Distribution Alignment for Transductive Zero-Shot Learning [48.80413182126543]
We propose a novel transductive zero-shot learning (TZSL) model called Bi-VAEGAN.
It largely mitigates the domain shift through a strengthened distribution alignment between the visual and auxiliary spaces.
In benchmark evaluations, Bi-VAEGAN achieves new state-of-the-art results under both the standard and generalized TZSL settings.
arXiv Detail & Related papers (2023-03-15T15:32:59Z)
- Zero-Shot Logit Adjustment [89.68803484284408]
Generalized Zero-Shot Learning (GZSL) is a semantic-descriptor-based learning technique.
Existing generation-based techniques enhance the generator's effect while neglecting the improvement of the classifier; in this paper, we propose a technique that instead targets the classifier (the generic recipe is sketched below).
Our experiments demonstrate that the proposed technique achieves state-of-the-art performance when combined with a basic generator, and it can improve various generative zero-shot learning frameworks.
arXiv Detail & Related papers (2022-04-25T17:54:55Z)
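The general logit-adjustment recipe the title points to (not necessarily this paper's exact formulation) offsets each class logit by a scaled log-prior, so that classes over-represented during training, here the seen classes, stop dominating predictions:

```python
import numpy as np

def adjust_logits(logits, class_prior, tau=1.0):
    """Generic logit adjustment: subtract tau * log(prior) per class.

    Classes the training distribution over-represents get their logits
    pushed down, easing the seen-class bias in GZSL. `tau` scales the
    strength of the correction.
    """
    return logits - tau * np.log(class_prior + 1e-12)

# Toy usage: seen classes 0-1 dominate the prior, unseen class 2 is rare.
logits = np.array([[2.0, 1.5, 1.8]])
prior = np.array([0.45, 0.45, 0.10])
print(adjust_logits(logits, prior).argmax(1))  # the unseen class can now win
```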
- A Strong Baseline for Semi-Supervised Incremental Few-Shot Learning [54.617688468341704]
Few-shot learning aims to learn models that generalize to novel classes with limited training samples.
We propose a novel paradigm containing two parts: (1) a well-designed meta-training algorithm for mitigating ambiguity between base and novel classes caused by unreliable pseudo labels and (2) a model adaptation mechanism to learn discriminative features for novel classes while preserving base knowledge using few labeled and all the unlabeled data.
arXiv Detail & Related papers (2021-10-21T13:25:52Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- Hardness Sampling for Self-Training Based Transductive Zero-Shot Learning [10.764160559530847]
Transductive zero-shot learning (T-ZSL), which can alleviate the domain shift problem in existing ZSL works, has received much attention recently.
We first empirically analyze the roles of unseen-class samples with different degrees of hardness in the training process.
We propose two hardness sampling approaches for selecting a subset of diverse and hard samples from a given unseen-class dataset (one plausible margin-based notion of hardness is sketched below).
arXiv Detail & Related papers (2021-06-01T06:55:19Z)
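One plausible reading of "hardness" is the margin between the two largest class scores; the sketch below ranks unseen-class samples by that margin and keeps a per-class quota for diversity (both choices are assumptions, not the paper's exact samplers):

```python
import numpy as np

def hardness_sample(probs, pseudo_labels, per_class=50):
    """Select hard samples (small top-2 margin) with a per-class quota."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]   # small margin -> hard, ambiguous sample
    chosen = []
    for c in np.unique(pseudo_labels):
        idx = np.where(pseudo_labels == c)[0]
        hardest_first = idx[np.argsort(margin[idx])]
        chosen.extend(hardest_first[:per_class])   # keep class diversity
    return np.array(chosen, dtype=int)
```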
- A Simple Approach for Zero-Shot Learning based on Triplet Distribution Embeddings [6.193231258199234]
ZSL aims to recognize unseen classes without labeled training data by exploiting semantic information.
Existing ZSL methods mainly use single vectors to represent the embeddings in the semantic space.
We address this issue by leveraging distribution embeddings (illustrated in the sketch below).
arXiv Detail & Related papers (2021-03-29T20:26:20Z)
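To illustrate the point-vector vs. distribution contrast: model each class as a diagonal Gaussian in the embedding space and assign a sample to the class under whose distribution it is most likely (the Gaussian form and log-density scoring are assumptions for illustration, not the paper's construction):

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    """Log-density of x under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def classify(x, class_means, class_vars):
    # Score x against each class *distribution* rather than a single point vector.
    scores = [gaussian_logpdf(x, m, v) for m, v in zip(class_means, class_vars)]
    return int(np.argmax(scores))
```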
- Towards Zero-Shot Learning with Fewer Seen Class Examples [41.751885300474925]
We present a meta-learning based generative model for zero-shot learning (ZSL).
This setup contrasts with the conventional ZSL approaches, where training typically assumes the availability of a sufficiently large number of training examples from each of the seen classes.
We conduct extensive experiments and ablation studies on four benchmark datasets of ZSL and observe that the proposed model outperforms state-of-the-art approaches by a significant margin when the number of examples per seen class is very small.
arXiv Detail & Related papers (2020-11-14T11:58:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.