Hardness Sampling for Self-Training Based Transductive Zero-Shot Learning
- URL: http://arxiv.org/abs/2106.00264v1
- Date: Tue, 1 Jun 2021 06:55:19 GMT
- Title: Hardness Sampling for Self-Training Based Transductive Zero-Shot Learning
- Authors: Liu Bo, Qiulei Dong, Zhanyi Hu
- Abstract summary: Transductive zero-shot learning (T-ZSL), which can alleviate the domain-shift problem in existing ZSL works, has received much attention recently.
We first empirically analyze the roles of unseen-class samples with different degrees of hardness in the training process.
We propose two hardness sampling approaches for selecting a subset of diverse and hard samples from a given unseen-class dataset.
- Score: 10.764160559530847
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Transductive zero-shot learning (T-ZSL), which can alleviate the domain-shift problem in existing ZSL works, has received much attention recently. However, an open problem in T-ZSL still remains: how to effectively make use of unseen-class samples for training. To address this problem, we first empirically analyze the roles that unseen-class samples with different degrees of hardness play in the training process, based on the uneven-prediction phenomenon found in many ZSL methods, arriving at three observations. Then, we propose
two hardness sampling approaches for selecting a subset of diverse and hard
samples from a given unseen-class dataset according to these observations. The
first identifies samples based on the class-level frequency of the model predictions, while the second enhances the former by normalizing the class frequency via an approximate class prior estimated by a proposed prior-estimation algorithm. Finally, we design a new Self-Training framework with
Hardness Sampling for T-ZSL, called STHS, in which an arbitrary inductive ZSL method can be seamlessly embedded and iteratively trained on unseen-class samples selected by the hardness sampling approach. We introduce
two typical ZSL methods into the STHS framework and extensive experiments
demonstrate that the derived T-ZSL methods outperform many state-of-the-art
methods on three public benchmarks. In addition, we note that the unseen-class dataset is used separately for training in some existing transductive generalized ZSL (T-GZSL) methods, which does not strictly conform to the GZSL setting. Hence, we suggest a stricter T-GZSL data setting and establish a competitive baseline on it by introducing the proposed STHS framework to T-GZSL.
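Read literally, the abstract suggests a simple loop: score the unlabeled unseen-class pool, select a hard and diverse subset via the class-level frequency of the model's predictions (optionally normalized by an estimated class prior), pseudo-label it, and retrain. The NumPy sketch below illustrates only that reading; the hardness heuristic (treating samples assigned to rarely-predicted classes as hard), the `ratio` parameter, and the `retrain` hook are illustrative assumptions, not the authors' exact STHS algorithm.

```python
# Minimal sketch of hardness sampling + self-training as described in the
# abstract. Assumption: "hard" samples are those the model assigns to
# rarely-predicted classes (one reading of the uneven-prediction phenomenon).
import numpy as np

def hardness_sample(logits, class_prior=None, ratio=0.5):
    """Select a hard, class-diverse subset of unlabeled unseen-class samples.

    logits      -- (N, C) model scores for N samples over C unseen classes
    class_prior -- optional (C,) estimated unseen-class prior; when given,
                   prediction frequencies are normalized by it (the paper's
                   second, prior-normalized variant)
    ratio       -- fraction of samples kept in this selection round
    """
    preds = logits.argmax(axis=1)                    # pseudo-labels
    freq = np.bincount(preds, minlength=logits.shape[1]).astype(float)
    freq /= max(freq.sum(), 1.0)                     # class-level prediction frequency
    if class_prior is not None:
        freq = freq / class_prior                    # normalize by approximate prior
    hardness = 1.0 / (freq[preds] + 1e-8)            # rarely-predicted class -> hard
    k = max(1, int(ratio * len(preds)))
    idx = np.argsort(-hardness)[:k]                  # keep the k hardest samples
    return idx, preds[idx]

def sths_loop(score_fn, retrain, unseen_x, class_prior=None, rounds=5):
    """Illustrative self-training loop: any inductive ZSL scorer can be
    plugged in; `retrain` is a hypothetical hook that fits the model on the
    pseudo-labeled subset and returns an updated scorer."""
    for _ in range(rounds):
        idx, pseudo = hardness_sample(score_fn(unseen_x), class_prior)
        score_fn = retrain(unseen_x[idx], pseudo)
    return score_fn
```

In the paper the class prior is itself estimated from the data by the proposed prior-estimation algorithm; the sketch simply accepts it as a precomputed array.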
Related papers
- Erasing the Bias: Fine-Tuning Foundation Models for Semi-Supervised Learning [4.137391543972184]
Semi-supervised learning (SSL) has witnessed remarkable progress, resulting in numerous method variations.
In this paper, we present a novel SSL approach named FineSSL that addresses the limitations of existing methods by adapting pre-trained foundation models.
We demonstrate that FineSSL sets a new state of the art for SSL on multiple benchmark datasets, reduces the training cost by over six times, and can seamlessly integrate various fine-tuning and modern SSL algorithms.
arXiv Detail & Related papers (2024-05-20T03:33:12Z)
- An Iterative Co-Training Transductive Framework for Zero Shot Learning [24.401200814880124]
We introduce an iterative co-training framework which contains two different base ZSL models and an exchanging module.
At each iteration, the two different ZSL models are co-trained to separately predict pseudo labels for the unseen-class samples.
Our framework can gradually boost the ZSL performance by fully exploiting the potential complementarity of the two models' classification capabilities.
arXiv Detail & Related papers (2022-03-30T04:08:44Z)
- Pseudo-Labeled Auto-Curriculum Learning for Semi-Supervised Keypoint Localization [88.74813798138466]
Localizing keypoints of an object is a basic visual problem.
Supervised learning of a keypoint localization network often requires a large amount of data.
We propose to automatically select reliable pseudo-labeled samples with a series of dynamic thresholds.
arXiv Detail & Related papers (2022-01-21T09:51:58Z)
- A Strong Baseline for Semi-Supervised Incremental Few-Shot Learning [54.617688468341704]
Few-shot learning aims to learn models that generalize to novel classes with limited training samples.
We propose a novel paradigm containing two parts: (1) a well-designed meta-training algorithm for mitigating ambiguity between base and novel classes caused by unreliable pseudo labels and (2) a model adaptation mechanism to learn discriminative features for novel classes while preserving base knowledge using few labeled and all the unlabeled data.
arXiv Detail & Related papers (2021-10-21T13:25:52Z)
- Generative Zero-Shot Learning for Semantic Segmentation of 3D Point Cloud [79.99653758293277]
We present the first generative approach for both Zero-Shot Learning (ZSL) and Generalized ZSL (GZSL) on 3D data.
We show that it reaches or outperforms the state of the art on ModelNet40 classification for both inductive ZSL and inductive GZSL.
Our experiments show that our method outperforms strong baselines, which we additionally propose for this task.
arXiv Detail & Related papers (2021-08-13T13:29:27Z)
- Self-Supervised Learning of Graph Neural Networks: A Unified Review [50.71341657322391]
Self-supervised learning (SSL) is emerging as a new paradigm for making use of large amounts of unlabeled samples.
We provide a unified review of different ways of training graph neural networks (GNNs) using SSL.
Our treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms.
arXiv Detail & Related papers (2021-02-22T03:43:45Z)
- End-to-end Generative Zero-shot Learning via Few-shot Learning [76.9964261884635]
State-of-the-art approaches to Zero-Shot Learning (ZSL) train generative nets to synthesize examples conditioned on the provided metadata.
We introduce an end-to-end generative ZSL framework that uses such an approach as a backbone and feeds its synthesized output to a Few-Shot Learning algorithm.
arXiv Detail & Related papers (2021-02-08T17:35:37Z)
- On Data-Augmentation and Consistency-Based Semi-Supervised Learning [77.57285768500225]
Recently proposed consistency-based Semi-Supervised Learning (SSL) methods have advanced the state of the art in several SSL tasks.
Despite these advances, the understanding of these methods is still relatively limited.
arXiv Detail & Related papers (2021-01-18T10:12:31Z)