GCT: Graph Co-Training for Semi-Supervised Few-Shot Learning
- URL: http://arxiv.org/abs/2203.07738v4
- Date: Tue, 19 Mar 2024 16:03:09 GMT
- Title: GCT: Graph Co-Training for Semi-Supervised Few-Shot Learning
- Authors: Rui Xu, Lei Xing, Shuai Shao, Lifei Zhao, Baodi Liu, Weifeng Liu, Yicong Zhou
- Abstract summary: We propose a novel label prediction method, Isolated Graph Learning (IGL).
IGL introduces the Laplacian operator to encode the raw data into graph space, which helps reduce the dependence on features during classification.
We further propose Graph Co-Training (GCT), which tackles the feature-extractor-maladaptive problem from a multi-modal fusion perspective.
- Score: 46.07923908746946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning (FSL), which aims to resolve the problem of data scarcity, has attracted considerable attention in recent years. A popular FSL framework contains two phases: (i) the pre-train phase employs the base data to train a CNN-based feature extractor; (ii) the meta-test phase applies the frozen feature extractor to novel data (whose categories differ from those of the base data) and designs a classifier for recognition. To correct the few-shot data distribution, researchers propose Semi-Supervised Few-Shot Learning (SSFSL) by introducing unlabeled data. Although SSFSL has been shown to achieve outstanding performance in the FSL community, there still exists a fundamental problem: the pre-trained feature extractor cannot adapt flawlessly to the novel data due to the cross-category setting. As a result, large amounts of noise are introduced into the novel features. We dub this the Feature-Extractor-Maladaptive (FEM) problem. To tackle FEM, we make two efforts in this paper. First, we propose a novel label prediction method, Isolated Graph Learning (IGL). IGL introduces the Laplacian operator to encode the raw data into graph space, which helps reduce the dependence on features during classification, and then projects the graph representation to label space for prediction. The key point is that IGL can weaken the negative influence of noise from the feature-representation perspective, and it can complete the training and testing procedures independently, which makes it well suited to SSFSL. Second, we propose Graph Co-Training (GCT) to tackle this challenge from a multi-modal fusion perspective by extending the proposed IGL to the co-training framework. GCT is a semi-supervised method that exploits unlabeled samples with two modal features to mutually strengthen the IGL classifiers.
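The abstract's two ingredients map onto well-known building blocks: the IGL step can be read as Laplacian-regularized label propagation in the spirit of F = (I + gamma*L)^(-1) Y, and GCT as a two-view co-training loop around it, where each view pseudo-labels its most confident unlabeled samples for the other. The sketch below illustrates that general recipe only, not the authors' exact formulation; the function names (`knn_affinity`, `igl_predict`, `graph_co_train`) and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def knn_affinity(X, k=10, sigma=1.0):
    """Symmetric k-NN affinity matrix with Gaussian edge weights."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]              # k nearest neighbors, self excluded
    W = np.zeros_like(d2)
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = np.exp(-d2[rows, idx.ravel()] / (2 * sigma ** 2))
    return np.maximum(W, W.T)                             # symmetrize

def igl_predict(X, Y, gamma=1.0, k=10):
    """Laplacian-regularized label propagation: solve (I + gamma*L) F = Y."""
    W = knn_affinity(X, k)
    d = np.maximum(W.sum(1), 1e-12)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]  # normalized Laplacian
    return np.linalg.solve(np.eye(len(X)) + gamma * L, Y)

def graph_co_train(views, y, labeled_mask, rounds=3, per_round=5, gamma=1.0, k=10):
    """Two-view co-training: each view's most confident unlabeled predictions
    become pseudo-labels for the other view's labeled set."""
    n, n_cls = len(y), int(y.max()) + 1
    masks = [labeled_mask.copy() for _ in views]
    pseudo = [np.where(labeled_mask, y, -1) for _ in views]
    for _ in range(rounds):
        for a, b in ((0, 1), (1, 0)):
            Y = np.zeros((n, n_cls))
            Y[masks[a], pseudo[a][masks[a]]] = 1.0
            F = igl_predict(views[a], Y, gamma, k)
            conf = F.max(1)
            conf[masks[b]] = -np.inf                      # only samples view b lacks
            picks = np.argsort(-conf)[:per_round]
            masks[b][picks] = True
            pseudo[b][picks] = F[picks].argmax(1)
    scores = 0.0                                          # fuse both views' final scores
    for a in range(len(views)):
        Y = np.zeros((n, n_cls))
        Y[masks[a], pseudo[a][masks[a]]] = 1.0
        scores = scores + igl_predict(views[a], Y, gamma, k)
    return scores.argmax(1)
```

Because the classifier is a closed-form linear solve over the graph, training and testing need no gradient steps, which is one way to read the abstract's claim that IGL completes its procedures independently of the frozen feature extractor.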
Related papers
- ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs [36.749959232724514]
ZeroG is a new framework tailored to enable cross-dataset generalization.
We address the inherent challenges such as feature misalignment, mismatched label spaces, and negative transfer.
We propose a prompt-based subgraph sampling module that enriches the semantic information and structure information of extracted subgraphs.
arXiv Detail & Related papers (2024-02-17T09:52:43Z) - Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI)
We propose D$2$PT, a dual-channel GNN framework that performs long-range information propagation on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z) - A Strong Baseline for Semi-Supervised Incremental Few-Shot Learning [54.617688468341704]
Few-shot learning aims to learn models that generalize to novel classes with limited training samples.
We propose a novel paradigm containing two parts: (1) a well-designed meta-training algorithm for mitigating ambiguity between base and novel classes caused by unreliable pseudo labels and (2) a model adaptation mechanism to learn discriminative features for novel classes while preserving base knowledge using few labeled and all the unlabeled data.
arXiv Detail & Related papers (2021-10-21T13:25:52Z) - MHFC: Multi-Head Feature Collaboration for Few-Shot Learning [17.699793591135904]
Few-shot learning aims to address the data-scarce problem.
We propose Multi-Head Feature Collaboration (MHFC) algorithm, which attempts to project the multi-head features to a unified space.
We evaluate the proposed method on five benchmark datasets and achieve significant improvements of 2.1%-7.8% compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-09-16T08:09:35Z) - Federated Graph Learning -- A Position Paper [36.424411232612606]
Federated learning (FL) is an emerging technique that can collaboratively train a shared model while keeping the data decentralized.
We term it as federated graph learning (FGL)
Considering how graph data are distributed among clients, we propose four types of FGL: inter-graph FL, intra-graph FL (further divided into horizontal and vertical settings), and graph-structured FL.
arXiv Detail & Related papers (2021-05-24T05:39:24Z) - Revisiting Few-shot Relation Classification: Evaluation Data and Classification Schemes [57.34346419239118]
We propose a novel methodology for deriving more realistic few-shot test data from available datasets for supervised RC.
This yields a new challenging benchmark for FSL RC, on which state-of-the-art models show poor performance.
We propose a novel classification scheme, in which the NOTA category is represented as learned vectors.
arXiv Detail & Related papers (2021-04-17T08:16:49Z) - PTN: A Poisson Transfer Network for Semi-supervised Few-shot Learning [21.170726615606185]
We propose a Poisson Transfer Network (PTN) to mine the unlabeled information for semi-supervised few-shot learning.
Our scheme implicitly learns the novel-class embeddings to alleviate the over-fitting problem on the few labeled data.
arXiv Detail & Related papers (2020-12-20T04:44:37Z) - Hybrid Consistency Training with Prototype Adaptation for Few-Shot Learning [11.873143649261362]
Few-Shot Learning aims to improve a model's generalization capability in low data regimes.
Recent FSL works have made steady progress via metric learning, meta learning, representation learning, etc.
arXiv Detail & Related papers (2020-11-19T19:51:33Z) - Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning [59.58381904522967]
We propose a novel embedding based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z) - Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning [64.98816284854067]
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data to the remaining massive unlabeled data via a graph.
A novel GCN-based SSL algorithm is presented in this paper to enrich the supervision signals by utilizing both data similarities and graph structure.
arXiv Detail & Related papers (2020-09-15T13:59:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.