Hybrid Graph Neural Networks for Few-Shot Learning
- URL: http://arxiv.org/abs/2112.06538v1
- Date: Mon, 13 Dec 2021 10:20:15 GMT
- Title: Hybrid Graph Neural Networks for Few-Shot Learning
- Authors: Tianyuan Yu, Sen He, Yi-Zhe Song, Tao Xiang
- Abstract summary: Graph neural networks (GNNs) have been used to tackle the few-shot learning problem.
Under the inductive setting, existing GNN-based methods are less competitive.
We propose a novel hybrid GNN model consisting of two GNNs, an instance GNN and a prototype GNN.
- Score: 85.93495480949079
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) have been used to tackle the few-shot learning
(FSL) problem and have shown great potential under the transductive setting.
However, under the inductive setting, existing GNN-based methods are less
competitive. This is because they use an instance GNN as a label
propagation/classification module that is jointly meta-learned with a feature
embedding network. This design is problematic because the classifier needs to
adapt quickly to new tasks while the embedding does not. To overcome this
problem, in this paper we propose a novel hybrid GNN (HGNN) model consisting of
two GNNs, an instance GNN and a prototype GNN. Instead of performing label
propagation, they act as feature embedding adaptation modules that quickly
adapt the meta-learned feature embedding to new tasks. Importantly, they are
designed to deal with a fundamental yet often neglected challenge in FSL: with
only a handful of shots per class, any few-shot classifier is sensitive to
badly sampled shots, which are either outliers or cause the class distributions
to overlap. Our two GNNs are designed to address these two types of poorly
sampled shots respectively, and their complementarity is exploited in the
hybrid GNN model. Extensive experiments show that our HGNN obtains new
state-of-the-art results on three FSL benchmarks.
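The abstract describes the two adaptation modules only at a high level. As a rough illustration, here is a minimal PyTorch sketch of a few-shot episode in which an instance-level GNN refines the support embeddings and a prototype-level GNN refines the class prototypes; the module names, the cosine-similarity graph, and the single residual layer per module are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the hybrid-GNN idea: two graph modules that adapt a
# frozen meta-learned embedding to a new episode. All design details
# below (similarity graph, layer shapes) are assumptions for clarity.
import torch
import torch.nn as nn
import torch.nn.functional as F


def similarity_adjacency(x: torch.Tensor) -> torch.Tensor:
    """Dense, row-normalized adjacency from pairwise cosine similarity."""
    sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)
    return torch.softmax(sim, dim=-1)  # each row sums to 1


class GraphLayer(nn.Module):
    """One message-passing step: aggregate neighbors, then transform."""

    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        adj = similarity_adjacency(x)
        return F.relu(self.lin(adj @ x)) + x  # residual keeps the embedding


class HybridAdapter(nn.Module):
    """Instance GNN smooths badly sampled (outlier) support shots toward
    their neighbors; prototype GNN re-positions class prototypes whose
    distributions overlap."""

    def __init__(self, dim: int):
        super().__init__()
        self.instance_gnn = GraphLayer(dim)
        self.prototype_gnn = GraphLayer(dim)

    def forward(self, support, labels, query, n_way: int):
        refined = self.instance_gnn(support)  # per-shot adaptation
        protos = torch.stack([refined[labels == c].mean(0)
                              for c in range(n_way)])
        protos = self.prototype_gnn(protos)   # per-class adaptation
        return -torch.cdist(query, protos)    # nearest-prototype logits


# Toy 5-way 5-shot episode over pre-computed 64-d features.
n_way, k_shot, dim = 5, 5, 64
support = torch.randn(n_way * k_shot, dim)
labels = torch.arange(n_way).repeat_interleave(k_shot)
query = torch.randn(15, dim)
logits = HybridAdapter(dim)(support, labels, query, n_way)
print(logits.shape)  # torch.Size([15, 5])
```

In this toy setup, the instance-level pass pulls an outlier shot toward its within-episode neighbors before prototypes are computed, while the prototype-level pass can move prototypes of overlapping classes apart, matching the two failure modes the abstract names.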
Related papers
- E2GNN: Efficient Graph Neural Network Ensembles for Semi-Supervised Classification [30.55931541782854]
This work studies ensemble learning for graph neural networks (GNNs) under the popular semi-supervised setting.
We propose an efficient ensemble learner, E2GNN, which assembles multiple GNNs in a learnable way by leveraging both labeled and unlabeled nodes.
Comprehensive experiments over both transductive and inductive settings, across different GNN backbones and 8 benchmark datasets, demonstrate the superiority of E2GNN.
arXiv Detail & Related papers (2024-05-06T12:11:46Z)
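The E2GNN summary only states that multiple GNNs are assembled "in a learnable way" using labeled and unlabeled nodes, so the following is a hedged sketch of one possible combiner: softmax-weighted member logits trained with cross-entropy on labeled nodes plus a small entropy penalty on unlabeled ones. The weighting scheme and the unlabeled-node loss are assumptions, not E2GNN's actual design.

```python
# Hypothetical learnable ensemble over pre-trained GNN outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableEnsemble(nn.Module):
    def __init__(self, n_members: int):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(n_members))  # one weight per GNN

    def forward(self, member_logits: torch.Tensor) -> torch.Tensor:
        # member_logits: (n_members, n_nodes, n_classes)
        alpha = torch.softmax(self.w, dim=0)           # convex combination
        return torch.einsum("m,mnc->nc", alpha, member_logits)


# Fake pre-computed outputs of 3 trained GNN backbones on 100 nodes.
member_logits = torch.randn(3, 100, 7)
labels = torch.randint(0, 7, (100,))
labeled = torch.zeros(100, dtype=torch.bool)
labeled[:20] = True                                    # 20% labeled nodes

ens = LearnableEnsemble(n_members=3)
opt = torch.optim.Adam(ens.parameters(), lr=0.1)
for _ in range(50):
    logits = ens(member_logits)
    probs = torch.softmax(logits[~labeled], dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1).mean()
    loss = F.cross_entropy(logits[labeled], labels[labeled]) + 0.1 * entropy
    opt.zero_grad()
    loss.backward()
    opt.step()
print(torch.softmax(ens.w, 0))  # learned member weights
```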
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework that unifies CNNs and GNNs via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs containing dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
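The CNN2GNN summary mentions unification via distillation; as a hedged illustration, the sketch below shows the standard knowledge-distillation objective one could use to train a small graph-based student against a large CNN teacher. The temperature, loss weighting, and direct logit matching are generic KD conventions, not necessarily the paper's bridge design.

```python
# Generic cross-architecture distillation loss (teacher CNN -> student GNN).
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    """KL divergence to the teacher's temperature-softened distribution,
    blended with cross-entropy to the ground-truth labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # classic T^2 gradient rescaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard


# Fake batch: 64 samples, 100 classes (Mini-ImageNet-sized output space).
teacher_logits = torch.randn(64, 100)                      # from a deep CNN
student_logits = torch.randn(64, 100, requires_grad=True)  # from a 2-layer GNN
labels = torch.randint(0, 100, (64,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```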
- Multicoated and Folded Graph Neural Networks with Strong Lottery Tickets [3.0894823679470087]
This paper introduces the Multi-Stage Folding and Unshared Masks methods to expand the search space in terms of both architecture and parameters.
The resulting networks achieve high sparsity, competitive performance, and high memory efficiency, with up to a 98.7% reduction in memory, demonstrating their suitability for energy-efficient graph processing.
arXiv Detail & Related papers (2023-12-06T02:16:44Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, Label Deconvolution (LD), which alleviates learning bias via a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on the Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs must learn latent patterns from a limited amount of training data in order to perform inference on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, with improved accuracy, robustness, and resistance to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs [71.93227401463199]
This paper pinpoints the major source of GNNs' performance gain as their intrinsic capability, by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient to train.
arXiv Detail & Related papers (2022-12-18T08:17:32Z)
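The PMLP idea above reduces to a simple trick that is easy to sketch: train the network as a plain MLP on node features alone, then insert parameter-free message passing between the same layers at inference time. The two-layer shape and mean-neighbor propagation below are simplifying assumptions.

```python
# Sketch of a P(ropagational)MLP: an MLP at training time, a
# message-passing network (with the same weights) at test time.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PMLP(nn.Module):
    def __init__(self, d_in: int, d_hid: int, n_cls: int):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hid)
        self.fc2 = nn.Linear(d_hid, n_cls)

    def forward(self, x, adj=None):
        # adj is None during training -> plain 2-layer MLP.
        # A row-normalized adjacency at inference -> message passing.
        if adj is not None:
            x = adj @ x                # parameter-free neighbor averaging
        x = F.relu(self.fc1(x))
        if adj is not None:
            x = adj @ x
        return self.fc2(x)


# Toy graph: 50 nodes, random features and random symmetric edges.
x = torch.randn(50, 16)
y = torch.randint(0, 3, (50,))
adj = (torch.rand(50, 50) < 0.1).float()
adj = (adj + adj.T).clamp(max=1) + torch.eye(50)  # symmetrize + self-loops
adj = adj / adj.sum(1, keepdim=True)              # row-normalize

model = PMLP(16, 32, 3)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):                              # train as a pure MLP
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
logits = model(x, adj)                            # evaluate as a GNN
print(logits.argmax(-1)[:10])
```

Because the propagation step has no parameters, training costs the same as an MLP, which is the efficiency gain the summary refers to.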
- Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a type of deep learning model that is trained on graphs and has been successfully applied in various domains.
Despite their effectiveness, it is still challenging for GNNs to scale efficiently to large graphs.
As a remedy, distributed computing has become a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z)
- Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN that tackles these challenges by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z)
- Self-Enhanced GNN: Improving Graph Neural Networks Using Model Outputs [20.197085398581397]
Graph neural networks (GNNs) have received much attention recently because of their excellent performance on graph-based tasks.
We propose self-enhanced GNN (SEG), which improves the quality of the input data using the outputs of existing GNN models.
SEG consistently improves the performance of well-known GNN models such as GCN, GAT and SGC across different datasets.
arXiv Detail & Related papers (2020-02-18T12:27:16Z)
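As a concrete but hedged reading of "improving the quality of the input data using the outputs of existing GNN models" in the SEG entry above, the sketch below enlarges the training set with confidently predicted pseudo-labels. This is only one plausible instantiation; SEG's actual mechanisms (which may also include topology updates) can differ.

```python
# Pseudo-label augmentation: promote confident model predictions on
# unlabeled nodes to training labels for the next training round.
import torch


def augment_labels(logits, labels, train_mask, threshold: float = 0.9):
    probs = torch.softmax(logits, dim=-1)
    conf, pseudo = probs.max(dim=-1)
    take = (~train_mask) & (conf > threshold)   # confident & unlabeled
    new_labels = labels.clone()
    new_labels[take] = pseudo[take]             # write pseudo-labels
    return new_labels, train_mask | take        # enlarged training mask


# Example with fake model outputs over 100 nodes and 5 classes.
logits = torch.randn(100, 5) * 3                # sharpened fake logits
labels = torch.randint(0, 5, (100,))
train_mask = torch.zeros(100, dtype=torch.bool)
train_mask[:10] = True
labels2, mask2 = augment_labels(logits, labels, train_mask)
print(int(train_mask.sum()), "->", int(mask2.sum()), "training nodes")
```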
This list is automatically generated from the titles and abstracts of the papers on this site.